Mar 18 08:46:08.405936 master-0 systemd[1]: Starting Kubernetes Kubelet... Mar 18 08:46:09.061362 master-0 kubenswrapper[3986]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 18 08:46:09.061362 master-0 kubenswrapper[3986]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Mar 18 08:46:09.061362 master-0 kubenswrapper[3986]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 18 08:46:09.061362 master-0 kubenswrapper[3986]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 18 08:46:09.061362 master-0 kubenswrapper[3986]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Mar 18 08:46:09.061362 master-0 kubenswrapper[3986]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Mar 18 08:46:09.063601 master-0 kubenswrapper[3986]: I0318 08:46:09.063290 3986 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 18 08:46:09.073241 master-0 kubenswrapper[3986]: W0318 08:46:09.073171 3986 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Mar 18 08:46:09.073241 master-0 kubenswrapper[3986]: W0318 08:46:09.073208 3986 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Mar 18 08:46:09.073241 master-0 kubenswrapper[3986]: W0318 08:46:09.073220 3986 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Mar 18 08:46:09.073241 master-0 kubenswrapper[3986]: W0318 08:46:09.073231 3986 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Mar 18 08:46:09.073241 master-0 kubenswrapper[3986]: W0318 08:46:09.073240 3986 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Mar 18 08:46:09.073241 master-0 kubenswrapper[3986]: W0318 08:46:09.073251 3986 feature_gate.go:330] unrecognized feature gate: OVNObservability Mar 18 08:46:09.073478 master-0 kubenswrapper[3986]: W0318 08:46:09.073261 3986 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Mar 18 08:46:09.073478 master-0 kubenswrapper[3986]: W0318 08:46:09.073271 3986 feature_gate.go:330] unrecognized feature gate: InsightsConfig Mar 18 08:46:09.073478 master-0 kubenswrapper[3986]: W0318 08:46:09.073281 3986 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Mar 18 08:46:09.073478 master-0 kubenswrapper[3986]: W0318 08:46:09.073291 3986 feature_gate.go:330] unrecognized feature gate: Example Mar 18 08:46:09.073478 master-0 kubenswrapper[3986]: W0318 08:46:09.073301 3986 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Mar 18 08:46:09.073478 master-0 kubenswrapper[3986]: W0318 08:46:09.073312 3986 feature_gate.go:330] unrecognized feature 
gate: PersistentIPsForVirtualization Mar 18 08:46:09.073478 master-0 kubenswrapper[3986]: W0318 08:46:09.073321 3986 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Mar 18 08:46:09.073478 master-0 kubenswrapper[3986]: W0318 08:46:09.073331 3986 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Mar 18 08:46:09.073478 master-0 kubenswrapper[3986]: W0318 08:46:09.073339 3986 feature_gate.go:330] unrecognized feature gate: NewOLM Mar 18 08:46:09.073478 master-0 kubenswrapper[3986]: W0318 08:46:09.073347 3986 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Mar 18 08:46:09.073478 master-0 kubenswrapper[3986]: W0318 08:46:09.073356 3986 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Mar 18 08:46:09.073478 master-0 kubenswrapper[3986]: W0318 08:46:09.073364 3986 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Mar 18 08:46:09.073478 master-0 kubenswrapper[3986]: W0318 08:46:09.073373 3986 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Mar 18 08:46:09.073478 master-0 kubenswrapper[3986]: W0318 08:46:09.073382 3986 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Mar 18 08:46:09.073478 master-0 kubenswrapper[3986]: W0318 08:46:09.073390 3986 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Mar 18 08:46:09.073478 master-0 kubenswrapper[3986]: W0318 08:46:09.073399 3986 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Mar 18 08:46:09.073478 master-0 kubenswrapper[3986]: W0318 08:46:09.073407 3986 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Mar 18 08:46:09.073478 master-0 kubenswrapper[3986]: W0318 08:46:09.073415 3986 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Mar 18 08:46:09.073478 master-0 kubenswrapper[3986]: W0318 08:46:09.073424 3986 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Mar 18 08:46:09.073478 master-0 
kubenswrapper[3986]: W0318 08:46:09.073432 3986 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Mar 18 08:46:09.074078 master-0 kubenswrapper[3986]: W0318 08:46:09.073441 3986 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Mar 18 08:46:09.074078 master-0 kubenswrapper[3986]: W0318 08:46:09.073449 3986 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Mar 18 08:46:09.074078 master-0 kubenswrapper[3986]: W0318 08:46:09.073465 3986 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Mar 18 08:46:09.074078 master-0 kubenswrapper[3986]: W0318 08:46:09.073473 3986 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Mar 18 08:46:09.074078 master-0 kubenswrapper[3986]: W0318 08:46:09.073482 3986 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Mar 18 08:46:09.074078 master-0 kubenswrapper[3986]: W0318 08:46:09.073491 3986 feature_gate.go:330] unrecognized feature gate: PinnedImages Mar 18 08:46:09.074078 master-0 kubenswrapper[3986]: W0318 08:46:09.073501 3986 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Mar 18 08:46:09.074078 master-0 kubenswrapper[3986]: W0318 08:46:09.073510 3986 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Mar 18 08:46:09.074078 master-0 kubenswrapper[3986]: W0318 08:46:09.073519 3986 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Mar 18 08:46:09.074078 master-0 kubenswrapper[3986]: W0318 08:46:09.073528 3986 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Mar 18 08:46:09.074078 master-0 kubenswrapper[3986]: W0318 08:46:09.073536 3986 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Mar 18 08:46:09.074078 master-0 kubenswrapper[3986]: W0318 08:46:09.073545 3986 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Mar 18 08:46:09.074078 master-0 kubenswrapper[3986]: W0318 08:46:09.073554 3986 
feature_gate.go:330] unrecognized feature gate: GatewayAPI Mar 18 08:46:09.074078 master-0 kubenswrapper[3986]: W0318 08:46:09.073562 3986 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Mar 18 08:46:09.074078 master-0 kubenswrapper[3986]: W0318 08:46:09.073571 3986 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Mar 18 08:46:09.074078 master-0 kubenswrapper[3986]: W0318 08:46:09.073579 3986 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Mar 18 08:46:09.074078 master-0 kubenswrapper[3986]: W0318 08:46:09.073588 3986 feature_gate.go:330] unrecognized feature gate: PlatformOperators Mar 18 08:46:09.074078 master-0 kubenswrapper[3986]: W0318 08:46:09.073597 3986 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Mar 18 08:46:09.074078 master-0 kubenswrapper[3986]: W0318 08:46:09.073607 3986 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Mar 18 08:46:09.074078 master-0 kubenswrapper[3986]: W0318 08:46:09.073615 3986 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Mar 18 08:46:09.074985 master-0 kubenswrapper[3986]: W0318 08:46:09.073623 3986 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Mar 18 08:46:09.074985 master-0 kubenswrapper[3986]: W0318 08:46:09.073632 3986 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Mar 18 08:46:09.074985 master-0 kubenswrapper[3986]: W0318 08:46:09.073641 3986 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Mar 18 08:46:09.074985 master-0 kubenswrapper[3986]: W0318 08:46:09.073649 3986 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Mar 18 08:46:09.074985 master-0 kubenswrapper[3986]: W0318 08:46:09.073658 3986 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Mar 18 08:46:09.074985 master-0 kubenswrapper[3986]: W0318 08:46:09.073666 3986 feature_gate.go:330] 
unrecognized feature gate: GCPClusterHostedDNS Mar 18 08:46:09.074985 master-0 kubenswrapper[3986]: W0318 08:46:09.073674 3986 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Mar 18 08:46:09.074985 master-0 kubenswrapper[3986]: W0318 08:46:09.073686 3986 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Mar 18 08:46:09.074985 master-0 kubenswrapper[3986]: W0318 08:46:09.073698 3986 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Mar 18 08:46:09.074985 master-0 kubenswrapper[3986]: W0318 08:46:09.073708 3986 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Mar 18 08:46:09.074985 master-0 kubenswrapper[3986]: W0318 08:46:09.073717 3986 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Mar 18 08:46:09.074985 master-0 kubenswrapper[3986]: W0318 08:46:09.073727 3986 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Mar 18 08:46:09.074985 master-0 kubenswrapper[3986]: W0318 08:46:09.073738 3986 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Mar 18 08:46:09.074985 master-0 kubenswrapper[3986]: W0318 08:46:09.073748 3986 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Mar 18 08:46:09.074985 master-0 kubenswrapper[3986]: W0318 08:46:09.073757 3986 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Mar 18 08:46:09.074985 master-0 kubenswrapper[3986]: W0318 08:46:09.073766 3986 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Mar 18 08:46:09.074985 master-0 kubenswrapper[3986]: W0318 08:46:09.073774 3986 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Mar 18 08:46:09.074985 master-0 kubenswrapper[3986]: W0318 08:46:09.073786 3986 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Mar 18 08:46:09.074985 master-0 kubenswrapper[3986]: W0318 08:46:09.073797 3986 feature_gate.go:330] unrecognized feature gate: SignatureStores Mar 18 08:46:09.075884 master-0 kubenswrapper[3986]: W0318 08:46:09.073819 3986 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Mar 18 08:46:09.075884 master-0 kubenswrapper[3986]: W0318 08:46:09.073831 3986 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Mar 18 08:46:09.075884 master-0 kubenswrapper[3986]: W0318 08:46:09.073842 3986 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Mar 18 08:46:09.075884 master-0 kubenswrapper[3986]: W0318 08:46:09.073879 3986 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Mar 18 08:46:09.075884 master-0 kubenswrapper[3986]: W0318 08:46:09.073892 3986 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Mar 18 08:46:09.075884 master-0 kubenswrapper[3986]: W0318 08:46:09.073905 3986 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Mar 18 08:46:09.075884 master-0 kubenswrapper[3986]: W0318 08:46:09.073918 3986 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Mar 18 08:46:09.075884 master-0 kubenswrapper[3986]: I0318 08:46:09.074104 3986 flags.go:64] FLAG: --address="0.0.0.0" Mar 18 08:46:09.075884 master-0 kubenswrapper[3986]: I0318 08:46:09.074125 3986 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Mar 18 08:46:09.075884 master-0 kubenswrapper[3986]: I0318 08:46:09.074142 3986 flags.go:64] FLAG: --anonymous-auth="true" Mar 18 08:46:09.075884 master-0 kubenswrapper[3986]: I0318 08:46:09.074155 3986 flags.go:64] FLAG: --application-metrics-count-limit="100" Mar 18 08:46:09.075884 master-0 kubenswrapper[3986]: I0318 08:46:09.074168 3986 flags.go:64] FLAG: --authentication-token-webhook="false" Mar 18 08:46:09.075884 master-0 kubenswrapper[3986]: 
I0318 08:46:09.074179 3986 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Mar 18 08:46:09.075884 master-0 kubenswrapper[3986]: I0318 08:46:09.074192 3986 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Mar 18 08:46:09.075884 master-0 kubenswrapper[3986]: I0318 08:46:09.074203 3986 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Mar 18 08:46:09.075884 master-0 kubenswrapper[3986]: I0318 08:46:09.074214 3986 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Mar 18 08:46:09.075884 master-0 kubenswrapper[3986]: I0318 08:46:09.074224 3986 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Mar 18 08:46:09.075884 master-0 kubenswrapper[3986]: I0318 08:46:09.074235 3986 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Mar 18 08:46:09.075884 master-0 kubenswrapper[3986]: I0318 08:46:09.074245 3986 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Mar 18 08:46:09.075884 master-0 kubenswrapper[3986]: I0318 08:46:09.074258 3986 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Mar 18 08:46:09.075884 master-0 kubenswrapper[3986]: I0318 08:46:09.074267 3986 flags.go:64] FLAG: --cgroup-root="" Mar 18 08:46:09.076908 master-0 kubenswrapper[3986]: I0318 08:46:09.074277 3986 flags.go:64] FLAG: --cgroups-per-qos="true" Mar 18 08:46:09.076908 master-0 kubenswrapper[3986]: I0318 08:46:09.074287 3986 flags.go:64] FLAG: --client-ca-file="" Mar 18 08:46:09.076908 master-0 kubenswrapper[3986]: I0318 08:46:09.074299 3986 flags.go:64] FLAG: --cloud-config="" Mar 18 08:46:09.076908 master-0 kubenswrapper[3986]: I0318 08:46:09.074309 3986 flags.go:64] FLAG: --cloud-provider="" Mar 18 08:46:09.076908 master-0 kubenswrapper[3986]: I0318 08:46:09.074320 3986 flags.go:64] FLAG: --cluster-dns="[]" Mar 18 08:46:09.076908 master-0 kubenswrapper[3986]: I0318 08:46:09.074338 3986 flags.go:64] FLAG: --cluster-domain="" Mar 18 08:46:09.076908 master-0 kubenswrapper[3986]: I0318 08:46:09.074348 
3986 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Mar 18 08:46:09.076908 master-0 kubenswrapper[3986]: I0318 08:46:09.074358 3986 flags.go:64] FLAG: --config-dir="" Mar 18 08:46:09.076908 master-0 kubenswrapper[3986]: I0318 08:46:09.074368 3986 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Mar 18 08:46:09.076908 master-0 kubenswrapper[3986]: I0318 08:46:09.074378 3986 flags.go:64] FLAG: --container-log-max-files="5" Mar 18 08:46:09.076908 master-0 kubenswrapper[3986]: I0318 08:46:09.074390 3986 flags.go:64] FLAG: --container-log-max-size="10Mi" Mar 18 08:46:09.076908 master-0 kubenswrapper[3986]: I0318 08:46:09.074400 3986 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Mar 18 08:46:09.076908 master-0 kubenswrapper[3986]: I0318 08:46:09.074410 3986 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Mar 18 08:46:09.076908 master-0 kubenswrapper[3986]: I0318 08:46:09.074420 3986 flags.go:64] FLAG: --containerd-namespace="k8s.io" Mar 18 08:46:09.076908 master-0 kubenswrapper[3986]: I0318 08:46:09.074430 3986 flags.go:64] FLAG: --contention-profiling="false" Mar 18 08:46:09.076908 master-0 kubenswrapper[3986]: I0318 08:46:09.074440 3986 flags.go:64] FLAG: --cpu-cfs-quota="true" Mar 18 08:46:09.076908 master-0 kubenswrapper[3986]: I0318 08:46:09.074450 3986 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Mar 18 08:46:09.076908 master-0 kubenswrapper[3986]: I0318 08:46:09.074462 3986 flags.go:64] FLAG: --cpu-manager-policy="none" Mar 18 08:46:09.076908 master-0 kubenswrapper[3986]: I0318 08:46:09.074471 3986 flags.go:64] FLAG: --cpu-manager-policy-options="" Mar 18 08:46:09.076908 master-0 kubenswrapper[3986]: I0318 08:46:09.074483 3986 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Mar 18 08:46:09.076908 master-0 kubenswrapper[3986]: I0318 08:46:09.074493 3986 flags.go:64] FLAG: --enable-controller-attach-detach="true" Mar 18 08:46:09.076908 master-0 kubenswrapper[3986]: I0318 
08:46:09.074503 3986 flags.go:64] FLAG: --enable-debugging-handlers="true" Mar 18 08:46:09.076908 master-0 kubenswrapper[3986]: I0318 08:46:09.074512 3986 flags.go:64] FLAG: --enable-load-reader="false" Mar 18 08:46:09.076908 master-0 kubenswrapper[3986]: I0318 08:46:09.074522 3986 flags.go:64] FLAG: --enable-server="true" Mar 18 08:46:09.076908 master-0 kubenswrapper[3986]: I0318 08:46:09.074533 3986 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Mar 18 08:46:09.078081 master-0 kubenswrapper[3986]: I0318 08:46:09.074544 3986 flags.go:64] FLAG: --event-burst="100" Mar 18 08:46:09.078081 master-0 kubenswrapper[3986]: I0318 08:46:09.074555 3986 flags.go:64] FLAG: --event-qps="50" Mar 18 08:46:09.078081 master-0 kubenswrapper[3986]: I0318 08:46:09.074564 3986 flags.go:64] FLAG: --event-storage-age-limit="default=0" Mar 18 08:46:09.078081 master-0 kubenswrapper[3986]: I0318 08:46:09.074575 3986 flags.go:64] FLAG: --event-storage-event-limit="default=0" Mar 18 08:46:09.078081 master-0 kubenswrapper[3986]: I0318 08:46:09.074586 3986 flags.go:64] FLAG: --eviction-hard="" Mar 18 08:46:09.078081 master-0 kubenswrapper[3986]: I0318 08:46:09.074598 3986 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Mar 18 08:46:09.078081 master-0 kubenswrapper[3986]: I0318 08:46:09.074607 3986 flags.go:64] FLAG: --eviction-minimum-reclaim="" Mar 18 08:46:09.078081 master-0 kubenswrapper[3986]: I0318 08:46:09.074617 3986 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Mar 18 08:46:09.078081 master-0 kubenswrapper[3986]: I0318 08:46:09.074627 3986 flags.go:64] FLAG: --eviction-soft="" Mar 18 08:46:09.078081 master-0 kubenswrapper[3986]: I0318 08:46:09.074637 3986 flags.go:64] FLAG: --eviction-soft-grace-period="" Mar 18 08:46:09.078081 master-0 kubenswrapper[3986]: I0318 08:46:09.074646 3986 flags.go:64] FLAG: --exit-on-lock-contention="false" Mar 18 08:46:09.078081 master-0 kubenswrapper[3986]: I0318 08:46:09.074656 3986 flags.go:64] FLAG: 
--experimental-allocatable-ignore-eviction="false" Mar 18 08:46:09.078081 master-0 kubenswrapper[3986]: I0318 08:46:09.074669 3986 flags.go:64] FLAG: --experimental-mounter-path="" Mar 18 08:46:09.078081 master-0 kubenswrapper[3986]: I0318 08:46:09.074678 3986 flags.go:64] FLAG: --fail-cgroupv1="false" Mar 18 08:46:09.078081 master-0 kubenswrapper[3986]: I0318 08:46:09.074688 3986 flags.go:64] FLAG: --fail-swap-on="true" Mar 18 08:46:09.078081 master-0 kubenswrapper[3986]: I0318 08:46:09.074698 3986 flags.go:64] FLAG: --feature-gates="" Mar 18 08:46:09.078081 master-0 kubenswrapper[3986]: I0318 08:46:09.074710 3986 flags.go:64] FLAG: --file-check-frequency="20s" Mar 18 08:46:09.078081 master-0 kubenswrapper[3986]: I0318 08:46:09.074721 3986 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Mar 18 08:46:09.078081 master-0 kubenswrapper[3986]: I0318 08:46:09.074733 3986 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Mar 18 08:46:09.078081 master-0 kubenswrapper[3986]: I0318 08:46:09.074745 3986 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Mar 18 08:46:09.078081 master-0 kubenswrapper[3986]: I0318 08:46:09.074758 3986 flags.go:64] FLAG: --healthz-port="10248" Mar 18 08:46:09.078081 master-0 kubenswrapper[3986]: I0318 08:46:09.074771 3986 flags.go:64] FLAG: --help="false" Mar 18 08:46:09.078081 master-0 kubenswrapper[3986]: I0318 08:46:09.074783 3986 flags.go:64] FLAG: --hostname-override="" Mar 18 08:46:09.078081 master-0 kubenswrapper[3986]: I0318 08:46:09.074795 3986 flags.go:64] FLAG: --housekeeping-interval="10s" Mar 18 08:46:09.078081 master-0 kubenswrapper[3986]: I0318 08:46:09.074808 3986 flags.go:64] FLAG: --http-check-frequency="20s" Mar 18 08:46:09.078081 master-0 kubenswrapper[3986]: I0318 08:46:09.074819 3986 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Mar 18 08:46:09.079468 master-0 kubenswrapper[3986]: I0318 08:46:09.074829 3986 flags.go:64] FLAG: --image-credential-provider-config="" Mar 18 08:46:09.079468 master-0 
kubenswrapper[3986]: I0318 08:46:09.074838 3986 flags.go:64] FLAG: --image-gc-high-threshold="85" Mar 18 08:46:09.079468 master-0 kubenswrapper[3986]: I0318 08:46:09.074848 3986 flags.go:64] FLAG: --image-gc-low-threshold="80" Mar 18 08:46:09.079468 master-0 kubenswrapper[3986]: I0318 08:46:09.074891 3986 flags.go:64] FLAG: --image-service-endpoint="" Mar 18 08:46:09.079468 master-0 kubenswrapper[3986]: I0318 08:46:09.074903 3986 flags.go:64] FLAG: --kernel-memcg-notification="false" Mar 18 08:46:09.079468 master-0 kubenswrapper[3986]: I0318 08:46:09.074913 3986 flags.go:64] FLAG: --kube-api-burst="100" Mar 18 08:46:09.079468 master-0 kubenswrapper[3986]: I0318 08:46:09.074923 3986 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Mar 18 08:46:09.079468 master-0 kubenswrapper[3986]: I0318 08:46:09.074933 3986 flags.go:64] FLAG: --kube-api-qps="50" Mar 18 08:46:09.079468 master-0 kubenswrapper[3986]: I0318 08:46:09.074943 3986 flags.go:64] FLAG: --kube-reserved="" Mar 18 08:46:09.079468 master-0 kubenswrapper[3986]: I0318 08:46:09.074953 3986 flags.go:64] FLAG: --kube-reserved-cgroup="" Mar 18 08:46:09.079468 master-0 kubenswrapper[3986]: I0318 08:46:09.074963 3986 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Mar 18 08:46:09.079468 master-0 kubenswrapper[3986]: I0318 08:46:09.074975 3986 flags.go:64] FLAG: --kubelet-cgroups="" Mar 18 08:46:09.079468 master-0 kubenswrapper[3986]: I0318 08:46:09.074985 3986 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Mar 18 08:46:09.079468 master-0 kubenswrapper[3986]: I0318 08:46:09.074998 3986 flags.go:64] FLAG: --lock-file="" Mar 18 08:46:09.079468 master-0 kubenswrapper[3986]: I0318 08:46:09.075008 3986 flags.go:64] FLAG: --log-cadvisor-usage="false" Mar 18 08:46:09.079468 master-0 kubenswrapper[3986]: I0318 08:46:09.075021 3986 flags.go:64] FLAG: --log-flush-frequency="5s" Mar 18 08:46:09.079468 master-0 kubenswrapper[3986]: I0318 08:46:09.075034 3986 flags.go:64] 
FLAG: --log-json-info-buffer-size="0" Mar 18 08:46:09.079468 master-0 kubenswrapper[3986]: I0318 08:46:09.075068 3986 flags.go:64] FLAG: --log-json-split-stream="false" Mar 18 08:46:09.079468 master-0 kubenswrapper[3986]: I0318 08:46:09.075086 3986 flags.go:64] FLAG: --log-text-info-buffer-size="0" Mar 18 08:46:09.079468 master-0 kubenswrapper[3986]: I0318 08:46:09.075098 3986 flags.go:64] FLAG: --log-text-split-stream="false" Mar 18 08:46:09.079468 master-0 kubenswrapper[3986]: I0318 08:46:09.075111 3986 flags.go:64] FLAG: --logging-format="text" Mar 18 08:46:09.079468 master-0 kubenswrapper[3986]: I0318 08:46:09.075122 3986 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Mar 18 08:46:09.079468 master-0 kubenswrapper[3986]: I0318 08:46:09.075133 3986 flags.go:64] FLAG: --make-iptables-util-chains="true" Mar 18 08:46:09.079468 master-0 kubenswrapper[3986]: I0318 08:46:09.075143 3986 flags.go:64] FLAG: --manifest-url="" Mar 18 08:46:09.079468 master-0 kubenswrapper[3986]: I0318 08:46:09.075152 3986 flags.go:64] FLAG: --manifest-url-header="" Mar 18 08:46:09.080906 master-0 kubenswrapper[3986]: I0318 08:46:09.075171 3986 flags.go:64] FLAG: --max-housekeeping-interval="15s" Mar 18 08:46:09.080906 master-0 kubenswrapper[3986]: I0318 08:46:09.075181 3986 flags.go:64] FLAG: --max-open-files="1000000" Mar 18 08:46:09.080906 master-0 kubenswrapper[3986]: I0318 08:46:09.075193 3986 flags.go:64] FLAG: --max-pods="110" Mar 18 08:46:09.080906 master-0 kubenswrapper[3986]: I0318 08:46:09.075203 3986 flags.go:64] FLAG: --maximum-dead-containers="-1" Mar 18 08:46:09.080906 master-0 kubenswrapper[3986]: I0318 08:46:09.075213 3986 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Mar 18 08:46:09.080906 master-0 kubenswrapper[3986]: I0318 08:46:09.075223 3986 flags.go:64] FLAG: --memory-manager-policy="None" Mar 18 08:46:09.080906 master-0 kubenswrapper[3986]: I0318 08:46:09.075232 3986 flags.go:64] FLAG: 
--minimum-container-ttl-duration="6m0s" Mar 18 08:46:09.080906 master-0 kubenswrapper[3986]: I0318 08:46:09.075241 3986 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Mar 18 08:46:09.080906 master-0 kubenswrapper[3986]: I0318 08:46:09.075251 3986 flags.go:64] FLAG: --node-ip="192.168.32.10" Mar 18 08:46:09.080906 master-0 kubenswrapper[3986]: I0318 08:46:09.075261 3986 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Mar 18 08:46:09.080906 master-0 kubenswrapper[3986]: I0318 08:46:09.075285 3986 flags.go:64] FLAG: --node-status-max-images="50" Mar 18 08:46:09.080906 master-0 kubenswrapper[3986]: I0318 08:46:09.075295 3986 flags.go:64] FLAG: --node-status-update-frequency="10s" Mar 18 08:46:09.080906 master-0 kubenswrapper[3986]: I0318 08:46:09.075304 3986 flags.go:64] FLAG: --oom-score-adj="-999" Mar 18 08:46:09.080906 master-0 kubenswrapper[3986]: I0318 08:46:09.075314 3986 flags.go:64] FLAG: --pod-cidr="" Mar 18 08:46:09.080906 master-0 kubenswrapper[3986]: I0318 08:46:09.075324 3986 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:53d66d524ca3e787d8dbe30dbc4d9b8612c9cebd505ccb4375a8441814e85422" Mar 18 08:46:09.080906 master-0 kubenswrapper[3986]: I0318 08:46:09.075338 3986 flags.go:64] FLAG: --pod-manifest-path="" Mar 18 08:46:09.080906 master-0 kubenswrapper[3986]: I0318 08:46:09.075347 3986 flags.go:64] FLAG: --pod-max-pids="-1" Mar 18 08:46:09.080906 master-0 kubenswrapper[3986]: I0318 08:46:09.075357 3986 flags.go:64] FLAG: --pods-per-core="0" Mar 18 08:46:09.080906 master-0 kubenswrapper[3986]: I0318 08:46:09.075368 3986 flags.go:64] FLAG: --port="10250" Mar 18 08:46:09.080906 master-0 kubenswrapper[3986]: I0318 08:46:09.075378 3986 flags.go:64] FLAG: --protect-kernel-defaults="false" Mar 18 08:46:09.080906 master-0 kubenswrapper[3986]: I0318 08:46:09.075388 3986 flags.go:64] FLAG: --provider-id="" Mar 18 
08:46:09.080906 master-0 kubenswrapper[3986]: I0318 08:46:09.075397 3986 flags.go:64] FLAG: --qos-reserved="" Mar 18 08:46:09.080906 master-0 kubenswrapper[3986]: I0318 08:46:09.075407 3986 flags.go:64] FLAG: --read-only-port="10255" Mar 18 08:46:09.080906 master-0 kubenswrapper[3986]: I0318 08:46:09.075417 3986 flags.go:64] FLAG: --register-node="true" Mar 18 08:46:09.082248 master-0 kubenswrapper[3986]: I0318 08:46:09.075428 3986 flags.go:64] FLAG: --register-schedulable="true" Mar 18 08:46:09.082248 master-0 kubenswrapper[3986]: I0318 08:46:09.075438 3986 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Mar 18 08:46:09.082248 master-0 kubenswrapper[3986]: I0318 08:46:09.075467 3986 flags.go:64] FLAG: --registry-burst="10" Mar 18 08:46:09.082248 master-0 kubenswrapper[3986]: I0318 08:46:09.075477 3986 flags.go:64] FLAG: --registry-qps="5" Mar 18 08:46:09.082248 master-0 kubenswrapper[3986]: I0318 08:46:09.075487 3986 flags.go:64] FLAG: --reserved-cpus="" Mar 18 08:46:09.082248 master-0 kubenswrapper[3986]: I0318 08:46:09.075496 3986 flags.go:64] FLAG: --reserved-memory="" Mar 18 08:46:09.082248 master-0 kubenswrapper[3986]: I0318 08:46:09.075508 3986 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Mar 18 08:46:09.082248 master-0 kubenswrapper[3986]: I0318 08:46:09.075519 3986 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Mar 18 08:46:09.082248 master-0 kubenswrapper[3986]: I0318 08:46:09.075528 3986 flags.go:64] FLAG: --rotate-certificates="false" Mar 18 08:46:09.082248 master-0 kubenswrapper[3986]: I0318 08:46:09.075538 3986 flags.go:64] FLAG: --rotate-server-certificates="false" Mar 18 08:46:09.082248 master-0 kubenswrapper[3986]: I0318 08:46:09.075547 3986 flags.go:64] FLAG: --runonce="false" Mar 18 08:46:09.082248 master-0 kubenswrapper[3986]: I0318 08:46:09.075557 3986 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Mar 18 08:46:09.082248 master-0 kubenswrapper[3986]: I0318 08:46:09.075568 3986 
flags.go:64] FLAG: --runtime-request-timeout="2m0s"
Mar 18 08:46:09.082248 master-0 kubenswrapper[3986]: I0318 08:46:09.075578 3986 flags.go:64] FLAG: --seccomp-default="false"
Mar 18 08:46:09.082248 master-0 kubenswrapper[3986]: I0318 08:46:09.075587 3986 flags.go:64] FLAG: --serialize-image-pulls="true"
Mar 18 08:46:09.082248 master-0 kubenswrapper[3986]: I0318 08:46:09.075597 3986 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s"
Mar 18 08:46:09.082248 master-0 kubenswrapper[3986]: I0318 08:46:09.075607 3986 flags.go:64] FLAG: --storage-driver-db="cadvisor"
Mar 18 08:46:09.082248 master-0 kubenswrapper[3986]: I0318 08:46:09.075618 3986 flags.go:64] FLAG: --storage-driver-host="localhost:8086"
Mar 18 08:46:09.082248 master-0 kubenswrapper[3986]: I0318 08:46:09.075628 3986 flags.go:64] FLAG: --storage-driver-password="root"
Mar 18 08:46:09.082248 master-0 kubenswrapper[3986]: I0318 08:46:09.075637 3986 flags.go:64] FLAG: --storage-driver-secure="false"
Mar 18 08:46:09.082248 master-0 kubenswrapper[3986]: I0318 08:46:09.075647 3986 flags.go:64] FLAG: --storage-driver-table="stats"
Mar 18 08:46:09.082248 master-0 kubenswrapper[3986]: I0318 08:46:09.075657 3986 flags.go:64] FLAG: --storage-driver-user="root"
Mar 18 08:46:09.082248 master-0 kubenswrapper[3986]: I0318 08:46:09.075667 3986 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s"
Mar 18 08:46:09.082248 master-0 kubenswrapper[3986]: I0318 08:46:09.075677 3986 flags.go:64] FLAG: --sync-frequency="1m0s"
Mar 18 08:46:09.082248 master-0 kubenswrapper[3986]: I0318 08:46:09.075688 3986 flags.go:64] FLAG: --system-cgroups=""
Mar 18 08:46:09.083446 master-0 kubenswrapper[3986]: I0318 08:46:09.075699 3986 flags.go:64] FLAG: --system-reserved="cpu=500m,ephemeral-storage=1Gi,memory=1Gi"
Mar 18 08:46:09.083446 master-0 kubenswrapper[3986]: I0318 08:46:09.075718 3986 flags.go:64] FLAG: --system-reserved-cgroup=""
Mar 18 08:46:09.083446 master-0 kubenswrapper[3986]: I0318 08:46:09.075730 3986 flags.go:64] FLAG: --tls-cert-file=""
Mar 18 08:46:09.083446 master-0 kubenswrapper[3986]: I0318 08:46:09.075743 3986 flags.go:64] FLAG: --tls-cipher-suites="[]"
Mar 18 08:46:09.083446 master-0 kubenswrapper[3986]: I0318 08:46:09.075760 3986 flags.go:64] FLAG: --tls-min-version=""
Mar 18 08:46:09.083446 master-0 kubenswrapper[3986]: I0318 08:46:09.075772 3986 flags.go:64] FLAG: --tls-private-key-file=""
Mar 18 08:46:09.083446 master-0 kubenswrapper[3986]: I0318 08:46:09.075784 3986 flags.go:64] FLAG: --topology-manager-policy="none"
Mar 18 08:46:09.083446 master-0 kubenswrapper[3986]: I0318 08:46:09.075796 3986 flags.go:64] FLAG: --topology-manager-policy-options=""
Mar 18 08:46:09.083446 master-0 kubenswrapper[3986]: I0318 08:46:09.075810 3986 flags.go:64] FLAG: --topology-manager-scope="container"
Mar 18 08:46:09.083446 master-0 kubenswrapper[3986]: I0318 08:46:09.075827 3986 flags.go:64] FLAG: --v="2"
Mar 18 08:46:09.083446 master-0 kubenswrapper[3986]: I0318 08:46:09.075842 3986 flags.go:64] FLAG: --version="false"
Mar 18 08:46:09.083446 master-0 kubenswrapper[3986]: I0318 08:46:09.075899 3986 flags.go:64] FLAG: --vmodule=""
Mar 18 08:46:09.083446 master-0 kubenswrapper[3986]: I0318 08:46:09.075915 3986 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec"
Mar 18 08:46:09.083446 master-0 kubenswrapper[3986]: I0318 08:46:09.075930 3986 flags.go:64] FLAG: --volume-stats-agg-period="1m0s"
Mar 18 08:46:09.083446 master-0 kubenswrapper[3986]: W0318 08:46:09.076229 3986 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Mar 18 08:46:09.083446 master-0 kubenswrapper[3986]: W0318 08:46:09.076246 3986 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Mar 18 08:46:09.083446 master-0 kubenswrapper[3986]: W0318 08:46:09.076257 3986 feature_gate.go:330] unrecognized feature gate: PinnedImages
Mar 18 08:46:09.083446 master-0 kubenswrapper[3986]: W0318 08:46:09.076271 3986 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Mar 18 08:46:09.083446 master-0 kubenswrapper[3986]: W0318 08:46:09.076283 3986 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Mar 18 08:46:09.083446 master-0 kubenswrapper[3986]: W0318 08:46:09.076295 3986 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Mar 18 08:46:09.083446 master-0 kubenswrapper[3986]: W0318 08:46:09.076306 3986 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Mar 18 08:46:09.083446 master-0 kubenswrapper[3986]: W0318 08:46:09.076318 3986 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Mar 18 08:46:09.083446 master-0 kubenswrapper[3986]: W0318 08:46:09.076330 3986 feature_gate.go:330] unrecognized feature gate: OVNObservability
Mar 18 08:46:09.084500 master-0 kubenswrapper[3986]: W0318 08:46:09.076343 3986 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Mar 18 08:46:09.084500 master-0 kubenswrapper[3986]: W0318 08:46:09.076355 3986 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Mar 18 08:46:09.084500 master-0 kubenswrapper[3986]: W0318 08:46:09.076366 3986 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Mar 18 08:46:09.084500 master-0 kubenswrapper[3986]: W0318 08:46:09.076377 3986 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Mar 18 08:46:09.084500 master-0 kubenswrapper[3986]: W0318 08:46:09.076393 3986 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Mar 18 08:46:09.084500 master-0 kubenswrapper[3986]: W0318 08:46:09.076408 3986 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Mar 18 08:46:09.084500 master-0 kubenswrapper[3986]: W0318 08:46:09.076421 3986 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Mar 18 08:46:09.084500 master-0 kubenswrapper[3986]: W0318 08:46:09.076433 3986 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Mar 18 08:46:09.084500 master-0 kubenswrapper[3986]: W0318 08:46:09.076445 3986 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Mar 18 08:46:09.084500 master-0 kubenswrapper[3986]: W0318 08:46:09.076458 3986 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Mar 18 08:46:09.084500 master-0 kubenswrapper[3986]: W0318 08:46:09.076470 3986 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Mar 18 08:46:09.084500 master-0 kubenswrapper[3986]: W0318 08:46:09.076482 3986 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Mar 18 08:46:09.084500 master-0 kubenswrapper[3986]: W0318 08:46:09.076494 3986 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Mar 18 08:46:09.084500 master-0 kubenswrapper[3986]: W0318 08:46:09.076509 3986 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Mar 18 08:46:09.084500 master-0 kubenswrapper[3986]: W0318 08:46:09.076523 3986 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Mar 18 08:46:09.084500 master-0 kubenswrapper[3986]: W0318 08:46:09.076535 3986 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Mar 18 08:46:09.084500 master-0 kubenswrapper[3986]: W0318 08:46:09.076547 3986 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Mar 18 08:46:09.084500 master-0 kubenswrapper[3986]: W0318 08:46:09.076559 3986 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Mar 18 08:46:09.084500 master-0 kubenswrapper[3986]: W0318 08:46:09.076576 3986 feature_gate.go:330] unrecognized feature gate: NewOLM
Mar 18 08:46:09.085441 master-0 kubenswrapper[3986]: W0318 08:46:09.076587 3986 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Mar 18 08:46:09.085441 master-0 kubenswrapper[3986]: W0318 08:46:09.076599 3986 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Mar 18 08:46:09.085441 master-0 kubenswrapper[3986]: W0318 08:46:09.076610 3986 feature_gate.go:330] unrecognized feature gate: Example
Mar 18 08:46:09.085441 master-0 kubenswrapper[3986]: W0318 08:46:09.076621 3986 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Mar 18 08:46:09.085441 master-0 kubenswrapper[3986]: W0318 08:46:09.076632 3986 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Mar 18 08:46:09.085441 master-0 kubenswrapper[3986]: W0318 08:46:09.076647 3986 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Mar 18 08:46:09.085441 master-0 kubenswrapper[3986]: W0318 08:46:09.076658 3986 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Mar 18 08:46:09.085441 master-0 kubenswrapper[3986]: W0318 08:46:09.076669 3986 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Mar 18 08:46:09.085441 master-0 kubenswrapper[3986]: W0318 08:46:09.076680 3986 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Mar 18 08:46:09.085441 master-0 kubenswrapper[3986]: W0318 08:46:09.076691 3986 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Mar 18 08:46:09.085441 master-0 kubenswrapper[3986]: W0318 08:46:09.076702 3986 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Mar 18 08:46:09.085441 master-0 kubenswrapper[3986]: W0318 08:46:09.076713 3986 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Mar 18 08:46:09.085441 master-0 kubenswrapper[3986]: W0318 08:46:09.076723 3986 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Mar 18 08:46:09.085441 master-0 kubenswrapper[3986]: W0318 08:46:09.076734 3986 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Mar 18 08:46:09.085441 master-0 kubenswrapper[3986]: W0318 08:46:09.076744 3986 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Mar 18 08:46:09.085441 master-0 kubenswrapper[3986]: W0318 08:46:09.076756 3986 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Mar 18 08:46:09.085441 master-0 kubenswrapper[3986]: W0318 08:46:09.076767 3986 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Mar 18 08:46:09.085441 master-0 kubenswrapper[3986]: W0318 08:46:09.076780 3986 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Mar 18 08:46:09.085441 master-0 kubenswrapper[3986]: W0318 08:46:09.076795 3986 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Mar 18 08:46:09.086683 master-0 kubenswrapper[3986]: W0318 08:46:09.076806 3986 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Mar 18 08:46:09.086683 master-0 kubenswrapper[3986]: W0318 08:46:09.076818 3986 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Mar 18 08:46:09.086683 master-0 kubenswrapper[3986]: W0318 08:46:09.076829 3986 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Mar 18 08:46:09.086683 master-0 kubenswrapper[3986]: W0318 08:46:09.076842 3986 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Mar 18 08:46:09.086683 master-0 kubenswrapper[3986]: W0318 08:46:09.076910 3986 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Mar 18 08:46:09.086683 master-0 kubenswrapper[3986]: W0318 08:46:09.076924 3986 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Mar 18 08:46:09.086683 master-0 kubenswrapper[3986]: W0318 08:46:09.076936 3986 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Mar 18 08:46:09.086683 master-0 kubenswrapper[3986]: W0318 08:46:09.076947 3986 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Mar 18 08:46:09.086683 master-0 kubenswrapper[3986]: W0318 08:46:09.076956 3986 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Mar 18 08:46:09.086683 master-0 kubenswrapper[3986]: W0318 08:46:09.076964 3986 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Mar 18 08:46:09.086683 master-0 kubenswrapper[3986]: W0318 08:46:09.076976 3986 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Mar 18 08:46:09.086683 master-0 kubenswrapper[3986]: W0318 08:46:09.076988 3986 feature_gate.go:330] unrecognized feature gate: SignatureStores
Mar 18 08:46:09.086683 master-0 kubenswrapper[3986]: W0318 08:46:09.077002 3986 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Mar 18 08:46:09.086683 master-0 kubenswrapper[3986]: W0318 08:46:09.077010 3986 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Mar 18 08:46:09.086683 master-0 kubenswrapper[3986]: W0318 08:46:09.077019 3986 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Mar 18 08:46:09.086683 master-0 kubenswrapper[3986]: W0318 08:46:09.077029 3986 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Mar 18 08:46:09.086683 master-0 kubenswrapper[3986]: W0318 08:46:09.077037 3986 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Mar 18 08:46:09.086683 master-0 kubenswrapper[3986]: W0318 08:46:09.077045 3986 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Mar 18 08:46:09.086683 master-0 kubenswrapper[3986]: W0318 08:46:09.077054 3986 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Mar 18 08:46:09.086683 master-0 kubenswrapper[3986]: W0318 08:46:09.077063 3986 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Mar 18 08:46:09.087609 master-0 kubenswrapper[3986]: W0318 08:46:09.077071 3986 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Mar 18 08:46:09.087609 master-0 kubenswrapper[3986]: W0318 08:46:09.077079 3986 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Mar 18 08:46:09.087609 master-0 kubenswrapper[3986]: W0318 08:46:09.077089 3986 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Mar 18 08:46:09.087609 master-0 kubenswrapper[3986]: W0318 08:46:09.077097 3986 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Mar 18 08:46:09.087609 master-0 kubenswrapper[3986]: W0318 08:46:09.077106 3986 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Mar 18 08:46:09.087609 master-0 kubenswrapper[3986]: I0318 08:46:09.077133 3986 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:true StreamingCollectionEncodingToProtobuf:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Mar 18 08:46:09.091073 master-0 kubenswrapper[3986]: I0318 08:46:09.090998 3986 server.go:491] "Kubelet version" kubeletVersion="v1.31.14"
Mar 18 08:46:09.091073 master-0 kubenswrapper[3986]: I0318 08:46:09.091054 3986 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Mar 18 08:46:09.091255 master-0 kubenswrapper[3986]: W0318 08:46:09.091180 3986 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Mar 18 08:46:09.091255 master-0 kubenswrapper[3986]: W0318 08:46:09.091194 3986 feature_gate.go:330] unrecognized feature gate: Example
Mar 18 08:46:09.091255 master-0 kubenswrapper[3986]: W0318 08:46:09.091206 3986 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Mar 18 08:46:09.091255 master-0 kubenswrapper[3986]: W0318 08:46:09.091214 3986 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Mar 18 08:46:09.091255 master-0 kubenswrapper[3986]: W0318 08:46:09.091223 3986 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Mar 18 08:46:09.091255 master-0 kubenswrapper[3986]: W0318 08:46:09.091231 3986 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Mar 18 08:46:09.091255 master-0 kubenswrapper[3986]: W0318 08:46:09.091242 3986 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Mar 18 08:46:09.091255 master-0 kubenswrapper[3986]: W0318 08:46:09.091251 3986 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Mar 18 08:46:09.091255 master-0 kubenswrapper[3986]: W0318 08:46:09.091259 3986 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Mar 18 08:46:09.091255 master-0 kubenswrapper[3986]: W0318 08:46:09.091268 3986 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Mar 18 08:46:09.091695 master-0 kubenswrapper[3986]: W0318 08:46:09.091279 3986 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Mar 18 08:46:09.091695 master-0 kubenswrapper[3986]: W0318 08:46:09.091292 3986 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Mar 18 08:46:09.091695 master-0 kubenswrapper[3986]: W0318 08:46:09.091301 3986 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Mar 18 08:46:09.091695 master-0 kubenswrapper[3986]: W0318 08:46:09.091309 3986 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Mar 18 08:46:09.091695 master-0 kubenswrapper[3986]: W0318 08:46:09.091317 3986 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Mar 18 08:46:09.091695 master-0 kubenswrapper[3986]: W0318 08:46:09.091326 3986 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Mar 18 08:46:09.091695 master-0 kubenswrapper[3986]: W0318 08:46:09.091334 3986 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Mar 18 08:46:09.091695 master-0 kubenswrapper[3986]: W0318 08:46:09.091344 3986 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Mar 18 08:46:09.091695 master-0 kubenswrapper[3986]: W0318 08:46:09.091352 3986 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Mar 18 08:46:09.091695 master-0 kubenswrapper[3986]: W0318 08:46:09.091360 3986 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Mar 18 08:46:09.091695 master-0 kubenswrapper[3986]: W0318 08:46:09.091369 3986 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Mar 18 08:46:09.091695 master-0 kubenswrapper[3986]: W0318 08:46:09.091377 3986 feature_gate.go:330] unrecognized feature gate: OVNObservability
Mar 18 08:46:09.091695 master-0 kubenswrapper[3986]: W0318 08:46:09.091385 3986 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Mar 18 08:46:09.091695 master-0 kubenswrapper[3986]: W0318 08:46:09.091394 3986 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Mar 18 08:46:09.091695 master-0 kubenswrapper[3986]: W0318 08:46:09.091403 3986 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Mar 18 08:46:09.091695 master-0 kubenswrapper[3986]: W0318 08:46:09.091413 3986 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Mar 18 08:46:09.091695 master-0 kubenswrapper[3986]: W0318 08:46:09.091421 3986 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Mar 18 08:46:09.091695 master-0 kubenswrapper[3986]: W0318 08:46:09.091429 3986 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Mar 18 08:46:09.091695 master-0 kubenswrapper[3986]: W0318 08:46:09.091437 3986 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Mar 18 08:46:09.092673 master-0 kubenswrapper[3986]: W0318 08:46:09.091445 3986 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Mar 18 08:46:09.092673 master-0 kubenswrapper[3986]: W0318 08:46:09.091453 3986 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Mar 18 08:46:09.092673 master-0 kubenswrapper[3986]: W0318 08:46:09.091461 3986 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Mar 18 08:46:09.092673 master-0 kubenswrapper[3986]: W0318 08:46:09.091470 3986 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Mar 18 08:46:09.092673 master-0 kubenswrapper[3986]: W0318 08:46:09.091478 3986 feature_gate.go:330] unrecognized feature gate: NewOLM
Mar 18 08:46:09.092673 master-0 kubenswrapper[3986]: W0318 08:46:09.091486 3986 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Mar 18 08:46:09.092673 master-0 kubenswrapper[3986]: W0318 08:46:09.091494 3986 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Mar 18 08:46:09.092673 master-0 kubenswrapper[3986]: W0318 08:46:09.091501 3986 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Mar 18 08:46:09.092673 master-0 kubenswrapper[3986]: W0318 08:46:09.091509 3986 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Mar 18 08:46:09.092673 master-0 kubenswrapper[3986]: W0318 08:46:09.091517 3986 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Mar 18 08:46:09.092673 master-0 kubenswrapper[3986]: W0318 08:46:09.091527 3986 feature_gate.go:330] unrecognized feature gate: PinnedImages
Mar 18 08:46:09.092673 master-0 kubenswrapper[3986]: W0318 08:46:09.091537 3986 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Mar 18 08:46:09.092673 master-0 kubenswrapper[3986]: W0318 08:46:09.091547 3986 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Mar 18 08:46:09.092673 master-0 kubenswrapper[3986]: W0318 08:46:09.091557 3986 feature_gate.go:330] unrecognized feature gate: SignatureStores
Mar 18 08:46:09.092673 master-0 kubenswrapper[3986]: W0318 08:46:09.091570 3986 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Mar 18 08:46:09.092673 master-0 kubenswrapper[3986]: W0318 08:46:09.091580 3986 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Mar 18 08:46:09.092673 master-0 kubenswrapper[3986]: W0318 08:46:09.091590 3986 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Mar 18 08:46:09.092673 master-0 kubenswrapper[3986]: W0318 08:46:09.091601 3986 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Mar 18 08:46:09.092673 master-0 kubenswrapper[3986]: W0318 08:46:09.091611 3986 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Mar 18 08:46:09.092673 master-0 kubenswrapper[3986]: W0318 08:46:09.091620 3986 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Mar 18 08:46:09.093716 master-0 kubenswrapper[3986]: W0318 08:46:09.091631 3986 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Mar 18 08:46:09.093716 master-0 kubenswrapper[3986]: W0318 08:46:09.091641 3986 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Mar 18 08:46:09.093716 master-0 kubenswrapper[3986]: W0318 08:46:09.091651 3986 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Mar 18 08:46:09.093716 master-0 kubenswrapper[3986]: W0318 08:46:09.091660 3986 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Mar 18 08:46:09.093716 master-0 kubenswrapper[3986]: W0318 08:46:09.091668 3986 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Mar 18 08:46:09.093716 master-0 kubenswrapper[3986]: W0318 08:46:09.091676 3986 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Mar 18 08:46:09.093716 master-0 kubenswrapper[3986]: W0318 08:46:09.091687 3986 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Mar 18 08:46:09.093716 master-0 kubenswrapper[3986]: W0318 08:46:09.091697 3986 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Mar 18 08:46:09.093716 master-0 kubenswrapper[3986]: W0318 08:46:09.091708 3986 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Mar 18 08:46:09.093716 master-0 kubenswrapper[3986]: W0318 08:46:09.091718 3986 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Mar 18 08:46:09.093716 master-0 kubenswrapper[3986]: W0318 08:46:09.091728 3986 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Mar 18 08:46:09.093716 master-0 kubenswrapper[3986]: W0318 08:46:09.091764 3986 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Mar 18 08:46:09.093716 master-0 kubenswrapper[3986]: W0318 08:46:09.091774 3986 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Mar 18 08:46:09.093716 master-0 kubenswrapper[3986]: W0318 08:46:09.091782 3986 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Mar 18 08:46:09.093716 master-0 kubenswrapper[3986]: W0318 08:46:09.091791 3986 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Mar 18 08:46:09.093716 master-0 kubenswrapper[3986]: W0318 08:46:09.091799 3986 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Mar 18 08:46:09.093716 master-0 kubenswrapper[3986]: W0318 08:46:09.091807 3986 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Mar 18 08:46:09.093716 master-0 kubenswrapper[3986]: W0318 08:46:09.091815 3986 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Mar 18 08:46:09.093716 master-0 kubenswrapper[3986]: W0318 08:46:09.091823 3986 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Mar 18 08:46:09.095040 master-0 kubenswrapper[3986]: W0318 08:46:09.091831 3986 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Mar 18 08:46:09.095040 master-0 kubenswrapper[3986]: W0318 08:46:09.091838 3986 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Mar 18 08:46:09.095040 master-0 kubenswrapper[3986]: W0318 08:46:09.091846 3986 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Mar 18 08:46:09.095040 master-0 kubenswrapper[3986]: W0318 08:46:09.091881 3986 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Mar 18 08:46:09.095040 master-0 kubenswrapper[3986]: I0318 08:46:09.091896 3986 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:true StreamingCollectionEncodingToProtobuf:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Mar 18 08:46:09.095040 master-0 kubenswrapper[3986]: W0318 08:46:09.092124 3986 feature_gate.go:330] unrecognized feature gate: SignatureStores
Mar 18 08:46:09.095040 master-0 kubenswrapper[3986]: W0318 08:46:09.092136 3986 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Mar 18 08:46:09.095040 master-0 kubenswrapper[3986]: W0318 08:46:09.092145 3986 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Mar 18 08:46:09.095040 master-0 kubenswrapper[3986]: W0318 08:46:09.092154 3986 feature_gate.go:330] unrecognized feature gate: NewOLM
Mar 18 08:46:09.095040 master-0 kubenswrapper[3986]: W0318 08:46:09.092162 3986 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Mar 18 08:46:09.095040 master-0 kubenswrapper[3986]: W0318 08:46:09.092170 3986 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Mar 18 08:46:09.095040 master-0 kubenswrapper[3986]: W0318 08:46:09.092178 3986 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Mar 18 08:46:09.095040 master-0 kubenswrapper[3986]: W0318 08:46:09.092186 3986 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Mar 18 08:46:09.095040 master-0 kubenswrapper[3986]: W0318 08:46:09.092194 3986 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Mar 18 08:46:09.095040 master-0 kubenswrapper[3986]: W0318 08:46:09.092202 3986 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Mar 18 08:46:09.095722 master-0 kubenswrapper[3986]: W0318 08:46:09.092209 3986 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Mar 18 08:46:09.095722 master-0 kubenswrapper[3986]: W0318 08:46:09.092217 3986 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Mar 18 08:46:09.095722 master-0 kubenswrapper[3986]: W0318 08:46:09.092225 3986 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Mar 18 08:46:09.095722 master-0 kubenswrapper[3986]: W0318 08:46:09.092233 3986 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Mar 18 08:46:09.095722 master-0 kubenswrapper[3986]: W0318 08:46:09.092241 3986 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Mar 18 08:46:09.095722 master-0 kubenswrapper[3986]: W0318 08:46:09.092248 3986 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Mar 18 08:46:09.095722 master-0 kubenswrapper[3986]: W0318 08:46:09.092256 3986 feature_gate.go:330] unrecognized feature gate: PinnedImages
Mar 18 08:46:09.095722 master-0 kubenswrapper[3986]: W0318 08:46:09.092264 3986 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Mar 18 08:46:09.095722 master-0 kubenswrapper[3986]: W0318 08:46:09.092271 3986 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Mar 18 08:46:09.095722 master-0 kubenswrapper[3986]: W0318 08:46:09.092279 3986 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Mar 18 08:46:09.095722 master-0 kubenswrapper[3986]: W0318 08:46:09.092290 3986 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Mar 18 08:46:09.095722 master-0 kubenswrapper[3986]: W0318 08:46:09.092300 3986 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Mar 18 08:46:09.095722 master-0 kubenswrapper[3986]: W0318 08:46:09.092310 3986 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Mar 18 08:46:09.095722 master-0 kubenswrapper[3986]: W0318 08:46:09.092318 3986 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Mar 18 08:46:09.095722 master-0 kubenswrapper[3986]: W0318 08:46:09.092327 3986 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Mar 18 08:46:09.095722 master-0 kubenswrapper[3986]: W0318 08:46:09.092335 3986 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Mar 18 08:46:09.095722 master-0 kubenswrapper[3986]: W0318 08:46:09.092343 3986 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Mar 18 08:46:09.095722 master-0 kubenswrapper[3986]: W0318 08:46:09.092351 3986 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Mar 18 08:46:09.095722 master-0 kubenswrapper[3986]: W0318 08:46:09.092359 3986 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Mar 18 08:46:09.095722 master-0 kubenswrapper[3986]: W0318 08:46:09.092366 3986 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Mar 18 08:46:09.096758 master-0 kubenswrapper[3986]: W0318 08:46:09.092374 3986 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Mar 18 08:46:09.096758 master-0 kubenswrapper[3986]: W0318 08:46:09.092383 3986 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Mar 18 08:46:09.096758 master-0 kubenswrapper[3986]: W0318 08:46:09.092391 3986 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Mar 18 08:46:09.096758 master-0 kubenswrapper[3986]: W0318 08:46:09.092399 3986 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Mar 18 08:46:09.096758 master-0 kubenswrapper[3986]: W0318 08:46:09.092407 3986 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Mar 18 08:46:09.096758 master-0 kubenswrapper[3986]: W0318 08:46:09.092415 3986 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Mar 18 08:46:09.096758 master-0 kubenswrapper[3986]: W0318 08:46:09.092423 3986 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Mar 18 08:46:09.096758 master-0 kubenswrapper[3986]: W0318 08:46:09.092433 3986 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Mar 18 08:46:09.096758 master-0 kubenswrapper[3986]: W0318 08:46:09.092443 3986 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Mar 18 08:46:09.096758 master-0 kubenswrapper[3986]: W0318 08:46:09.092452 3986 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Mar 18 08:46:09.096758 master-0 kubenswrapper[3986]: W0318 08:46:09.092460 3986 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Mar 18 08:46:09.096758 master-0 kubenswrapper[3986]: W0318 08:46:09.092469 3986 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Mar 18 08:46:09.096758 master-0 kubenswrapper[3986]: W0318 08:46:09.092476 3986 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Mar 18 08:46:09.096758 master-0 kubenswrapper[3986]: W0318 08:46:09.092484 3986 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Mar 18 08:46:09.096758 master-0 kubenswrapper[3986]: W0318 08:46:09.092492 3986 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Mar 18 08:46:09.096758 master-0 kubenswrapper[3986]: W0318 08:46:09.092500 3986 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Mar 18 08:46:09.096758 master-0 kubenswrapper[3986]: W0318 08:46:09.092508 3986 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Mar 18 08:46:09.096758 master-0 kubenswrapper[3986]: W0318 08:46:09.092516 3986 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Mar 18 08:46:09.096758 master-0 kubenswrapper[3986]: W0318 08:46:09.092524 3986 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Mar 18 08:46:09.096758 master-0 kubenswrapper[3986]: W0318 08:46:09.092532 3986 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Mar 18 08:46:09.097795 master-0 kubenswrapper[3986]: W0318 08:46:09.092539 3986 feature_gate.go:330] unrecognized feature gate: Example
Mar 18 08:46:09.097795 master-0 kubenswrapper[3986]: W0318 08:46:09.092547 3986 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Mar 18 08:46:09.097795 master-0 kubenswrapper[3986]: W0318 08:46:09.092555 3986 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Mar 18 08:46:09.097795 master-0 kubenswrapper[3986]: W0318 08:46:09.092562 3986 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Mar 18 08:46:09.097795 master-0 kubenswrapper[3986]: W0318 08:46:09.092570 3986 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Mar 18 08:46:09.097795 master-0 kubenswrapper[3986]: W0318 08:46:09.092578 3986 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Mar 18 08:46:09.097795 master-0 kubenswrapper[3986]: W0318 08:46:09.092586 3986 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Mar 18 08:46:09.097795 master-0 kubenswrapper[3986]: W0318 08:46:09.092593 3986 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Mar 18 08:46:09.097795 master-0 kubenswrapper[3986]: W0318 08:46:09.092604 3986 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Mar 18 08:46:09.097795 master-0 kubenswrapper[3986]: W0318 08:46:09.092613 3986 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Mar 18 08:46:09.097795 master-0 kubenswrapper[3986]: W0318 08:46:09.092623 3986 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Mar 18 08:46:09.097795 master-0 kubenswrapper[3986]: W0318 08:46:09.092631 3986 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Mar 18 08:46:09.097795 master-0 kubenswrapper[3986]: W0318 08:46:09.092640 3986 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Mar 18 08:46:09.097795 master-0 kubenswrapper[3986]: W0318 08:46:09.092650 3986 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Mar 18 08:46:09.097795 master-0 kubenswrapper[3986]: W0318 08:46:09.092658 3986 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Mar 18 08:46:09.097795 master-0 kubenswrapper[3986]: W0318 08:46:09.092666 3986 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Mar 18 08:46:09.097795 master-0 kubenswrapper[3986]: W0318 08:46:09.092673 3986 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Mar 18 08:46:09.097795 master-0 kubenswrapper[3986]: W0318 08:46:09.092685 3986 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Mar 18 08:46:09.097795 master-0 kubenswrapper[3986]: W0318 08:46:09.092694 3986 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Mar 18 08:46:09.098827 master-0 kubenswrapper[3986]: W0318 08:46:09.092704 3986 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Mar 18 08:46:09.098827 master-0 kubenswrapper[3986]: W0318 08:46:09.092713 3986 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Mar 18 08:46:09.098827 master-0 kubenswrapper[3986]: W0318 08:46:09.092722 3986 feature_gate.go:330] unrecognized feature gate: OVNObservability
Mar 18 08:46:09.098827 master-0 kubenswrapper[3986]: I0318 08:46:09.092734 3986 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:true StreamingCollectionEncodingToProtobuf:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Mar 18 08:46:09.098827 master-0 kubenswrapper[3986]: I0318 08:46:09.094019 3986 server.go:940] "Client rotation is on, will bootstrap in background"
Mar 18 08:46:09.098827 master-0 kubenswrapper[3986]: I0318 08:46:09.098102 3986 bootstrap.go:101] "Use the bootstrap credentials to request a cert, and set kubeconfig to point to the certificate dir"
Mar 18 08:46:09.099422 master-0 kubenswrapper[3986]: I0318 08:46:09.099376 3986 server.go:997] "Starting client certificate rotation"
Mar 18 08:46:09.099422 master-0 kubenswrapper[3986]: I0318 08:46:09.099404 3986 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled
Mar 18 08:46:09.099661 master-0 kubenswrapper[3986]: I0318 08:46:09.099593 3986 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Mar 18 08:46:09.130490 master-0 kubenswrapper[3986]: I0318 08:46:09.130398 3986 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Mar 18 08:46:09.136829 master-0 kubenswrapper[3986]: E0318 08:46:09.136734 3986 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.sno.openstack.lab:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Mar 18 08:46:09.138375 master-0 kubenswrapper[3986]: I0318 08:46:09.138312 3986 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Mar 18 08:46:09.164015 master-0 kubenswrapper[3986]: I0318 08:46:09.163919 3986 log.go:25] "Validated CRI v1 runtime API"
Mar 18 08:46:09.169751 master-0 kubenswrapper[3986]: I0318 08:46:09.169709 3986 log.go:25] "Validated CRI v1 image API"
Mar 18 08:46:09.172936 master-0 kubenswrapper[3986]: I0318 08:46:09.172824 3986 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Mar 18 08:46:09.179149 master-0 kubenswrapper[3986]: I0318 08:46:09.178424 3986 fs.go:135] Filesystem UUIDs: map[7B77-95E7:/dev/vda2 910678ff-f77e-4a7d-8d53-86f2ac47a823:/dev/vda4 9d22b218-6091-4693-b191-06a05a0aba6f:/dev/vda3]
Mar 18 08:46:09.179149 master-0 kubenswrapper[3986]: I0318 08:46:09.178474 3986 fs.go:136] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs
blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0}] Mar 18 08:46:09.210054 master-0 kubenswrapper[3986]: I0318 08:46:09.209534 3986 manager.go:217] Machine: {Timestamp:2026-03-18 08:46:09.206223667 +0000 UTC m=+0.613393829 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2800000 MemoryCapacity:33654128640 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:462ae4bbdf8a4211a5b04e094f4702bb SystemUUID:462ae4bb-df8a-4211-a5b0-4e094f4702bb BootID:8f184f3d-61e6-4234-a551-2580e849051e Filesystems:[{Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827064320 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none} 252:16:{Name:vdb Major:252 Minor:16 Size:21474836480 Scheduler:none} 252:32:{Name:vdc Major:252 Minor:32 Size:21474836480 Scheduler:none} 252:48:{Name:vdd Major:252 Minor:48 Size:21474836480 Scheduler:none} 252:64:{Name:vde Major:252 Minor:64 Size:21474836480 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:9e:81:f6:10 Speed:0 Mtu:9000} {Name:eth0 MacAddress:fa:16:9e:81:f6:10 Speed:-1 Mtu:9000} {Name:eth1 MacAddress:fa:16:3e:cd:49:09 Speed:-1 Mtu:9000} {Name:ovs-system MacAddress:76:d1:4e:31:92:01 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654128640 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data 
Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 
Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Mar 18 08:46:09.210054 master-0 kubenswrapper[3986]: I0318 08:46:09.209956 3986 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. 
Mar 18 08:46:09.210300 master-0 kubenswrapper[3986]: I0318 08:46:09.210185 3986 manager.go:233] Version: {KernelVersion:5.14.0-427.113.1.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202603021444-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Mar 18 08:46:09.211800 master-0 kubenswrapper[3986]: I0318 08:46:09.211754 3986 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Mar 18 08:46:09.212146 master-0 kubenswrapper[3986]: I0318 08:46:09.212083 3986 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 18 08:46:09.212426 master-0 kubenswrapper[3986]: I0318 08:46:09.212135 3986 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"master-0","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"500m","ephemeral-storage":"1Gi","memory":"1Gi"},"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentag
e":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 18 08:46:09.212492 master-0 kubenswrapper[3986]: I0318 08:46:09.212446 3986 topology_manager.go:138] "Creating topology manager with none policy" Mar 18 08:46:09.212492 master-0 kubenswrapper[3986]: I0318 08:46:09.212462 3986 container_manager_linux.go:303] "Creating device plugin manager" Mar 18 08:46:09.212594 master-0 kubenswrapper[3986]: I0318 08:46:09.212559 3986 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Mar 18 08:46:09.212640 master-0 kubenswrapper[3986]: I0318 08:46:09.212597 3986 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Mar 18 08:46:09.212816 master-0 kubenswrapper[3986]: I0318 08:46:09.212783 3986 state_mem.go:36] "Initialized new in-memory state store" Mar 18 08:46:09.212970 master-0 kubenswrapper[3986]: I0318 08:46:09.212938 3986 server.go:1245] "Using root directory" path="/var/lib/kubelet" Mar 18 08:46:09.217181 master-0 kubenswrapper[3986]: I0318 08:46:09.217138 3986 kubelet.go:418] "Attempting to sync node with API server" Mar 18 08:46:09.217181 master-0 kubenswrapper[3986]: I0318 08:46:09.217173 3986 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 18 08:46:09.217292 master-0 kubenswrapper[3986]: I0318 08:46:09.217207 3986 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Mar 18 08:46:09.217292 master-0 kubenswrapper[3986]: I0318 08:46:09.217228 3986 kubelet.go:324] "Adding apiserver pod source" Mar 18 08:46:09.217292 master-0 
kubenswrapper[3986]: I0318 08:46:09.217252 3986 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 18 08:46:09.223213 master-0 kubenswrapper[3986]: I0318 08:46:09.223158 3986 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.13-8.rhaos4.18.gitd78977c.el9" apiVersion="v1" Mar 18 08:46:09.225676 master-0 kubenswrapper[3986]: W0318 08:46:09.225565 3986 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.sno.openstack.lab:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 18 08:46:09.225796 master-0 kubenswrapper[3986]: E0318 08:46:09.225718 3986 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.sno.openstack.lab:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Mar 18 08:46:09.225904 master-0 kubenswrapper[3986]: W0318 08:46:09.225567 3986 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.sno.openstack.lab:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster-0&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 18 08:46:09.225904 master-0 kubenswrapper[3986]: E0318 08:46:09.225848 3986 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster-0&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Mar 18 08:46:09.227142 master-0 kubenswrapper[3986]: I0318 08:46:09.227095 3986 
kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Mar 18 08:46:09.227474 master-0 kubenswrapper[3986]: I0318 08:46:09.227434 3986 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Mar 18 08:46:09.227474 master-0 kubenswrapper[3986]: I0318 08:46:09.227475 3986 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Mar 18 08:46:09.227558 master-0 kubenswrapper[3986]: I0318 08:46:09.227491 3986 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Mar 18 08:46:09.227558 master-0 kubenswrapper[3986]: I0318 08:46:09.227506 3986 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Mar 18 08:46:09.227558 master-0 kubenswrapper[3986]: I0318 08:46:09.227518 3986 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Mar 18 08:46:09.227558 master-0 kubenswrapper[3986]: I0318 08:46:09.227531 3986 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Mar 18 08:46:09.227558 master-0 kubenswrapper[3986]: I0318 08:46:09.227545 3986 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Mar 18 08:46:09.227558 master-0 kubenswrapper[3986]: I0318 08:46:09.227560 3986 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Mar 18 08:46:09.227771 master-0 kubenswrapper[3986]: I0318 08:46:09.227576 3986 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Mar 18 08:46:09.227771 master-0 kubenswrapper[3986]: I0318 08:46:09.227590 3986 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Mar 18 08:46:09.227771 master-0 kubenswrapper[3986]: I0318 08:46:09.227609 3986 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Mar 18 08:46:09.227771 master-0 kubenswrapper[3986]: I0318 08:46:09.227648 3986 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Mar 18 08:46:09.228738 master-0 
kubenswrapper[3986]: I0318 08:46:09.228697 3986 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi" Mar 18 08:46:09.229515 master-0 kubenswrapper[3986]: I0318 08:46:09.229476 3986 server.go:1280] "Started kubelet" Mar 18 08:46:09.231070 master-0 kubenswrapper[3986]: I0318 08:46:09.230935 3986 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 18 08:46:09.231153 master-0 kubenswrapper[3986]: I0318 08:46:09.231118 3986 server_v1.go:47] "podresources" method="list" useActivePods=true Mar 18 08:46:09.231258 master-0 kubenswrapper[3986]: I0318 08:46:09.230955 3986 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Mar 18 08:46:09.231989 master-0 systemd[1]: Started Kubernetes Kubelet. Mar 18 08:46:09.232324 master-0 kubenswrapper[3986]: I0318 08:46:09.232068 3986 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 18 08:46:09.232405 master-0 kubenswrapper[3986]: I0318 08:46:09.232339 3986 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 18 08:46:09.239720 master-0 kubenswrapper[3986]: I0318 08:46:09.239651 3986 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Mar 18 08:46:09.239782 master-0 kubenswrapper[3986]: I0318 08:46:09.239770 3986 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 18 08:46:09.240262 master-0 kubenswrapper[3986]: E0318 08:46:09.240208 3986 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 18 08:46:09.240733 master-0 kubenswrapper[3986]: I0318 08:46:09.240688 3986 volume_manager.go:287] "The desired_state_of_world populator starts" Mar 18 08:46:09.240956 master-0 kubenswrapper[3986]: 
I0318 08:46:09.240926 3986 volume_manager.go:289] "Starting Kubelet Volume Manager" Mar 18 08:46:09.241178 master-0 kubenswrapper[3986]: I0318 08:46:09.241142 3986 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Mar 18 08:46:09.242360 master-0 kubenswrapper[3986]: I0318 08:46:09.242156 3986 reconstruct.go:97] "Volume reconstruction finished" Mar 18 08:46:09.242415 master-0 kubenswrapper[3986]: I0318 08:46:09.242361 3986 reconciler.go:26] "Reconciler: start to sync state" Mar 18 08:46:09.242784 master-0 kubenswrapper[3986]: I0318 08:46:09.242747 3986 factory.go:55] Registering systemd factory Mar 18 08:46:09.242828 master-0 kubenswrapper[3986]: I0318 08:46:09.242792 3986 factory.go:221] Registration of the systemd container factory successfully Mar 18 08:46:09.243100 master-0 kubenswrapper[3986]: E0318 08:46:09.241628 3986 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/default/events\": dial tcp 192.168.32.10:6443: connect: connection refused" event="&Event{ObjectMeta:{master-0.189de3235a22b7cd default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 08:46:09.229428685 +0000 UTC m=+0.636598807,LastTimestamp:2026-03-18 08:46:09.229428685 +0000 UTC m=+0.636598807,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 08:46:09.243543 master-0 kubenswrapper[3986]: I0318 08:46:09.243508 3986 server.go:449] "Adding debug handlers to kubelet server" Mar 18 08:46:09.243645 master-0 kubenswrapper[3986]: I0318 08:46:09.243616 3986 factory.go:153] Registering CRI-O factory Mar 18 08:46:09.243719 master-0 
kubenswrapper[3986]: I0318 08:46:09.243706 3986 factory.go:221] Registration of the crio container factory successfully Mar 18 08:46:09.243897 master-0 kubenswrapper[3986]: I0318 08:46:09.243872 3986 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Mar 18 08:46:09.243996 master-0 kubenswrapper[3986]: I0318 08:46:09.243983 3986 factory.go:103] Registering Raw factory Mar 18 08:46:09.244106 master-0 kubenswrapper[3986]: E0318 08:46:09.244037 3986 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="200ms" Mar 18 08:46:09.244106 master-0 kubenswrapper[3986]: I0318 08:46:09.244075 3986 manager.go:1196] Started watching for new ooms in manager Mar 18 08:46:09.244183 master-0 kubenswrapper[3986]: W0318 08:46:09.243924 3986 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 18 08:46:09.244309 master-0 kubenswrapper[3986]: E0318 08:46:09.244263 3986 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Mar 18 08:46:09.244512 master-0 kubenswrapper[3986]: E0318 08:46:09.244415 3986 kubelet.go:1495] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="failed to get imageFs info: unable to find data in memory cache" Mar 18 08:46:09.247545 master-0 kubenswrapper[3986]: I0318 08:46:09.247494 3986 manager.go:319] Starting recovery of all containers Mar 18 08:46:09.276478 master-0 kubenswrapper[3986]: I0318 08:46:09.275987 3986 manager.go:324] Recovery completed Mar 18 08:46:09.296410 master-0 kubenswrapper[3986]: I0318 08:46:09.296336 3986 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 18 08:46:09.298889 master-0 kubenswrapper[3986]: I0318 08:46:09.298768 3986 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 18 08:46:09.298982 master-0 kubenswrapper[3986]: I0318 08:46:09.298930 3986 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 18 08:46:09.298982 master-0 kubenswrapper[3986]: I0318 08:46:09.298960 3986 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 18 08:46:09.300225 master-0 kubenswrapper[3986]: I0318 08:46:09.300196 3986 cpu_manager.go:225] "Starting CPU manager" policy="none" Mar 18 08:46:09.300341 master-0 kubenswrapper[3986]: I0318 08:46:09.300320 3986 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Mar 18 08:46:09.300471 master-0 kubenswrapper[3986]: I0318 08:46:09.300453 3986 state_mem.go:36] "Initialized new in-memory state store" Mar 18 08:46:09.306645 master-0 kubenswrapper[3986]: I0318 08:46:09.306625 3986 policy_none.go:49] "None policy: Start" Mar 18 08:46:09.307454 master-0 kubenswrapper[3986]: I0318 08:46:09.307421 3986 memory_manager.go:170] "Starting memorymanager" policy="None" Mar 18 08:46:09.307543 master-0 kubenswrapper[3986]: I0318 08:46:09.307465 3986 state_mem.go:35] "Initializing new in-memory state store" Mar 18 08:46:09.340401 master-0 kubenswrapper[3986]: E0318 08:46:09.340360 3986 
kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 18 08:46:09.378569 master-0 kubenswrapper[3986]: I0318 08:46:09.378423 3986 manager.go:334] "Starting Device Plugin manager" Mar 18 08:46:09.378569 master-0 kubenswrapper[3986]: I0318 08:46:09.378495 3986 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Mar 18 08:46:09.378569 master-0 kubenswrapper[3986]: I0318 08:46:09.378511 3986 server.go:79] "Starting device plugin registration server" Mar 18 08:46:09.379119 master-0 kubenswrapper[3986]: I0318 08:46:09.379071 3986 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 18 08:46:09.379225 master-0 kubenswrapper[3986]: I0318 08:46:09.379096 3986 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 18 08:46:09.379416 master-0 kubenswrapper[3986]: I0318 08:46:09.379329 3986 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Mar 18 08:46:09.379639 master-0 kubenswrapper[3986]: I0318 08:46:09.379592 3986 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Mar 18 08:46:09.379639 master-0 kubenswrapper[3986]: I0318 08:46:09.379616 3986 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 18 08:46:09.384113 master-0 kubenswrapper[3986]: E0318 08:46:09.384044 3986 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"master-0\" not found" Mar 18 08:46:09.423550 master-0 kubenswrapper[3986]: I0318 08:46:09.423441 3986 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Mar 18 08:46:09.431417 master-0 kubenswrapper[3986]: I0318 08:46:09.426214 3986 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Mar 18 08:46:09.431417 master-0 kubenswrapper[3986]: I0318 08:46:09.426299 3986 status_manager.go:217] "Starting to sync pod status with apiserver" Mar 18 08:46:09.431417 master-0 kubenswrapper[3986]: I0318 08:46:09.426332 3986 kubelet.go:2335] "Starting kubelet main sync loop" Mar 18 08:46:09.431417 master-0 kubenswrapper[3986]: E0318 08:46:09.426405 3986 kubelet.go:2359] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Mar 18 08:46:09.431417 master-0 kubenswrapper[3986]: W0318 08:46:09.427753 3986 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.sno.openstack.lab:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 18 08:46:09.431417 master-0 kubenswrapper[3986]: E0318 08:46:09.427902 3986 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.sno.openstack.lab:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Mar 18 08:46:09.445778 master-0 kubenswrapper[3986]: E0318 08:46:09.445721 3986 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="400ms" Mar 18 08:46:09.479991 master-0 kubenswrapper[3986]: I0318 08:46:09.479827 3986 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 18 08:46:09.481084 master-0 kubenswrapper[3986]: I0318 08:46:09.481050 3986 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 18 08:46:09.481207 master-0 
kubenswrapper[3986]: I0318 08:46:09.481102 3986 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 18 08:46:09.481207 master-0 kubenswrapper[3986]: I0318 08:46:09.481115 3986 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 18 08:46:09.481207 master-0 kubenswrapper[3986]: I0318 08:46:09.481149 3986 kubelet_node_status.go:76] "Attempting to register node" node="master-0" Mar 18 08:46:09.482235 master-0 kubenswrapper[3986]: E0318 08:46:09.482172 3986 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/nodes\": dial tcp 192.168.32.10:6443: connect: connection refused" node="master-0" Mar 18 08:46:09.527543 master-0 kubenswrapper[3986]: I0318 08:46:09.527426 3986 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-master-0","openshift-etcd/etcd-master-0-master-0","openshift-kube-apiserver/bootstrap-kube-apiserver-master-0","kube-system/bootstrap-kube-controller-manager-master-0","kube-system/bootstrap-kube-scheduler-master-0"] Mar 18 08:46:09.527733 master-0 kubenswrapper[3986]: I0318 08:46:09.527559 3986 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 18 08:46:09.529076 master-0 kubenswrapper[3986]: I0318 08:46:09.529008 3986 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 18 08:46:09.529076 master-0 kubenswrapper[3986]: I0318 08:46:09.529072 3986 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 18 08:46:09.529250 master-0 kubenswrapper[3986]: I0318 08:46:09.529087 3986 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 18 08:46:09.529250 master-0 kubenswrapper[3986]: I0318 
08:46:09.529245 3986 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 18 08:46:09.530006 master-0 kubenswrapper[3986]: I0318 08:46:09.529669 3986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Mar 18 08:46:09.530006 master-0 kubenswrapper[3986]: I0318 08:46:09.529752 3986 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 18 08:46:09.530645 master-0 kubenswrapper[3986]: I0318 08:46:09.530151 3986 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 18 08:46:09.530645 master-0 kubenswrapper[3986]: I0318 08:46:09.530170 3986 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 18 08:46:09.530645 master-0 kubenswrapper[3986]: I0318 08:46:09.530182 3986 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 18 08:46:09.530645 master-0 kubenswrapper[3986]: I0318 08:46:09.530262 3986 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 18 08:46:09.530645 master-0 kubenswrapper[3986]: I0318 08:46:09.530474 3986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-master-0-master-0" Mar 18 08:46:09.530645 master-0 kubenswrapper[3986]: I0318 08:46:09.530507 3986 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 18 08:46:09.531025 master-0 kubenswrapper[3986]: I0318 08:46:09.530746 3986 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 18 08:46:09.531025 master-0 kubenswrapper[3986]: I0318 08:46:09.530815 3986 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 18 08:46:09.531025 master-0 kubenswrapper[3986]: I0318 08:46:09.530899 3986 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 18 08:46:09.531194 master-0 kubenswrapper[3986]: I0318 08:46:09.531138 3986 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 18 08:46:09.531194 master-0 kubenswrapper[3986]: I0318 08:46:09.531176 3986 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 18 08:46:09.531341 master-0 kubenswrapper[3986]: I0318 08:46:09.531197 3986 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 18 08:46:09.531436 master-0 kubenswrapper[3986]: I0318 08:46:09.531370 3986 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 18 08:46:09.531734 master-0 kubenswrapper[3986]: I0318 08:46:09.531651 3986 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 18 08:46:09.531734 master-0 kubenswrapper[3986]: I0318 08:46:09.531676 3986 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 18 08:46:09.531734 master-0 kubenswrapper[3986]: I0318 
08:46:09.531679 3986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 18 08:46:09.531734 master-0 kubenswrapper[3986]: I0318 08:46:09.531732 3986 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 18 08:46:09.532034 master-0 kubenswrapper[3986]: I0318 08:46:09.531687 3986 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 18 08:46:09.533068 master-0 kubenswrapper[3986]: I0318 08:46:09.532938 3986 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 18 08:46:09.533068 master-0 kubenswrapper[3986]: I0318 08:46:09.532993 3986 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 18 08:46:09.533068 master-0 kubenswrapper[3986]: I0318 08:46:09.532997 3986 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 18 08:46:09.533068 master-0 kubenswrapper[3986]: I0318 08:46:09.533053 3986 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 18 08:46:09.533068 master-0 kubenswrapper[3986]: I0318 08:46:09.533073 3986 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 18 08:46:09.533639 master-0 kubenswrapper[3986]: I0318 08:46:09.533010 3986 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 18 08:46:09.533639 master-0 kubenswrapper[3986]: I0318 08:46:09.533378 3986 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 18 08:46:09.533639 master-0 kubenswrapper[3986]: I0318 08:46:09.533604 3986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 18 08:46:09.533818 master-0 kubenswrapper[3986]: I0318 08:46:09.533647 3986 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 18 08:46:09.534816 master-0 kubenswrapper[3986]: I0318 08:46:09.534723 3986 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 18 08:46:09.534816 master-0 kubenswrapper[3986]: I0318 08:46:09.534755 3986 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 18 08:46:09.534816 master-0 kubenswrapper[3986]: I0318 08:46:09.534772 3986 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 18 08:46:09.535140 master-0 kubenswrapper[3986]: I0318 08:46:09.534833 3986 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 18 08:46:09.535140 master-0 kubenswrapper[3986]: I0318 08:46:09.534891 3986 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 18 08:46:09.535140 master-0 kubenswrapper[3986]: I0318 08:46:09.534908 3986 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 18 08:46:09.535140 master-0 kubenswrapper[3986]: I0318 08:46:09.534955 3986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="kube-system/bootstrap-kube-scheduler-master-0" Mar 18 08:46:09.535140 master-0 kubenswrapper[3986]: I0318 08:46:09.534983 3986 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 18 08:46:09.536177 master-0 kubenswrapper[3986]: I0318 08:46:09.536047 3986 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 18 08:46:09.536177 master-0 kubenswrapper[3986]: I0318 08:46:09.536077 3986 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 18 08:46:09.536177 master-0 kubenswrapper[3986]: I0318 08:46:09.536088 3986 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 18 08:46:09.544439 master-0 kubenswrapper[3986]: I0318 08:46:09.544019 3986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-secrets\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 18 08:46:09.544439 master-0 kubenswrapper[3986]: I0318 08:46:09.544084 3986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-etc-kubernetes-cloud\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 18 08:46:09.544439 master-0 kubenswrapper[3986]: I0318 08:46:09.544137 3986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/c83737980b9ee109184b1d78e942cf36-logs\") pod \"bootstrap-kube-scheduler-master-0\" 
(UID: \"c83737980b9ee109184b1d78e942cf36\") " pod="kube-system/bootstrap-kube-scheduler-master-0" Mar 18 08:46:09.544439 master-0 kubenswrapper[3986]: I0318 08:46:09.544187 3986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/1249822f86f23526277d165c0d5d3c19-etc-kube\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"1249822f86f23526277d165c0d5d3c19\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Mar 18 08:46:09.544439 master-0 kubenswrapper[3986]: I0318 08:46:09.544233 3986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/1249822f86f23526277d165c0d5d3c19-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"1249822f86f23526277d165c0d5d3c19\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Mar 18 08:46:09.544439 master-0 kubenswrapper[3986]: I0318 08:46:09.544277 3986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/host-path/d664a6d0d2a24360dee10612610f1b59-certs\") pod \"etcd-master-0-master-0\" (UID: \"d664a6d0d2a24360dee10612610f1b59\") " pod="openshift-etcd/etcd-master-0-master-0" Mar 18 08:46:09.544439 master-0 kubenswrapper[3986]: I0318 08:46:09.544320 3986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-logs\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 18 08:46:09.544439 master-0 kubenswrapper[3986]: I0318 08:46:09.544364 3986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secrets\" (UniqueName: 
\"kubernetes.io/host-path/c83737980b9ee109184b1d78e942cf36-secrets\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"c83737980b9ee109184b1d78e942cf36\") " pod="kube-system/bootstrap-kube-scheduler-master-0" Mar 18 08:46:09.544439 master-0 kubenswrapper[3986]: I0318 08:46:09.544405 3986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/d664a6d0d2a24360dee10612610f1b59-data-dir\") pod \"etcd-master-0-master-0\" (UID: \"d664a6d0d2a24360dee10612610f1b59\") " pod="openshift-etcd/etcd-master-0-master-0" Mar 18 08:46:09.544439 master-0 kubenswrapper[3986]: I0318 08:46:09.544447 3986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-audit-dir\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 18 08:46:09.545241 master-0 kubenswrapper[3986]: I0318 08:46:09.544497 3986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-config\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"46f265536aba6292ead501bc9b49f327\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 18 08:46:09.545241 master-0 kubenswrapper[3986]: I0318 08:46:09.544542 3986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-logs\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"46f265536aba6292ead501bc9b49f327\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 18 08:46:09.545241 master-0 kubenswrapper[3986]: I0318 08:46:09.544583 3986 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-config\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 18 08:46:09.545241 master-0 kubenswrapper[3986]: I0318 08:46:09.544632 3986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-ssl-certs-host\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 18 08:46:09.545241 master-0 kubenswrapper[3986]: I0318 08:46:09.544676 3986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-secrets\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"46f265536aba6292ead501bc9b49f327\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 18 08:46:09.545241 master-0 kubenswrapper[3986]: I0318 08:46:09.544722 3986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-etc-kubernetes-cloud\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"46f265536aba6292ead501bc9b49f327\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 18 08:46:09.545241 master-0 kubenswrapper[3986]: I0318 08:46:09.544774 3986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-ssl-certs-host\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: 
\"46f265536aba6292ead501bc9b49f327\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 18 08:46:09.645323 master-0 kubenswrapper[3986]: I0318 08:46:09.645269 3986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-config\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 18 08:46:09.645323 master-0 kubenswrapper[3986]: I0318 08:46:09.645340 3986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-ssl-certs-host\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 18 08:46:09.645626 master-0 kubenswrapper[3986]: I0318 08:46:09.645408 3986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-secrets\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"46f265536aba6292ead501bc9b49f327\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 18 08:46:09.645626 master-0 kubenswrapper[3986]: I0318 08:46:09.645545 3986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-config\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 18 08:46:09.645779 master-0 kubenswrapper[3986]: I0318 08:46:09.645666 3986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: 
\"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-etc-kubernetes-cloud\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"46f265536aba6292ead501bc9b49f327\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 18 08:46:09.645779 master-0 kubenswrapper[3986]: I0318 08:46:09.645730 3986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-ssl-certs-host\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"46f265536aba6292ead501bc9b49f327\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 18 08:46:09.645959 master-0 kubenswrapper[3986]: I0318 08:46:09.645783 3986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-secrets\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 18 08:46:09.645959 master-0 kubenswrapper[3986]: I0318 08:46:09.645882 3986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-etc-kubernetes-cloud\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 18 08:46:09.645959 master-0 kubenswrapper[3986]: I0318 08:46:09.645935 3986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/c83737980b9ee109184b1d78e942cf36-logs\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"c83737980b9ee109184b1d78e942cf36\") " pod="kube-system/bootstrap-kube-scheduler-master-0" Mar 18 08:46:09.646139 master-0 kubenswrapper[3986]: I0318 08:46:09.645982 3986 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/1249822f86f23526277d165c0d5d3c19-etc-kube\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"1249822f86f23526277d165c0d5d3c19\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Mar 18 08:46:09.646139 master-0 kubenswrapper[3986]: I0318 08:46:09.646034 3986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/1249822f86f23526277d165c0d5d3c19-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"1249822f86f23526277d165c0d5d3c19\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Mar 18 08:46:09.646139 master-0 kubenswrapper[3986]: I0318 08:46:09.646081 3986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/host-path/d664a6d0d2a24360dee10612610f1b59-certs\") pod \"etcd-master-0-master-0\" (UID: \"d664a6d0d2a24360dee10612610f1b59\") " pod="openshift-etcd/etcd-master-0-master-0" Mar 18 08:46:09.646314 master-0 kubenswrapper[3986]: I0318 08:46:09.646238 3986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-etc-kubernetes-cloud\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 18 08:46:09.646419 master-0 kubenswrapper[3986]: I0318 08:46:09.646344 3986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-secrets\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 18 08:46:09.646515 master-0 kubenswrapper[3986]: I0318 08:46:09.646469 3986 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-ssl-certs-host\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"46f265536aba6292ead501bc9b49f327\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 18 08:46:09.646614 master-0 kubenswrapper[3986]: I0318 08:46:09.646578 3986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-ssl-certs-host\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 18 08:46:09.646680 master-0 kubenswrapper[3986]: I0318 08:46:09.646648 3986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-secrets\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"46f265536aba6292ead501bc9b49f327\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 18 08:46:09.646758 master-0 kubenswrapper[3986]: I0318 08:46:09.646698 3986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-etc-kubernetes-cloud\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"46f265536aba6292ead501bc9b49f327\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 18 08:46:09.646829 master-0 kubenswrapper[3986]: I0318 08:46:09.646756 3986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-logs\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 18 08:46:09.646829 
master-0 kubenswrapper[3986]: I0318 08:46:09.646809 3986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/c83737980b9ee109184b1d78e942cf36-secrets\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"c83737980b9ee109184b1d78e942cf36\") " pod="kube-system/bootstrap-kube-scheduler-master-0" Mar 18 08:46:09.646983 master-0 kubenswrapper[3986]: I0318 08:46:09.646894 3986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/d664a6d0d2a24360dee10612610f1b59-data-dir\") pod \"etcd-master-0-master-0\" (UID: \"d664a6d0d2a24360dee10612610f1b59\") " pod="openshift-etcd/etcd-master-0-master-0" Mar 18 08:46:09.646983 master-0 kubenswrapper[3986]: I0318 08:46:09.646943 3986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/c83737980b9ee109184b1d78e942cf36-logs\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"c83737980b9ee109184b1d78e942cf36\") " pod="kube-system/bootstrap-kube-scheduler-master-0" Mar 18 08:46:09.647098 master-0 kubenswrapper[3986]: I0318 08:46:09.646968 3986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-logs\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 18 08:46:09.647098 master-0 kubenswrapper[3986]: I0318 08:46:09.646960 3986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-audit-dir\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 18 08:46:09.647098 master-0 kubenswrapper[3986]: I0318 08:46:09.647039 3986 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/c83737980b9ee109184b1d78e942cf36-secrets\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"c83737980b9ee109184b1d78e942cf36\") " pod="kube-system/bootstrap-kube-scheduler-master-0" Mar 18 08:46:09.647098 master-0 kubenswrapper[3986]: I0318 08:46:09.647077 3986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-audit-dir\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 18 08:46:09.647098 master-0 kubenswrapper[3986]: I0318 08:46:09.647033 3986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/d664a6d0d2a24360dee10612610f1b59-data-dir\") pod \"etcd-master-0-master-0\" (UID: \"d664a6d0d2a24360dee10612610f1b59\") " pod="openshift-etcd/etcd-master-0-master-0" Mar 18 08:46:09.647382 master-0 kubenswrapper[3986]: I0318 08:46:09.647085 3986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-config\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"46f265536aba6292ead501bc9b49f327\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 18 08:46:09.647382 master-0 kubenswrapper[3986]: I0318 08:46:09.647121 3986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-config\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"46f265536aba6292ead501bc9b49f327\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 18 08:46:09.647382 master-0 kubenswrapper[3986]: I0318 08:46:09.647155 3986 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-logs\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"46f265536aba6292ead501bc9b49f327\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 18 08:46:09.647382 master-0 kubenswrapper[3986]: I0318 08:46:09.647168 3986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/1249822f86f23526277d165c0d5d3c19-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"1249822f86f23526277d165c0d5d3c19\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Mar 18 08:46:09.647382 master-0 kubenswrapper[3986]: I0318 08:46:09.647213 3986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-logs\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"46f265536aba6292ead501bc9b49f327\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 18 08:46:09.647382 master-0 kubenswrapper[3986]: I0318 08:46:09.647275 3986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/host-path/d664a6d0d2a24360dee10612610f1b59-certs\") pod \"etcd-master-0-master-0\" (UID: \"d664a6d0d2a24360dee10612610f1b59\") " pod="openshift-etcd/etcd-master-0-master-0" Mar 18 08:46:09.647382 master-0 kubenswrapper[3986]: I0318 08:46:09.647338 3986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/1249822f86f23526277d165c0d5d3c19-etc-kube\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"1249822f86f23526277d165c0d5d3c19\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Mar 18 08:46:09.648830 master-0 kubenswrapper[3986]: I0318 08:46:09.648757 3986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="kube-system/bootstrap-kube-scheduler-master-0" Mar 18 08:46:09.682562 master-0 kubenswrapper[3986]: I0318 08:46:09.682460 3986 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 18 08:46:09.684495 master-0 kubenswrapper[3986]: I0318 08:46:09.684427 3986 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 18 08:46:09.684600 master-0 kubenswrapper[3986]: I0318 08:46:09.684511 3986 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 18 08:46:09.684600 master-0 kubenswrapper[3986]: I0318 08:46:09.684541 3986 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 18 08:46:09.684724 master-0 kubenswrapper[3986]: I0318 08:46:09.684625 3986 kubelet_node_status.go:76] "Attempting to register node" node="master-0" Mar 18 08:46:09.686141 master-0 kubenswrapper[3986]: E0318 08:46:09.685980 3986 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/nodes\": dial tcp 192.168.32.10:6443: connect: connection refused" node="master-0" Mar 18 08:46:09.847258 master-0 kubenswrapper[3986]: E0318 08:46:09.847020 3986 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="800ms" Mar 18 08:46:09.882490 master-0 kubenswrapper[3986]: I0318 08:46:09.882383 3986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Mar 18 08:46:09.908130 master-0 kubenswrapper[3986]: I0318 08:46:09.908006 3986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-master-0-master-0"
Mar 18 08:46:09.922609 master-0 kubenswrapper[3986]: I0318 08:46:09.922542 3986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 18 08:46:09.943002 master-0 kubenswrapper[3986]: I0318 08:46:09.942937 3986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 18 08:46:10.050549 master-0 kubenswrapper[3986]: W0318 08:46:10.050411 3986 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.sno.openstack.lab:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster-0&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Mar 18 08:46:10.050549 master-0 kubenswrapper[3986]: E0318 08:46:10.050546 3986 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster-0&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Mar 18 08:46:10.087265 master-0 kubenswrapper[3986]: I0318 08:46:10.087121 3986 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 18 08:46:10.088712 master-0 kubenswrapper[3986]: I0318 08:46:10.088658 3986 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 18 08:46:10.088712 master-0 kubenswrapper[3986]: I0318 08:46:10.088706 3986 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 18 08:46:10.088891 master-0 kubenswrapper[3986]: I0318 08:46:10.088723 3986 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 18 08:46:10.088891 master-0 kubenswrapper[3986]: I0318 08:46:10.088816 3986 kubelet_node_status.go:76] "Attempting to register node" node="master-0"
Mar 18 08:46:10.090194 master-0 kubenswrapper[3986]: E0318 08:46:10.090108 3986 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/nodes\": dial tcp 192.168.32.10:6443: connect: connection refused" node="master-0"
Mar 18 08:46:10.174126 master-0 kubenswrapper[3986]: W0318 08:46:10.173918 3986 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Mar 18 08:46:10.174126 master-0 kubenswrapper[3986]: E0318 08:46:10.174029 3986 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Mar 18 08:46:10.234493 master-0 kubenswrapper[3986]: I0318 08:46:10.234347 3986 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Mar 18 08:46:10.449024 master-0 kubenswrapper[3986]: W0318 08:46:10.448721 3986 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.sno.openstack.lab:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Mar 18 08:46:10.449024 master-0 kubenswrapper[3986]: E0318 08:46:10.448910 3986 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.sno.openstack.lab:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Mar 18 08:46:10.583481 master-0 kubenswrapper[3986]: W0318 08:46:10.583374 3986 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod49fac1b46a11e49501805e891baae4a9.slice/crio-bd1fd64f6f95cdc3189bd097dac24d4300572f6ab92c972496e95007ac8e621a WatchSource:0}: Error finding container bd1fd64f6f95cdc3189bd097dac24d4300572f6ab92c972496e95007ac8e621a: Status 404 returned error can't find the container with id bd1fd64f6f95cdc3189bd097dac24d4300572f6ab92c972496e95007ac8e621a
Mar 18 08:46:10.585979 master-0 kubenswrapper[3986]: W0318 08:46:10.585845 3986 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd664a6d0d2a24360dee10612610f1b59.slice/crio-c10d1b81b0a7054da8fb12459aa720b7916f5484be5a832bdacdc31fad36d2cc WatchSource:0}: Error finding container c10d1b81b0a7054da8fb12459aa720b7916f5484be5a832bdacdc31fad36d2cc: Status 404 returned error can't find the container with id c10d1b81b0a7054da8fb12459aa720b7916f5484be5a832bdacdc31fad36d2cc
Mar 18 08:46:10.594771 master-0 kubenswrapper[3986]: I0318 08:46:10.594627 3986 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Mar 18 08:46:10.628637 master-0 kubenswrapper[3986]: W0318 08:46:10.628573 3986 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod46f265536aba6292ead501bc9b49f327.slice/crio-4d17f4a7fe14a2a472c626baa31e2712ee04373a3644e0529ddf244e8afaa854 WatchSource:0}: Error finding container 4d17f4a7fe14a2a472c626baa31e2712ee04373a3644e0529ddf244e8afaa854: Status 404 returned error can't find the container with id 4d17f4a7fe14a2a472c626baa31e2712ee04373a3644e0529ddf244e8afaa854
Mar 18 08:46:10.648089 master-0 kubenswrapper[3986]: E0318 08:46:10.648025 3986 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="1.6s"
Mar 18 08:46:10.668763 master-0 kubenswrapper[3986]: W0318 08:46:10.668678 3986 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1249822f86f23526277d165c0d5d3c19.slice/crio-65a818ad31dbd4fa7bc3752867fcfb68d605bd15a5390e756d551630b2da7bfb WatchSource:0}: Error finding container 65a818ad31dbd4fa7bc3752867fcfb68d605bd15a5390e756d551630b2da7bfb: Status 404 returned error can't find the container with id 65a818ad31dbd4fa7bc3752867fcfb68d605bd15a5390e756d551630b2da7bfb
Mar 18 08:46:10.719472 master-0 kubenswrapper[3986]: W0318 08:46:10.719376 3986 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc83737980b9ee109184b1d78e942cf36.slice/crio-dda73eca8049d85d927941d52bde4240cdb56ba2b8f10407c2247ac72190f9f1 WatchSource:0}: Error finding container dda73eca8049d85d927941d52bde4240cdb56ba2b8f10407c2247ac72190f9f1: Status 404 returned error can't find the container with id dda73eca8049d85d927941d52bde4240cdb56ba2b8f10407c2247ac72190f9f1
Mar 18 08:46:10.890526 master-0 kubenswrapper[3986]: I0318 08:46:10.890339 3986 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 18 08:46:10.892902 master-0 kubenswrapper[3986]: I0318 08:46:10.892811 3986 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 18 08:46:10.893054 master-0 kubenswrapper[3986]: I0318 08:46:10.892922 3986 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 18 08:46:10.893054 master-0 kubenswrapper[3986]: I0318 08:46:10.892949 3986 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 18 08:46:10.893257 master-0 kubenswrapper[3986]: I0318 08:46:10.893057 3986 kubelet_node_status.go:76] "Attempting to register node" node="master-0"
Mar 18 08:46:10.894834 master-0 kubenswrapper[3986]: E0318 08:46:10.894723 3986 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/nodes\": dial tcp 192.168.32.10:6443: connect: connection refused" node="master-0"
Mar 18 08:46:10.907584 master-0 kubenswrapper[3986]: W0318 08:46:10.907464 3986 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.sno.openstack.lab:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Mar 18 08:46:10.907758 master-0 kubenswrapper[3986]: E0318 08:46:10.907599 3986 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.sno.openstack.lab:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Mar 18 08:46:11.234386 master-0 kubenswrapper[3986]: I0318 08:46:11.234292 3986 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Mar 18 08:46:11.237293 master-0 kubenswrapper[3986]: I0318 08:46:11.237225 3986 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Mar 18 08:46:11.239538 master-0 kubenswrapper[3986]: E0318 08:46:11.239460 3986 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.sno.openstack.lab:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Mar 18 08:46:11.434138 master-0 kubenswrapper[3986]: I0318 08:46:11.434026 3986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-scheduler-master-0" event={"ID":"c83737980b9ee109184b1d78e942cf36","Type":"ContainerStarted","Data":"dda73eca8049d85d927941d52bde4240cdb56ba2b8f10407c2247ac72190f9f1"}
Mar 18 08:46:11.435420 master-0 kubenswrapper[3986]: I0318 08:46:11.435371 3986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"1249822f86f23526277d165c0d5d3c19","Type":"ContainerStarted","Data":"65a818ad31dbd4fa7bc3752867fcfb68d605bd15a5390e756d551630b2da7bfb"}
Mar 18 08:46:11.436452 master-0 kubenswrapper[3986]: I0318 08:46:11.436424 3986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"46f265536aba6292ead501bc9b49f327","Type":"ContainerStarted","Data":"4d17f4a7fe14a2a472c626baa31e2712ee04373a3644e0529ddf244e8afaa854"}
Mar 18 08:46:11.437691 master-0 kubenswrapper[3986]: I0318 08:46:11.437655 3986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" event={"ID":"49fac1b46a11e49501805e891baae4a9","Type":"ContainerStarted","Data":"bd1fd64f6f95cdc3189bd097dac24d4300572f6ab92c972496e95007ac8e621a"}
Mar 18 08:46:11.439039 master-0 kubenswrapper[3986]: I0318 08:46:11.439003 3986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0-master-0" event={"ID":"d664a6d0d2a24360dee10612610f1b59","Type":"ContainerStarted","Data":"c10d1b81b0a7054da8fb12459aa720b7916f5484be5a832bdacdc31fad36d2cc"}
Mar 18 08:46:12.234475 master-0 kubenswrapper[3986]: I0318 08:46:12.234177 3986 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Mar 18 08:46:12.249126 master-0 kubenswrapper[3986]: E0318 08:46:12.249091 3986 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="3.2s"
Mar 18 08:46:12.443970 master-0 kubenswrapper[3986]: I0318 08:46:12.443921 3986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"1249822f86f23526277d165c0d5d3c19","Type":"ContainerStarted","Data":"60b7a6828ff9115f3e360da4ea3b39ddb71f9d86fc37454c4e2b71253e2b011f"}
Mar 18 08:46:12.444119 master-0 kubenswrapper[3986]: I0318 08:46:12.444051 3986 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 18 08:46:12.446154 master-0 kubenswrapper[3986]: I0318 08:46:12.446126 3986 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 18 08:46:12.446237 master-0 kubenswrapper[3986]: I0318 08:46:12.446161 3986 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 18 08:46:12.446237 master-0 kubenswrapper[3986]: I0318 08:46:12.446179 3986 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 18 08:46:12.495827 master-0 kubenswrapper[3986]: I0318 08:46:12.495773 3986 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 18 08:46:12.497270 master-0 kubenswrapper[3986]: I0318 08:46:12.497238 3986 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 18 08:46:12.497270 master-0 kubenswrapper[3986]: I0318 08:46:12.497274 3986 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 18 08:46:12.497390 master-0 kubenswrapper[3986]: I0318 08:46:12.497283 3986 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 18 08:46:12.497390 master-0 kubenswrapper[3986]: I0318 08:46:12.497348 3986 kubelet_node_status.go:76] "Attempting to register node" node="master-0"
Mar 18 08:46:12.498044 master-0 kubenswrapper[3986]: E0318 08:46:12.498007 3986 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/nodes\": dial tcp 192.168.32.10:6443: connect: connection refused" node="master-0"
Mar 18 08:46:12.648102 master-0 kubenswrapper[3986]: W0318 08:46:12.648047 3986 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.sno.openstack.lab:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Mar 18 08:46:12.648275 master-0 kubenswrapper[3986]: E0318 08:46:12.648139 3986 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.sno.openstack.lab:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Mar 18 08:46:12.811521 master-0 kubenswrapper[3986]: W0318 08:46:12.811396 3986 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.sno.openstack.lab:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster-0&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Mar 18 08:46:12.811521 master-0 kubenswrapper[3986]: E0318 08:46:12.811475 3986 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster-0&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Mar 18 08:46:12.928546 master-0 kubenswrapper[3986]: W0318 08:46:12.928474 3986 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Mar 18 08:46:12.928744 master-0 kubenswrapper[3986]: E0318 08:46:12.928553 3986 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Mar 18 08:46:13.235389 master-0 kubenswrapper[3986]: I0318 08:46:13.235352 3986 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Mar 18 08:46:13.448475 master-0 kubenswrapper[3986]: I0318 08:46:13.448436 3986 generic.go:334] "Generic (PLEG): container finished" podID="1249822f86f23526277d165c0d5d3c19" containerID="60b7a6828ff9115f3e360da4ea3b39ddb71f9d86fc37454c4e2b71253e2b011f" exitCode=0
Mar 18 08:46:13.448574 master-0 kubenswrapper[3986]: I0318 08:46:13.448479 3986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"1249822f86f23526277d165c0d5d3c19","Type":"ContainerDied","Data":"60b7a6828ff9115f3e360da4ea3b39ddb71f9d86fc37454c4e2b71253e2b011f"}
Mar 18 08:46:13.448614 master-0 kubenswrapper[3986]: I0318 08:46:13.448587 3986 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 18 08:46:13.449846 master-0 kubenswrapper[3986]: I0318 08:46:13.449820 3986 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 18 08:46:13.449928 master-0 kubenswrapper[3986]: I0318 08:46:13.449868 3986 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 18 08:46:13.449928 master-0 kubenswrapper[3986]: I0318 08:46:13.449878 3986 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 18 08:46:13.601918 master-0 kubenswrapper[3986]: W0318 08:46:13.601793 3986 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.sno.openstack.lab:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Mar 18 08:46:13.602055 master-0 kubenswrapper[3986]: E0318 08:46:13.601950 3986 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.sno.openstack.lab:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Mar 18 08:46:14.233895 master-0 kubenswrapper[3986]: I0318 08:46:14.233801 3986 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Mar 18 08:46:14.497212 master-0 kubenswrapper[3986]: I0318 08:46:14.497056 3986 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_1249822f86f23526277d165c0d5d3c19/kube-rbac-proxy-crio/0.log"
Mar 18 08:46:14.498192 master-0 kubenswrapper[3986]: I0318 08:46:14.497499 3986 generic.go:334] "Generic (PLEG): container finished" podID="1249822f86f23526277d165c0d5d3c19" containerID="124e03a6c73c1a5f696eaaf068fed389ef768c17072ff985e986143a785ef67a" exitCode=1
Mar 18 08:46:14.498192 master-0 kubenswrapper[3986]: I0318 08:46:14.497583 3986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"1249822f86f23526277d165c0d5d3c19","Type":"ContainerDied","Data":"124e03a6c73c1a5f696eaaf068fed389ef768c17072ff985e986143a785ef67a"}
Mar 18 08:46:14.498192 master-0 kubenswrapper[3986]: I0318 08:46:14.497632 3986 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 18 08:46:14.499654 master-0 kubenswrapper[3986]: I0318 08:46:14.499602 3986 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 18 08:46:14.499654 master-0 kubenswrapper[3986]: I0318 08:46:14.499651 3986 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 18 08:46:14.499875 master-0 kubenswrapper[3986]: I0318 08:46:14.499669 3986 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 18 08:46:14.500112 master-0 kubenswrapper[3986]: I0318 08:46:14.500085 3986 scope.go:117] "RemoveContainer" containerID="124e03a6c73c1a5f696eaaf068fed389ef768c17072ff985e986143a785ef67a"
Mar 18 08:46:14.501975 master-0 kubenswrapper[3986]: I0318 08:46:14.501900 3986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0-master-0" event={"ID":"d664a6d0d2a24360dee10612610f1b59","Type":"ContainerStarted","Data":"9800e6635085398983100da46b5c98be777ae33c91aaadd0c04fcadcfe49593f"}
Mar 18 08:46:14.501975 master-0 kubenswrapper[3986]: I0318 08:46:14.501961 3986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0-master-0" event={"ID":"d664a6d0d2a24360dee10612610f1b59","Type":"ContainerStarted","Data":"a59e8ee01c3a8fb148407d497fd43107751c8a2b3e30b228b085568e5f8dd0de"}
Mar 18 08:46:14.502157 master-0 kubenswrapper[3986]: I0318 08:46:14.501997 3986 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 18 08:46:14.503014 master-0 kubenswrapper[3986]: I0318 08:46:14.502978 3986 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 18 08:46:14.503140 master-0 kubenswrapper[3986]: I0318 08:46:14.503033 3986 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 18 08:46:14.503140 master-0 kubenswrapper[3986]: I0318 08:46:14.503050 3986 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 18 08:46:15.234155 master-0 kubenswrapper[3986]: I0318 08:46:15.234107 3986 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Mar 18 08:46:15.450477 master-0 kubenswrapper[3986]: E0318 08:46:15.450420 3986 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="6.4s"
Mar 18 08:46:15.504887 master-0 kubenswrapper[3986]: I0318 08:46:15.504785 3986 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_1249822f86f23526277d165c0d5d3c19/kube-rbac-proxy-crio/1.log"
Mar 18 08:46:15.505283 master-0 kubenswrapper[3986]: I0318 08:46:15.505153 3986 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_1249822f86f23526277d165c0d5d3c19/kube-rbac-proxy-crio/0.log"
Mar 18 08:46:15.505448 master-0 kubenswrapper[3986]: I0318 08:46:15.505418 3986 generic.go:334] "Generic (PLEG): container finished" podID="1249822f86f23526277d165c0d5d3c19" containerID="4c9ec8f8a444aa56ff69c03f198037bf62f78ba3be4083610cf5e4f1eb191713" exitCode=1
Mar 18 08:46:15.505514 master-0 kubenswrapper[3986]: I0318 08:46:15.505501 3986 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 18 08:46:15.505817 master-0 kubenswrapper[3986]: I0318 08:46:15.505802 3986 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 18 08:46:15.506016 master-0 kubenswrapper[3986]: I0318 08:46:15.505995 3986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"1249822f86f23526277d165c0d5d3c19","Type":"ContainerDied","Data":"4c9ec8f8a444aa56ff69c03f198037bf62f78ba3be4083610cf5e4f1eb191713"}
Mar 18 08:46:15.506075 master-0 kubenswrapper[3986]: I0318 08:46:15.506047 3986 scope.go:117] "RemoveContainer" containerID="124e03a6c73c1a5f696eaaf068fed389ef768c17072ff985e986143a785ef67a"
Mar 18 08:46:15.506491 master-0 kubenswrapper[3986]: I0318 08:46:15.506477 3986 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 18 08:46:15.506544 master-0 kubenswrapper[3986]: I0318 08:46:15.506497 3986 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 18 08:46:15.506544 master-0 kubenswrapper[3986]: I0318 08:46:15.506505 3986 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 18 08:46:15.506830 master-0 kubenswrapper[3986]: I0318 08:46:15.506808 3986 scope.go:117] "RemoveContainer" containerID="4c9ec8f8a444aa56ff69c03f198037bf62f78ba3be4083610cf5e4f1eb191713"
Mar 18 08:46:15.506958 master-0 kubenswrapper[3986]: E0318 08:46:15.506941 3986 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy-crio\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-rbac-proxy-crio pod=kube-rbac-proxy-crio-master-0_openshift-machine-config-operator(1249822f86f23526277d165c0d5d3c19)\"" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" podUID="1249822f86f23526277d165c0d5d3c19"
Mar 18 08:46:15.507010 master-0 kubenswrapper[3986]: I0318 08:46:15.506992 3986 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 18 08:46:15.507010 master-0 kubenswrapper[3986]: I0318 08:46:15.507001 3986 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 18 08:46:15.507010 master-0 kubenswrapper[3986]: I0318 08:46:15.507009 3986 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 18 08:46:15.536113 master-0 kubenswrapper[3986]: I0318 08:46:15.536084 3986 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Mar 18 08:46:15.536958 master-0 kubenswrapper[3986]: E0318 08:46:15.536935 3986 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.sno.openstack.lab:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Mar 18 08:46:15.698672 master-0 kubenswrapper[3986]: I0318 08:46:15.698611 3986 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 18 08:46:15.699867 master-0 kubenswrapper[3986]: I0318 08:46:15.699815 3986 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 18 08:46:15.699927 master-0 kubenswrapper[3986]: I0318 08:46:15.699885 3986 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 18 08:46:15.699927 master-0 kubenswrapper[3986]: I0318 08:46:15.699902 3986 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 18 08:46:15.699989 master-0 kubenswrapper[3986]: I0318 08:46:15.699953 3986 kubelet_node_status.go:76] "Attempting to register node" node="master-0"
Mar 18 08:46:15.700703 master-0 kubenswrapper[3986]: E0318 08:46:15.700666 3986 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/nodes\": dial tcp 192.168.32.10:6443: connect: connection refused" node="master-0"
Mar 18 08:46:16.202039 master-0 kubenswrapper[3986]: W0318 08:46:16.201950 3986 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.sno.openstack.lab:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster-0&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Mar 18 08:46:16.202039 master-0 kubenswrapper[3986]: E0318 08:46:16.202009 3986 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster-0&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Mar 18 08:46:16.233054 master-0 kubenswrapper[3986]: I0318 08:46:16.233021 3986 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Mar 18 08:46:16.508597 master-0 kubenswrapper[3986]: I0318 08:46:16.508172 3986 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 18 08:46:16.509201 master-0 kubenswrapper[3986]: I0318 08:46:16.509165 3986 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 18 08:46:16.509282 master-0 kubenswrapper[3986]: I0318 08:46:16.509230 3986 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 18 08:46:16.509282 master-0 kubenswrapper[3986]: I0318 08:46:16.509254 3986 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 18 08:46:16.516434 master-0 kubenswrapper[3986]: I0318 08:46:16.516384 3986 scope.go:117] "RemoveContainer" containerID="4c9ec8f8a444aa56ff69c03f198037bf62f78ba3be4083610cf5e4f1eb191713"
Mar 18 08:46:16.516631 master-0 kubenswrapper[3986]: E0318 08:46:16.516595 3986 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy-crio\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-rbac-proxy-crio pod=kube-rbac-proxy-crio-master-0_openshift-machine-config-operator(1249822f86f23526277d165c0d5d3c19)\"" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" podUID="1249822f86f23526277d165c0d5d3c19"
Mar 18 08:46:16.554256 master-0 kubenswrapper[3986]: W0318 08:46:16.554095 3986 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.sno.openstack.lab:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Mar 18 08:46:16.554256 master-0 kubenswrapper[3986]: E0318 08:46:16.554210 3986 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.sno.openstack.lab:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Mar 18 08:46:17.234021 master-0 kubenswrapper[3986]: I0318 08:46:17.233965 3986 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Mar 18 08:46:17.428577 master-0 kubenswrapper[3986]: E0318 08:46:17.428354 3986 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/default/events\": dial tcp 192.168.32.10:6443: connect: connection refused" event="&Event{ObjectMeta:{master-0.189de3235a22b7cd default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 08:46:09.229428685 +0000 UTC m=+0.636598807,LastTimestamp:2026-03-18 08:46:09.229428685 +0000 UTC m=+0.636598807,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 18 08:46:17.798205 master-0 kubenswrapper[3986]: W0318 08:46:17.798088 3986 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Mar 18 08:46:17.798205 master-0 kubenswrapper[3986]: E0318 08:46:17.798192 3986 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Mar 18 08:46:18.234656 master-0 kubenswrapper[3986]: I0318 08:46:18.234579 3986 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Mar 18 08:46:18.515284 master-0 kubenswrapper[3986]: I0318 08:46:18.515220 3986 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 18 08:46:18.515284 master-0 kubenswrapper[3986]: I0318 08:46:18.515262 3986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-scheduler-master-0" event={"ID":"c83737980b9ee109184b1d78e942cf36","Type":"ContainerStarted","Data":"56c1813fc6a99c6be68188fda55c9aa95683f9493caa43861ba04693d0ba89d2"}
Mar 18 08:46:18.516528 master-0 kubenswrapper[3986]: I0318 08:46:18.516413 3986 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 18 08:46:18.516528 master-0 kubenswrapper[3986]: I0318 08:46:18.516471 3986 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 18 08:46:18.516528 master-0 kubenswrapper[3986]: I0318 08:46:18.516489 3986 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 18 08:46:18.518068 master-0 kubenswrapper[3986]: I0318 08:46:18.518011 3986 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_1249822f86f23526277d165c0d5d3c19/kube-rbac-proxy-crio/1.log"
Mar 18 08:46:18.520722 master-0 kubenswrapper[3986]: I0318 08:46:18.520674 3986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"46f265536aba6292ead501bc9b49f327","Type":"ContainerStarted","Data":"cae6edc05ec437bf1216d8818e262c95bff15d2f9aa2f76f2a55bc0b5ab23801"}
Mar 18 08:46:18.530214 master-0 kubenswrapper[3986]: W0318 08:46:18.530147 3986 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.sno.openstack.lab:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Mar 18 08:46:18.530324 master-0 kubenswrapper[3986]: E0318 08:46:18.530230 3986 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.sno.openstack.lab:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Mar 18 08:46:19.235465 master-0 kubenswrapper[3986]: I0318 08:46:19.235400 3986 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Mar 18 08:46:19.384778 master-0 kubenswrapper[3986]: E0318 08:46:19.384732 3986 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"master-0\" not found"
Mar 18 08:46:19.525159 master-0 kubenswrapper[3986]: I0318 08:46:19.525081 3986 generic.go:334] "Generic (PLEG): container finished" podID="46f265536aba6292ead501bc9b49f327" containerID="cae6edc05ec437bf1216d8818e262c95bff15d2f9aa2f76f2a55bc0b5ab23801" exitCode=1
Mar 18 08:46:19.525394 master-0 kubenswrapper[3986]: I0318 08:46:19.525198 3986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"46f265536aba6292ead501bc9b49f327","Type":"ContainerDied","Data":"cae6edc05ec437bf1216d8818e262c95bff15d2f9aa2f76f2a55bc0b5ab23801"}
Mar 18 08:46:19.527494 master-0 kubenswrapper[3986]: I0318 08:46:19.527192 3986 generic.go:334] "Generic (PLEG): container finished" podID="49fac1b46a11e49501805e891baae4a9" containerID="f2d4d2d49e0c856fff93c30b0d719c8529754ea148952a7ef6bb3db593f16a16" exitCode=0
Mar 18 08:46:19.527575 master-0 kubenswrapper[3986]: I0318 08:46:19.527296 3986 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 18 08:46:19.527735 master-0 kubenswrapper[3986]: I0318 08:46:19.527283 3986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" event={"ID":"49fac1b46a11e49501805e891baae4a9","Type":"ContainerDied","Data":"f2d4d2d49e0c856fff93c30b0d719c8529754ea148952a7ef6bb3db593f16a16"}
Mar 18 08:46:19.527788 master-0 kubenswrapper[3986]: I0318 08:46:19.527563 3986 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 18 08:46:19.528952 master-0 kubenswrapper[3986]: I0318 08:46:19.528795 3986 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 18 08:46:19.528952 master-0 kubenswrapper[3986]: I0318 08:46:19.528896 3986 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 18 08:46:19.528952 master-0 kubenswrapper[3986]: I0318 08:46:19.528932 3986 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 18 08:46:19.529137 master-0 kubenswrapper[3986]: I0318 08:46:19.528961 3986 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 18 08:46:19.529137 master-0 kubenswrapper[3986]: I0318 08:46:19.529011 3986 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 18 08:46:19.529137 master-0 kubenswrapper[3986]: I0318 08:46:19.529029 3986 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 18 08:46:19.532276 master-0 kubenswrapper[3986]: I0318 08:46:19.532241 3986 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 18 08:46:19.533018 master-0 kubenswrapper[3986]: I0318 08:46:19.532982 3986 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 18 08:46:19.533018 master-0 kubenswrapper[3986]: I0318 08:46:19.533018 3986 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 18 08:46:19.533123 master-0 kubenswrapper[3986]: I0318 08:46:19.533030 3986 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 18 08:46:20.532554 master-0 kubenswrapper[3986]: I0318 08:46:20.532253 3986 kubelet.go:2453] "SyncLoop (PLEG): event for pod"
pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"46f265536aba6292ead501bc9b49f327","Type":"ContainerStarted","Data":"a7d9a0f0d0d5483aab47d6b6dab06e78206dae99e396716861bbffb1ded6479d"} Mar 18 08:46:20.532554 master-0 kubenswrapper[3986]: I0318 08:46:20.532358 3986 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 18 08:46:20.533326 master-0 kubenswrapper[3986]: I0318 08:46:20.533283 3986 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 18 08:46:20.533326 master-0 kubenswrapper[3986]: I0318 08:46:20.533311 3986 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 18 08:46:20.533326 master-0 kubenswrapper[3986]: I0318 08:46:20.533321 3986 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 18 08:46:20.533733 master-0 kubenswrapper[3986]: I0318 08:46:20.533704 3986 scope.go:117] "RemoveContainer" containerID="cae6edc05ec437bf1216d8818e262c95bff15d2f9aa2f76f2a55bc0b5ab23801" Mar 18 08:46:20.535930 master-0 kubenswrapper[3986]: I0318 08:46:20.535906 3986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" event={"ID":"49fac1b46a11e49501805e891baae4a9","Type":"ContainerStarted","Data":"5ec3e7108eee8c08ca66f6f618d1955dea098f10f4832f7e925bd7f46bce001f"} Mar 18 08:46:21.257445 master-0 kubenswrapper[3986]: I0318 08:46:21.257382 3986 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 18 08:46:21.541284 master-0 kubenswrapper[3986]: I0318 08:46:21.541124 3986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" 
event={"ID":"46f265536aba6292ead501bc9b49f327","Type":"ContainerStarted","Data":"6be6b0de4a5d0386d8a94651962cc0001d3124e6eb513e3b68435d030ea24841"} Mar 18 08:46:21.541284 master-0 kubenswrapper[3986]: I0318 08:46:21.541261 3986 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 18 08:46:21.542330 master-0 kubenswrapper[3986]: I0318 08:46:21.542301 3986 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 18 08:46:21.542387 master-0 kubenswrapper[3986]: I0318 08:46:21.542343 3986 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 18 08:46:21.542387 master-0 kubenswrapper[3986]: I0318 08:46:21.542356 3986 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 18 08:46:21.899922 master-0 kubenswrapper[3986]: E0318 08:46:21.891899 3986 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"master-0\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Mar 18 08:46:22.012665 master-0 kubenswrapper[3986]: I0318 08:46:22.012575 3986 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 18 08:46:22.017354 master-0 kubenswrapper[3986]: I0318 08:46:22.017317 3986 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 18 08:46:22.111934 master-0 kubenswrapper[3986]: I0318 08:46:22.111014 3986 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 18 08:46:22.115994 master-0 kubenswrapper[3986]: I0318 08:46:22.115319 3986 kubelet_node_status.go:724] "Recording event message for node" node="master-0" 
event="NodeHasSufficientMemory" Mar 18 08:46:22.115994 master-0 kubenswrapper[3986]: I0318 08:46:22.115369 3986 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 18 08:46:22.115994 master-0 kubenswrapper[3986]: I0318 08:46:22.115381 3986 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 18 08:46:22.115994 master-0 kubenswrapper[3986]: I0318 08:46:22.115443 3986 kubelet_node_status.go:76] "Attempting to register node" node="master-0" Mar 18 08:46:22.124668 master-0 kubenswrapper[3986]: E0318 08:46:22.124630 3986 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="master-0" Mar 18 08:46:22.279896 master-0 kubenswrapper[3986]: I0318 08:46:22.279468 3986 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 18 08:46:22.543889 master-0 kubenswrapper[3986]: I0318 08:46:22.543819 3986 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 18 08:46:22.544588 master-0 kubenswrapper[3986]: I0318 08:46:22.543912 3986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 18 08:46:22.544727 master-0 kubenswrapper[3986]: I0318 08:46:22.544685 3986 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 18 08:46:22.544727 master-0 kubenswrapper[3986]: I0318 08:46:22.544725 3986 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 18 08:46:22.544799 master-0 kubenswrapper[3986]: I0318 
08:46:22.544738 3986 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 18 08:46:23.240558 master-0 kubenswrapper[3986]: I0318 08:46:23.240454 3986 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 18 08:46:23.550531 master-0 kubenswrapper[3986]: I0318 08:46:23.550289 3986 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 18 08:46:23.550531 master-0 kubenswrapper[3986]: I0318 08:46:23.550289 3986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" event={"ID":"49fac1b46a11e49501805e891baae4a9","Type":"ContainerStarted","Data":"b0564925d47f5840821e3c795a9cfcae45b42d4975ada3f3aedc3639ab59cfb5"} Mar 18 08:46:23.550531 master-0 kubenswrapper[3986]: I0318 08:46:23.550318 3986 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 18 08:46:23.551679 master-0 kubenswrapper[3986]: I0318 08:46:23.551624 3986 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 18 08:46:23.551679 master-0 kubenswrapper[3986]: I0318 08:46:23.551657 3986 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 18 08:46:23.551679 master-0 kubenswrapper[3986]: I0318 08:46:23.551678 3986 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 18 08:46:23.551816 master-0 kubenswrapper[3986]: I0318 08:46:23.551692 3986 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 18 08:46:23.551816 master-0 kubenswrapper[3986]: I0318 08:46:23.551712 3986 
kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 18 08:46:23.551816 master-0 kubenswrapper[3986]: I0318 08:46:23.551696 3986 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 18 08:46:23.666438 master-0 kubenswrapper[3986]: I0318 08:46:23.666353 3986 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Mar 18 08:46:23.689521 master-0 kubenswrapper[3986]: I0318 08:46:23.689421 3986 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Mar 18 08:46:23.942511 master-0 kubenswrapper[3986]: I0318 08:46:23.942197 3986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 18 08:46:24.241398 master-0 kubenswrapper[3986]: I0318 08:46:24.241269 3986 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 18 08:46:24.553542 master-0 kubenswrapper[3986]: I0318 08:46:24.553368 3986 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 18 08:46:24.554827 master-0 kubenswrapper[3986]: I0318 08:46:24.554758 3986 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 18 08:46:24.554986 master-0 kubenswrapper[3986]: I0318 08:46:24.554831 3986 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 18 08:46:24.554986 master-0 kubenswrapper[3986]: I0318 08:46:24.554890 3986 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 18 08:46:24.637094 master-0 
kubenswrapper[3986]: W0318 08:46:24.637006 3986 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "master-0" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Mar 18 08:46:24.637094 master-0 kubenswrapper[3986]: E0318 08:46:24.637069 3986 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes \"master-0\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" Mar 18 08:46:24.709496 master-0 kubenswrapper[3986]: W0318 08:46:24.709366 3986 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Mar 18 08:46:24.709496 master-0 kubenswrapper[3986]: E0318 08:46:24.709465 3986 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" Mar 18 08:46:25.240441 master-0 kubenswrapper[3986]: I0318 08:46:25.240389 3986 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 18 08:46:25.556301 master-0 kubenswrapper[3986]: I0318 08:46:25.556121 3986 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 18 08:46:25.557422 master-0 kubenswrapper[3986]: I0318 08:46:25.557370 3986 kubelet_node_status.go:724] "Recording event message for 
node" node="master-0" event="NodeHasSufficientMemory" Mar 18 08:46:25.557595 master-0 kubenswrapper[3986]: I0318 08:46:25.557454 3986 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 18 08:46:25.557595 master-0 kubenswrapper[3986]: I0318 08:46:25.557478 3986 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 18 08:46:25.676173 master-0 kubenswrapper[3986]: I0318 08:46:25.676076 3986 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 18 08:46:25.676415 master-0 kubenswrapper[3986]: I0318 08:46:25.676301 3986 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 18 08:46:25.677914 master-0 kubenswrapper[3986]: I0318 08:46:25.677843 3986 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 18 08:46:25.678310 master-0 kubenswrapper[3986]: I0318 08:46:25.678280 3986 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 18 08:46:25.678507 master-0 kubenswrapper[3986]: I0318 08:46:25.678482 3986 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 18 08:46:25.682623 master-0 kubenswrapper[3986]: I0318 08:46:25.682588 3986 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 18 08:46:26.241202 master-0 kubenswrapper[3986]: I0318 08:46:26.241138 3986 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 18 08:46:26.559276 master-0 kubenswrapper[3986]: I0318 08:46:26.559100 3986 
kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 18 08:46:26.559276 master-0 kubenswrapper[3986]: I0318 08:46:26.559225 3986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 18 08:46:26.560425 master-0 kubenswrapper[3986]: I0318 08:46:26.560378 3986 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 18 08:46:26.560425 master-0 kubenswrapper[3986]: I0318 08:46:26.560419 3986 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 18 08:46:26.560557 master-0 kubenswrapper[3986]: I0318 08:46:26.560435 3986 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 18 08:46:27.240973 master-0 kubenswrapper[3986]: I0318 08:46:27.240849 3986 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 18 08:46:27.427436 master-0 kubenswrapper[3986]: I0318 08:46:27.427364 3986 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 18 08:46:27.430074 master-0 kubenswrapper[3986]: I0318 08:46:27.429347 3986 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 18 08:46:27.430074 master-0 kubenswrapper[3986]: I0318 08:46:27.429377 3986 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 18 08:46:27.430074 master-0 kubenswrapper[3986]: I0318 08:46:27.429388 3986 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 18 08:46:27.430074 master-0 kubenswrapper[3986]: I0318 
08:46:27.429673 3986 scope.go:117] "RemoveContainer" containerID="4c9ec8f8a444aa56ff69c03f198037bf62f78ba3be4083610cf5e4f1eb191713" Mar 18 08:46:27.435395 master-0 kubenswrapper[3986]: E0318 08:46:27.434994 3986 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189de3235a22b7cd default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 08:46:09.229428685 +0000 UTC m=+0.636598807,LastTimestamp:2026-03-18 08:46:09.229428685 +0000 UTC m=+0.636598807,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 08:46:27.466712 master-0 kubenswrapper[3986]: E0318 08:46:27.466250 3986 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189de3235e46aeca default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node master-0 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 08:46:09.298894538 +0000 UTC m=+0.706064701,LastTimestamp:2026-03-18 08:46:09.298894538 +0000 UTC m=+0.706064701,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 08:46:27.479286 master-0 
kubenswrapper[3986]: E0318 08:46:27.479044 3986 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189de3235e478680 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node master-0 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 08:46:09.29894976 +0000 UTC m=+0.706119882,LastTimestamp:2026-03-18 08:46:09.29894976 +0000 UTC m=+0.706119882,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 08:46:27.487022 master-0 kubenswrapper[3986]: E0318 08:46:27.486841 3986 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189de3235e47efb8 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node master-0 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 08:46:09.298976696 +0000 UTC m=+0.706146828,LastTimestamp:2026-03-18 08:46:09.298976696 +0000 UTC m=+0.706146828,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 08:46:27.494052 master-0 kubenswrapper[3986]: E0318 08:46:27.493789 3986 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User 
\"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189de323635d891f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeAllocatableEnforced,Message:Updated Node Allocatable limit across pods,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 08:46:09.384278303 +0000 UTC m=+0.791448395,LastTimestamp:2026-03-18 08:46:09.384278303 +0000 UTC m=+0.791448395,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 08:46:27.499534 master-0 kubenswrapper[3986]: E0318 08:46:27.499396 3986 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189de3235e46aeca\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189de3235e46aeca default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node master-0 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 08:46:09.298894538 +0000 UTC m=+0.706064701,LastTimestamp:2026-03-18 08:46:09.481073562 +0000 UTC m=+0.888243674,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 08:46:27.504080 master-0 kubenswrapper[3986]: E0318 08:46:27.503882 3986 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189de3235e478680\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in 
the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189de3235e478680 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node master-0 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 08:46:09.29894976 +0000 UTC m=+0.706119882,LastTimestamp:2026-03-18 08:46:09.481110543 +0000 UTC m=+0.888280635,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 08:46:27.508935 master-0 kubenswrapper[3986]: E0318 08:46:27.508802 3986 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189de3235e47efb8\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189de3235e47efb8 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node master-0 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 08:46:09.298976696 +0000 UTC m=+0.706146828,LastTimestamp:2026-03-18 08:46:09.481122653 +0000 UTC m=+0.888292745,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 08:46:27.513618 master-0 kubenswrapper[3986]: E0318 08:46:27.513490 3986 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189de3235e46aeca\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" 
event="&Event{ObjectMeta:{master-0.189de3235e46aeca default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node master-0 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 08:46:09.298894538 +0000 UTC m=+0.706064701,LastTimestamp:2026-03-18 08:46:09.529048903 +0000 UTC m=+0.936218995,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 08:46:27.517995 master-0 kubenswrapper[3986]: E0318 08:46:27.517909 3986 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189de3235e478680\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189de3235e478680 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node master-0 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 08:46:09.29894976 +0000 UTC m=+0.706119882,LastTimestamp:2026-03-18 08:46:09.529082614 +0000 UTC m=+0.936252706,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 08:46:27.531590 master-0 kubenswrapper[3986]: E0318 08:46:27.531459 3986 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189de3235e47efb8\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189de3235e47efb8 default 0 
0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node master-0 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 08:46:09.298976696 +0000 UTC m=+0.706146828,LastTimestamp:2026-03-18 08:46:09.529093985 +0000 UTC m=+0.936264077,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 08:46:27.536695 master-0 kubenswrapper[3986]: E0318 08:46:27.536597 3986 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189de3235e46aeca\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189de3235e46aeca default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node master-0 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 08:46:09.298894538 +0000 UTC m=+0.706064701,LastTimestamp:2026-03-18 08:46:09.530164187 +0000 UTC m=+0.937334279,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 08:46:27.541874 master-0 kubenswrapper[3986]: E0318 08:46:27.541763 3986 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189de3235e478680\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189de3235e478680 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node master-0 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 08:46:09.29894976 +0000 UTC m=+0.706119882,LastTimestamp:2026-03-18 08:46:09.530176557 +0000 UTC m=+0.937346649,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 08:46:27.546658 master-0 kubenswrapper[3986]: E0318 08:46:27.546508 3986 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189de3235e47efb8\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189de3235e47efb8 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node master-0 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 08:46:09.298976696 +0000 UTC m=+0.706146828,LastTimestamp:2026-03-18 08:46:09.530188107 +0000 UTC m=+0.937358199,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 08:46:27.553661 master-0 kubenswrapper[3986]: E0318 08:46:27.553498 3986 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189de3235e46aeca\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189de3235e46aeca default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node master-0 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 08:46:09.298894538 +0000 UTC m=+0.706064701,LastTimestamp:2026-03-18 08:46:09.530760715 +0000 UTC m=+0.937930807,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 08:46:27.558142 master-0 kubenswrapper[3986]: E0318 08:46:27.558046 3986 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189de3235e478680\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189de3235e478680 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node master-0 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 08:46:09.29894976 +0000 UTC m=+0.706119882,LastTimestamp:2026-03-18 08:46:09.530822907 +0000 UTC m=+0.937992999,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 08:46:27.561221 master-0 kubenswrapper[3986]: I0318 08:46:27.561188 3986 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 18 08:46:27.561974 master-0 kubenswrapper[3986]: E0318 08:46:27.561907 3986 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189de3235e47efb8\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" 
event="&Event{ObjectMeta:{master-0.189de3235e47efb8 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node master-0 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 08:46:09.298976696 +0000 UTC m=+0.706146828,LastTimestamp:2026-03-18 08:46:09.530909069 +0000 UTC m=+0.938079161,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 08:46:27.564236 master-0 kubenswrapper[3986]: I0318 08:46:27.564202 3986 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 18 08:46:27.564236 master-0 kubenswrapper[3986]: I0318 08:46:27.564236 3986 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 18 08:46:27.564354 master-0 kubenswrapper[3986]: I0318 08:46:27.564250 3986 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 18 08:46:27.567618 master-0 kubenswrapper[3986]: E0318 08:46:27.567455 3986 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189de3235e46aeca\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189de3235e46aeca default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node master-0 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 08:46:09.298894538 +0000 UTC 
m=+0.706064701,LastTimestamp:2026-03-18 08:46:09.531165197 +0000 UTC m=+0.938335319,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 08:46:27.573175 master-0 kubenswrapper[3986]: E0318 08:46:27.573040 3986 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189de3235e478680\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189de3235e478680 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node master-0 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 08:46:09.29894976 +0000 UTC m=+0.706119882,LastTimestamp:2026-03-18 08:46:09.531189588 +0000 UTC m=+0.938359700,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 08:46:27.579284 master-0 kubenswrapper[3986]: E0318 08:46:27.579173 3986 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189de3235e47efb8\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189de3235e47efb8 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node master-0 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 08:46:09.298976696 +0000 UTC m=+0.706146828,LastTimestamp:2026-03-18 08:46:09.531211758 +0000 UTC 
m=+0.938381880,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 08:46:27.586210 master-0 kubenswrapper[3986]: E0318 08:46:27.585344 3986 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189de3235e46aeca\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189de3235e46aeca default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node master-0 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 08:46:09.298894538 +0000 UTC m=+0.706064701,LastTimestamp:2026-03-18 08:46:09.531670072 +0000 UTC m=+0.938840164,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 08:46:27.591498 master-0 kubenswrapper[3986]: E0318 08:46:27.591382 3986 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189de3235e478680\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189de3235e478680 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node master-0 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 08:46:09.29894976 +0000 UTC m=+0.706119882,LastTimestamp:2026-03-18 08:46:09.531683072 +0000 UTC m=+0.938853164,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 08:46:27.597191 master-0 kubenswrapper[3986]: E0318 08:46:27.597067 3986 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189de3235e47efb8\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189de3235e47efb8 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node master-0 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 08:46:09.298976696 +0000 UTC m=+0.706146828,LastTimestamp:2026-03-18 08:46:09.531775115 +0000 UTC m=+0.938945217,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 08:46:27.603709 master-0 kubenswrapper[3986]: E0318 08:46:27.603504 3986 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189de3235e46aeca\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189de3235e46aeca default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node master-0 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 08:46:09.298894538 +0000 UTC m=+0.706064701,LastTimestamp:2026-03-18 08:46:09.532971921 +0000 UTC m=+0.940142043,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 08:46:27.609072 master-0 kubenswrapper[3986]: E0318 08:46:27.608928 3986 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189de3235e478680\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189de3235e478680 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node master-0 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 08:46:09.29894976 +0000 UTC m=+0.706119882,LastTimestamp:2026-03-18 08:46:09.533003732 +0000 UTC m=+0.940173844,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 08:46:27.615170 master-0 kubenswrapper[3986]: E0318 08:46:27.615028 3986 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-master-0-master-0.189de323ab80fa85 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-master-0-master-0,UID:d664a6d0d2a24360dee10612610f1b59,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Pulling,Message:Pulling image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e29dc9f042f2d0471171a0611070886cb2f7c57338ab7f112613417bcd33b278\",Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 08:46:10.594560645 +0000 UTC m=+2.001730767,LastTimestamp:2026-03-18 08:46:10.594560645 +0000 
UTC m=+2.001730767,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 08:46:27.619639 master-0 kubenswrapper[3986]: E0318 08:46:27.619472 3986 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.189de323ab973bc3 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:49fac1b46a11e49501805e891baae4a9,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulling,Message:Pulling image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b23c544d3894e5b31f66a18c554f03b0d29f92c2000c46b57b1c96da7ec25db9\",Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 08:46:10.596019139 +0000 UTC m=+2.003189251,LastTimestamp:2026-03-18 08:46:10.596019139 +0000 UTC m=+2.003189251,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 08:46:27.629020 master-0 kubenswrapper[3986]: E0318 08:46:27.628800 3986 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.189de323addbfb24 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:46f265536aba6292ead501bc9b49f327,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Pulling,Message:Pulling image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b23c544d3894e5b31f66a18c554f03b0d29f92c2000c46b57b1c96da7ec25db9\",Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 08:46:10.634079012 +0000 UTC m=+2.041249134,LastTimestamp:2026-03-18 08:46:10.634079012 +0000 UTC m=+2.041249134,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 08:46:27.634762 master-0 kubenswrapper[3986]: E0318 08:46:27.634547 3986 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189de323b01d1695 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:1249822f86f23526277d165c0d5d3c19,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulling,Message:Pulling image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d12d0dc7eb86bbedf6b2d7689a28fd51f0d928f720e4a6783744304297c661ed\",Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 08:46:10.671900309 +0000 UTC m=+2.079070431,LastTimestamp:2026-03-18 08:46:10.671900309 +0000 UTC m=+2.079070431,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 08:46:27.640949 master-0 kubenswrapper[3986]: E0318 08:46:27.640815 3986 
event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-scheduler-master-0.189de323b362a4ea kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-scheduler-master-0,UID:c83737980b9ee109184b1d78e942cf36,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Pulling,Message:Pulling image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b23c544d3894e5b31f66a18c554f03b0d29f92c2000c46b57b1c96da7ec25db9\",Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 08:46:10.726790378 +0000 UTC m=+2.133960500,LastTimestamp:2026-03-18 08:46:10.726790378 +0000 UTC m=+2.133960500,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 08:46:27.655662 master-0 kubenswrapper[3986]: E0318 08:46:27.655491 3986 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189de3240bb68399 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:1249822f86f23526277d165c0d5d3c19,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Successfully pulled image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d12d0dc7eb86bbedf6b2d7689a28fd51f0d928f720e4a6783744304297c661ed\" in 1.536s (1.536s including waiting). 
Image size: 465090934 bytes.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 08:46:12.208681881 +0000 UTC m=+3.615851973,LastTimestamp:2026-03-18 08:46:12.208681881 +0000 UTC m=+3.615851973,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 08:46:27.664265 master-0 kubenswrapper[3986]: E0318 08:46:27.664141 3986 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189de32417a6528a openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:1249822f86f23526277d165c0d5d3c19,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container: setup,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 08:46:12.408947338 +0000 UTC m=+3.816117420,LastTimestamp:2026-03-18 08:46:12.408947338 +0000 UTC m=+3.816117420,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 08:46:27.671490 master-0 kubenswrapper[3986]: E0318 08:46:27.671288 3986 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189de3241895e2fe openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:1249822f86f23526277d165c0d5d3c19,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 08:46:12.424647422 +0000 UTC m=+3.831817504,LastTimestamp:2026-03-18 08:46:12.424647422 +0000 UTC m=+3.831817504,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 08:46:27.677069 master-0 kubenswrapper[3986]: E0318 08:46:27.676965 3986 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-master-0-master-0.189de3244cce05fd openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-master-0-master-0,UID:d664a6d0d2a24360dee10612610f1b59,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Pulled,Message:Successfully pulled image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e29dc9f042f2d0471171a0611070886cb2f7c57338ab7f112613417bcd33b278\" in 2.706s (2.706s including waiting). 
Image size: 529326739 bytes.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 08:46:13.300741629 +0000 UTC m=+4.707911711,LastTimestamp:2026-03-18 08:46:13.300741629 +0000 UTC m=+4.707911711,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 08:46:27.694553 master-0 kubenswrapper[3986]: E0318 08:46:27.694322 3986 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189de32455d7d24a openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:1249822f86f23526277d165c0d5d3c19,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d12d0dc7eb86bbedf6b2d7689a28fd51f0d928f720e4a6783744304297c661ed\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 08:46:13.452378698 +0000 UTC m=+4.859548820,LastTimestamp:2026-03-18 08:46:13.452378698 +0000 UTC m=+4.859548820,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 08:46:27.699212 master-0 kubenswrapper[3986]: E0318 08:46:27.699021 3986 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-master-0-master-0.189de3245664c43a openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] 
[] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-master-0-master-0,UID:d664a6d0d2a24360dee10612610f1b59,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Created,Message:Created container: etcdctl,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 08:46:13.461615674 +0000 UTC m=+4.868785756,LastTimestamp:2026-03-18 08:46:13.461615674 +0000 UTC m=+4.868785756,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 08:46:27.703379 master-0 kubenswrapper[3986]: E0318 08:46:27.703269 3986 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-master-0-master-0.189de32457464d99 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-master-0-master-0,UID:d664a6d0d2a24360dee10612610f1b59,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Started,Message:Started container etcdctl,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 08:46:13.476396441 +0000 UTC m=+4.883566523,LastTimestamp:2026-03-18 08:46:13.476396441 +0000 UTC m=+4.883566523,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 08:46:27.708395 master-0 kubenswrapper[3986]: E0318 08:46:27.708259 3986 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-master-0-master-0.189de324577daa64 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] 
[] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-master-0-master-0,UID:d664a6d0d2a24360dee10612610f1b59,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e29dc9f042f2d0471171a0611070886cb2f7c57338ab7f112613417bcd33b278\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 08:46:13.480024676 +0000 UTC m=+4.887194758,LastTimestamp:2026-03-18 08:46:13.480024676 +0000 UTC m=+4.887194758,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 08:46:27.713445 master-0 kubenswrapper[3986]: E0318 08:46:27.713308 3986 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189de32460d24f8e openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:1249822f86f23526277d165c0d5d3c19,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Created,Message:Created container: kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 08:46:13.636566926 +0000 UTC m=+5.043737008,LastTimestamp:2026-03-18 08:46:13.636566926 +0000 UTC m=+5.043737008,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 08:46:27.718945 master-0 kubenswrapper[3986]: E0318 08:46:27.718770 3986 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User 
\"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189de32461b2b71d openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:1249822f86f23526277d165c0d5d3c19,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Started,Message:Started container kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 08:46:13.651273501 +0000 UTC m=+5.058443583,LastTimestamp:2026-03-18 08:46:13.651273501 +0000 UTC m=+5.058443583,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 08:46:27.725007 master-0 kubenswrapper[3986]: E0318 08:46:27.724835 3986 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-master-0-master-0.189de32462712185 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-master-0-master-0,UID:d664a6d0d2a24360dee10612610f1b59,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Created,Message:Created container: etcd,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 08:46:13.663752581 +0000 UTC m=+5.070922663,LastTimestamp:2026-03-18 08:46:13.663752581 +0000 UTC m=+5.070922663,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 08:46:27.737614 master-0 kubenswrapper[3986]: E0318 08:46:27.737436 
3986 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-master-0-master-0.189de324638c27cd openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-master-0-master-0,UID:d664a6d0d2a24360dee10612610f1b59,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Started,Message:Started container etcd,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 08:46:13.682300877 +0000 UTC m=+5.089470959,LastTimestamp:2026-03-18 08:46:13.682300877 +0000 UTC m=+5.089470959,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 08:46:27.744421 master-0 kubenswrapper[3986]: E0318 08:46:27.744244 3986 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-master-0.189de32455d7d24a\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189de32455d7d24a openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:1249822f86f23526277d165c0d5d3c19,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d12d0dc7eb86bbedf6b2d7689a28fd51f0d928f720e4a6783744304297c661ed\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 08:46:13.452378698 +0000 UTC m=+4.859548820,LastTimestamp:2026-03-18 
08:46:14.504306521 +0000 UTC m=+5.911476603,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 08:46:27.746424 master-0 kubenswrapper[3986]: I0318 08:46:27.746395 3986 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 18 08:46:27.746589 master-0 kubenswrapper[3986]: I0318 08:46:27.746566 3986 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 18 08:46:27.748127 master-0 kubenswrapper[3986]: I0318 08:46:27.748082 3986 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 18 08:46:27.748175 master-0 kubenswrapper[3986]: I0318 08:46:27.748136 3986 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 18 08:46:27.748175 master-0 kubenswrapper[3986]: I0318 08:46:27.748151 3986 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 18 08:46:27.750877 master-0 kubenswrapper[3986]: E0318 08:46:27.750727 3986 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-master-0.189de32460d24f8e\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189de32460d24f8e openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:1249822f86f23526277d165c0d5d3c19,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Created,Message:Created container: 
kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 08:46:13.636566926 +0000 UTC m=+5.043737008,LastTimestamp:2026-03-18 08:46:14.760101307 +0000 UTC m=+6.167271389,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 08:46:27.751164 master-0 kubenswrapper[3986]: I0318 08:46:27.751130 3986 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 18 08:46:27.756565 master-0 kubenswrapper[3986]: E0318 08:46:27.756395 3986 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-master-0.189de32461b2b71d\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189de32461b2b71d openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:1249822f86f23526277d165c0d5d3c19,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Started,Message:Started container kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 08:46:13.651273501 +0000 UTC m=+5.058443583,LastTimestamp:2026-03-18 08:46:14.771034472 +0000 UTC m=+6.178204554,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 08:46:27.762023 master-0 kubenswrapper[3986]: E0318 08:46:27.761925 3986 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace 
\"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189de324d04dac9a openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:1249822f86f23526277d165c0d5d3c19,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:BackOff,Message:Back-off restarting failed container kube-rbac-proxy-crio in pod kube-rbac-proxy-crio-master-0_openshift-machine-config-operator(1249822f86f23526277d165c0d5d3c19),Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 08:46:15.50692265 +0000 UTC m=+6.914092732,LastTimestamp:2026-03-18 08:46:15.50692265 +0000 UTC m=+6.914092732,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 08:46:27.768363 master-0 kubenswrapper[3986]: E0318 08:46:27.768203 3986 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-master-0.189de324d04dac9a\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189de324d04dac9a openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:1249822f86f23526277d165c0d5d3c19,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:BackOff,Message:Back-off restarting failed container kube-rbac-proxy-crio in pod kube-rbac-proxy-crio-master-0_openshift-machine-config-operator(1249822f86f23526277d165c0d5d3c19),Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 08:46:15.50692265 +0000 UTC 
m=+6.914092732,LastTimestamp:2026-03-18 08:46:16.516556471 +0000 UTC m=+7.923726563,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 08:46:27.776068 master-0 kubenswrapper[3986]: E0318 08:46:27.775978 3986 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.189de32570fb7cdc kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:46f265536aba6292ead501bc9b49f327,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Pulled,Message:Successfully pulled image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b23c544d3894e5b31f66a18c554f03b0d29f92c2000c46b57b1c96da7ec25db9\" in 7.568s (7.568s including waiting). 
Image size: 943841779 bytes.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 08:46:18.202668252 +0000 UTC m=+9.609838374,LastTimestamp:2026-03-18 08:46:18.202668252 +0000 UTC m=+9.609838374,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 08:46:27.779399 master-0 kubenswrapper[3986]: E0318 08:46:27.779324 3986 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-scheduler-master-0.189de32572509711 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-scheduler-master-0,UID:c83737980b9ee109184b1d78e942cf36,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Pulled,Message:Successfully pulled image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b23c544d3894e5b31f66a18c554f03b0d29f92c2000c46b57b1c96da7ec25db9\" in 7.498s (7.498s including waiting). 
Image size: 943841779 bytes.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 08:46:18.225022737 +0000 UTC m=+9.632192849,LastTimestamp:2026-03-18 08:46:18.225022737 +0000 UTC m=+9.632192849,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 08:46:27.782836 master-0 kubenswrapper[3986]: E0318 08:46:27.782716 3986 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.189de325775e4d62 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:49fac1b46a11e49501805e891baae4a9,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Successfully pulled image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b23c544d3894e5b31f66a18c554f03b0d29f92c2000c46b57b1c96da7ec25db9\" in 7.713s (7.713s including waiting). 
Image size: 943841779 bytes.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 08:46:18.309807458 +0000 UTC m=+9.716977540,LastTimestamp:2026-03-18 08:46:18.309807458 +0000 UTC m=+9.716977540,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 08:46:27.787626 master-0 kubenswrapper[3986]: E0318 08:46:27.787486 3986 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-scheduler-master-0.189de3257ef4b71c kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-scheduler-master-0,UID:c83737980b9ee109184b1d78e942cf36,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Created,Message:Created container: kube-scheduler,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 08:46:18.437105436 +0000 UTC m=+9.844275508,LastTimestamp:2026-03-18 08:46:18.437105436 +0000 UTC m=+9.844275508,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 08:46:27.792890 master-0 kubenswrapper[3986]: E0318 08:46:27.792784 3986 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.189de3257ef4f1e6 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:46f265536aba6292ead501bc9b49f327,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Created,Message:Created container: kube-controller-manager,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 08:46:18.437120486 +0000 UTC m=+9.844290578,LastTimestamp:2026-03-18 08:46:18.437120486 +0000 UTC m=+9.844290578,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 08:46:27.797261 master-0 kubenswrapper[3986]: E0318 08:46:27.796995 3986 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-scheduler-master-0.189de3257f8a4066 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-scheduler-master-0,UID:c83737980b9ee109184b1d78e942cf36,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Started,Message:Started container kube-scheduler,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 08:46:18.446905446 +0000 UTC m=+9.854075528,LastTimestamp:2026-03-18 08:46:18.446905446 +0000 UTC m=+9.854075528,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 08:46:27.803930 master-0 kubenswrapper[3986]: E0318 08:46:27.803764 3986 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" 
event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.189de3257f9cb3c1 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:46f265536aba6292ead501bc9b49f327,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Started,Message:Started container kube-controller-manager,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 08:46:18.448114625 +0000 UTC m=+9.855284707,LastTimestamp:2026-03-18 08:46:18.448114625 +0000 UTC m=+9.855284707,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 08:46:27.808934 master-0 kubenswrapper[3986]: E0318 08:46:27.808702 3986 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.189de3257fac1ddc kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:46f265536aba6292ead501bc9b49f327,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Pulling,Message:Pulling image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fbbcb390de2563a0177b92fba1b5a65777366e2dc80e2808b61d87c41b47a2d\",Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 08:46:18.449124828 +0000 UTC m=+9.856294910,LastTimestamp:2026-03-18 08:46:18.449124828 +0000 UTC m=+9.856294910,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 08:46:27.816439 master-0 
kubenswrapper[3986]: E0318 08:46:27.816155 3986 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.189de32583f36d92 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:49fac1b46a11e49501805e891baae4a9,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container: setup,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 08:46:18.520907154 +0000 UTC m=+9.928077256,LastTimestamp:2026-03-18 08:46:18.520907154 +0000 UTC m=+9.928077256,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 08:46:27.823775 master-0 kubenswrapper[3986]: E0318 08:46:27.823553 3986 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.189de32584b93490 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:49fac1b46a11e49501805e891baae4a9,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 08:46:18.533868688 +0000 UTC m=+9.941038780,LastTimestamp:2026-03-18 08:46:18.533868688 +0000 UTC m=+9.941038780,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 08:46:27.829647 master-0 kubenswrapper[3986]: E0318 08:46:27.829453 3986 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.189de325c03a0329 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:49fac1b46a11e49501805e891baae4a9,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b23c544d3894e5b31f66a18c554f03b0d29f92c2000c46b57b1c96da7ec25db9\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 08:46:19.532165929 +0000 UTC m=+10.939336021,LastTimestamp:2026-03-18 08:46:19.532165929 +0000 UTC m=+10.939336021,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 08:46:27.835843 master-0 kubenswrapper[3986]: E0318 08:46:27.835676 3986 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.189de325cccb9f7c openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:49fac1b46a11e49501805e891baae4a9,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Created,Message:Created 
container: kube-apiserver,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 08:46:19.74303526 +0000 UTC m=+11.150205352,LastTimestamp:2026-03-18 08:46:19.74303526 +0000 UTC m=+11.150205352,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 08:46:27.840722 master-0 kubenswrapper[3986]: E0318 08:46:27.840560 3986 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.189de325cd9174f9 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:49fac1b46a11e49501805e891baae4a9,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Started,Message:Started container kube-apiserver,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 08:46:19.756000505 +0000 UTC m=+11.163170597,LastTimestamp:2026-03-18 08:46:19.756000505 +0000 UTC m=+11.163170597,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 08:46:27.844775 master-0 kubenswrapper[3986]: E0318 08:46:27.844651 3986 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.189de325cd9f6af6 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:49fac1b46a11e49501805e891baae4a9,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Pulling,Message:Pulling image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c5ce3d1134d6500e2b8528516c1889d7bbc6259aba4981c6983395b0e9eeff65\",Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 08:46:19.756915446 +0000 UTC m=+11.164085548,LastTimestamp:2026-03-18 08:46:19.756915446 +0000 UTC m=+11.164085548,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 08:46:27.848476 master-0 kubenswrapper[3986]: E0318 08:46:27.848322 3986 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.189de325e7de9748 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:46f265536aba6292ead501bc9b49f327,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Pulled,Message:Successfully pulled image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fbbcb390de2563a0177b92fba1b5a65777366e2dc80e2808b61d87c41b47a2d\" in 1.748s (1.748s including waiting). 
Image size: 505246690 bytes.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 08:46:20.197263176 +0000 UTC m=+11.604433258,LastTimestamp:2026-03-18 08:46:20.197263176 +0000 UTC m=+11.604433258,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 08:46:27.852383 master-0 kubenswrapper[3986]: E0318 08:46:27.852275 3986 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.189de325f5807a24 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:46f265536aba6292ead501bc9b49f327,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Created,Message:Created container: cluster-policy-controller,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 08:46:20.425976356 +0000 UTC m=+11.833146438,LastTimestamp:2026-03-18 08:46:20.425976356 +0000 UTC m=+11.833146438,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 08:46:27.856832 master-0 kubenswrapper[3986]: E0318 08:46:27.856729 3986 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.189de325f63ba2f2 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:46f265536aba6292ead501bc9b49f327,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Started,Message:Started container cluster-policy-controller,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 08:46:20.438242034 +0000 UTC m=+11.845412166,LastTimestamp:2026-03-18 08:46:20.438242034 +0000 UTC m=+11.845412166,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 08:46:27.861242 master-0 kubenswrapper[3986]: E0318 08:46:27.861157 3986 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.189de325fc1430a7 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:46f265536aba6292ead501bc9b49f327,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b23c544d3894e5b31f66a18c554f03b0d29f92c2000c46b57b1c96da7ec25db9\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 08:46:20.536320167 +0000 UTC m=+11.943490249,LastTimestamp:2026-03-18 08:46:20.536320167 +0000 UTC m=+11.943490249,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 08:46:27.865458 master-0 kubenswrapper[3986]: E0318 08:46:27.865247 3986 event.go:359] "Server rejected event (will not retry!)" err="events 
\"bootstrap-kube-controller-manager-master-0.189de3257ef4f1e6\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.189de3257ef4f1e6 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:46f265536aba6292ead501bc9b49f327,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Created,Message:Created container: kube-controller-manager,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 08:46:18.437120486 +0000 UTC m=+9.844290578,LastTimestamp:2026-03-18 08:46:20.722445627 +0000 UTC m=+12.129615709,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 08:46:27.872759 master-0 kubenswrapper[3986]: E0318 08:46:27.872597 3986 event.go:359] "Server rejected event (will not retry!)" err="events \"bootstrap-kube-controller-manager-master-0.189de3257f9cb3c1\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.189de3257f9cb3c1 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:46f265536aba6292ead501bc9b49f327,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Started,Message:Started container kube-controller-manager,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 08:46:18.448114625 +0000 UTC m=+9.855284707,LastTimestamp:2026-03-18 08:46:20.732996615 +0000 UTC m=+12.140166697,Count:2,Type:Normal,EventTime:0001-01-01 
00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 08:46:27.880475 master-0 kubenswrapper[3986]: W0318 08:46:27.880422 3986 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Mar 18 08:46:27.880549 master-0 kubenswrapper[3986]: E0318 08:46:27.880481 3986 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" Mar 18 08:46:27.880549 master-0 kubenswrapper[3986]: E0318 08:46:27.880393 3986 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.189de32676de49fe openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:49fac1b46a11e49501805e891baae4a9,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Pulled,Message:Successfully pulled image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c5ce3d1134d6500e2b8528516c1889d7bbc6259aba4981c6983395b0e9eeff65\" in 2.839s (2.839s including waiting). 
Image size: 514984269 bytes.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 08:46:22.596385278 +0000 UTC m=+14.003555400,LastTimestamp:2026-03-18 08:46:22.596385278 +0000 UTC m=+14.003555400,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 08:46:27.884995 master-0 kubenswrapper[3986]: E0318 08:46:27.884887 3986 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.189de32684225fd4 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:49fac1b46a11e49501805e891baae4a9,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Created,Message:Created container: kube-apiserver-insecure-readyz,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 08:46:22.818951124 +0000 UTC m=+14.226121236,LastTimestamp:2026-03-18 08:46:22.818951124 +0000 UTC m=+14.226121236,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 08:46:27.889940 master-0 kubenswrapper[3986]: E0318 08:46:27.888652 3986 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.189de32684d2bd17 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:49fac1b46a11e49501805e891baae4a9,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Started,Message:Started container kube-apiserver-insecure-readyz,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 08:46:22.830509335 +0000 UTC m=+14.237679447,LastTimestamp:2026-03-18 08:46:22.830509335 +0000 UTC m=+14.237679447,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 08:46:27.895728 master-0 kubenswrapper[3986]: E0318 08:46:27.895627 3986 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-master-0.189de32455d7d24a\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189de32455d7d24a openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:1249822f86f23526277d165c0d5d3c19,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d12d0dc7eb86bbedf6b2d7689a28fd51f0d928f720e4a6783744304297c661ed\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 08:46:13.452378698 +0000 UTC m=+4.859548820,LastTimestamp:2026-03-18 08:46:27.433768942 +0000 UTC m=+18.840939024,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 08:46:27.899768 master-0 kubenswrapper[3986]: E0318 
08:46:27.899675 3986 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-master-0.189de32460d24f8e\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189de32460d24f8e openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:1249822f86f23526277d165c0d5d3c19,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Created,Message:Created container: kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 08:46:13.636566926 +0000 UTC m=+5.043737008,LastTimestamp:2026-03-18 08:46:27.668812781 +0000 UTC m=+19.075982863,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 08:46:27.906476 master-0 kubenswrapper[3986]: E0318 08:46:27.906387 3986 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-master-0.189de32461b2b71d\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189de32461b2b71d openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:1249822f86f23526277d165c0d5d3c19,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Started,Message:Started container kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 08:46:13.651273501 +0000 
UTC m=+5.058443583,LastTimestamp:2026-03-18 08:46:27.687846287 +0000 UTC m=+19.095016399,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 08:46:28.239412 master-0 kubenswrapper[3986]: I0318 08:46:28.239344 3986 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 18 08:46:28.567592 master-0 kubenswrapper[3986]: I0318 08:46:28.567494 3986 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_1249822f86f23526277d165c0d5d3c19/kube-rbac-proxy-crio/2.log" Mar 18 08:46:28.568932 master-0 kubenswrapper[3986]: I0318 08:46:28.568112 3986 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_1249822f86f23526277d165c0d5d3c19/kube-rbac-proxy-crio/1.log" Mar 18 08:46:28.568932 master-0 kubenswrapper[3986]: I0318 08:46:28.568611 3986 generic.go:334] "Generic (PLEG): container finished" podID="1249822f86f23526277d165c0d5d3c19" containerID="65e224202ac926a558f67bd7907be94c9b8d61e87724e521620bd2b30bc9d0dc" exitCode=1 Mar 18 08:46:28.568932 master-0 kubenswrapper[3986]: I0318 08:46:28.568801 3986 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 18 08:46:28.569981 master-0 kubenswrapper[3986]: I0318 08:46:28.569927 3986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"1249822f86f23526277d165c0d5d3c19","Type":"ContainerDied","Data":"65e224202ac926a558f67bd7907be94c9b8d61e87724e521620bd2b30bc9d0dc"} Mar 18 08:46:28.570081 master-0 kubenswrapper[3986]: I0318 08:46:28.569994 3986 scope.go:117] "RemoveContainer" 
containerID="4c9ec8f8a444aa56ff69c03f198037bf62f78ba3be4083610cf5e4f1eb191713" Mar 18 08:46:28.570183 master-0 kubenswrapper[3986]: I0318 08:46:28.570152 3986 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 18 08:46:28.570882 master-0 kubenswrapper[3986]: I0318 08:46:28.570798 3986 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 18 08:46:28.570882 master-0 kubenswrapper[3986]: I0318 08:46:28.570872 3986 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 18 08:46:28.571053 master-0 kubenswrapper[3986]: I0318 08:46:28.570892 3986 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 18 08:46:28.571561 master-0 kubenswrapper[3986]: I0318 08:46:28.571488 3986 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 18 08:46:28.571647 master-0 kubenswrapper[3986]: I0318 08:46:28.571583 3986 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 18 08:46:28.571647 master-0 kubenswrapper[3986]: I0318 08:46:28.571616 3986 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 18 08:46:28.572657 master-0 kubenswrapper[3986]: I0318 08:46:28.572238 3986 scope.go:117] "RemoveContainer" containerID="65e224202ac926a558f67bd7907be94c9b8d61e87724e521620bd2b30bc9d0dc" Mar 18 08:46:28.572657 master-0 kubenswrapper[3986]: E0318 08:46:28.572554 3986 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy-crio\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-rbac-proxy-crio pod=kube-rbac-proxy-crio-master-0_openshift-machine-config-operator(1249822f86f23526277d165c0d5d3c19)\"" 
pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" podUID="1249822f86f23526277d165c0d5d3c19" Mar 18 08:46:28.576484 master-0 kubenswrapper[3986]: I0318 08:46:28.576412 3986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 18 08:46:28.580594 master-0 kubenswrapper[3986]: E0318 08:46:28.580380 3986 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-master-0.189de324d04dac9a\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189de324d04dac9a openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:1249822f86f23526277d165c0d5d3c19,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:BackOff,Message:Back-off restarting failed container kube-rbac-proxy-crio in pod kube-rbac-proxy-crio-master-0_openshift-machine-config-operator(1249822f86f23526277d165c0d5d3c19),Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 08:46:15.50692265 +0000 UTC m=+6.914092732,LastTimestamp:2026-03-18 08:46:28.572483589 +0000 UTC m=+19.979653721,Count:3,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 08:46:28.902133 master-0 kubenswrapper[3986]: E0318 08:46:28.901905 3986 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"master-0\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Mar 18 08:46:29.125659 master-0 kubenswrapper[3986]: 
I0318 08:46:29.125568 3986 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 18 08:46:29.127380 master-0 kubenswrapper[3986]: I0318 08:46:29.127325 3986 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 18 08:46:29.127492 master-0 kubenswrapper[3986]: I0318 08:46:29.127387 3986 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 18 08:46:29.127492 master-0 kubenswrapper[3986]: I0318 08:46:29.127409 3986 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 18 08:46:29.127601 master-0 kubenswrapper[3986]: I0318 08:46:29.127496 3986 kubelet_node_status.go:76] "Attempting to register node" node="master-0" Mar 18 08:46:29.136050 master-0 kubenswrapper[3986]: E0318 08:46:29.135964 3986 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="master-0" Mar 18 08:46:29.244218 master-0 kubenswrapper[3986]: I0318 08:46:29.244109 3986 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 18 08:46:29.306210 master-0 kubenswrapper[3986]: I0318 08:46:29.306098 3986 csr.go:261] certificate signing request csr-fhmk4 is approved, waiting to be issued Mar 18 08:46:29.346822 master-0 kubenswrapper[3986]: W0318 08:46:29.346689 3986 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Mar 18 08:46:29.346822 master-0 kubenswrapper[3986]: E0318 
08:46:29.346801 3986 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" Mar 18 08:46:29.385205 master-0 kubenswrapper[3986]: E0318 08:46:29.385127 3986 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"master-0\" not found" Mar 18 08:46:29.575135 master-0 kubenswrapper[3986]: I0318 08:46:29.574903 3986 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_1249822f86f23526277d165c0d5d3c19/kube-rbac-proxy-crio/2.log" Mar 18 08:46:29.576248 master-0 kubenswrapper[3986]: I0318 08:46:29.576072 3986 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 18 08:46:29.577430 master-0 kubenswrapper[3986]: I0318 08:46:29.577369 3986 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 18 08:46:29.577430 master-0 kubenswrapper[3986]: I0318 08:46:29.577430 3986 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 18 08:46:29.577645 master-0 kubenswrapper[3986]: I0318 08:46:29.577450 3986 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 18 08:46:30.242938 master-0 kubenswrapper[3986]: I0318 08:46:30.242819 3986 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 18 08:46:31.242664 master-0 kubenswrapper[3986]: I0318 08:46:31.242487 3986 csi_plugin.go:884] Failed to 
contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 18 08:46:32.241166 master-0 kubenswrapper[3986]: I0318 08:46:32.241033 3986 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 18 08:46:33.240520 master-0 kubenswrapper[3986]: I0318 08:46:33.240443 3986 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 18 08:46:34.242027 master-0 kubenswrapper[3986]: I0318 08:46:34.241934 3986 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 18 08:46:35.241591 master-0 kubenswrapper[3986]: I0318 08:46:35.241446 3986 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 18 08:46:35.911030 master-0 kubenswrapper[3986]: E0318 08:46:35.910917 3986 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"master-0\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Mar 18 08:46:36.035369 master-0 kubenswrapper[3986]: I0318 08:46:36.035256 3986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 18 08:46:36.035655 master-0 kubenswrapper[3986]: I0318 08:46:36.035476 3986 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 18 08:46:36.037080 master-0 kubenswrapper[3986]: I0318 08:46:36.036964 3986 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 18 08:46:36.037080 master-0 kubenswrapper[3986]: I0318 08:46:36.037044 3986 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 18 08:46:36.037080 master-0 kubenswrapper[3986]: I0318 08:46:36.037067 3986 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 18 08:46:36.041722 master-0 kubenswrapper[3986]: I0318 08:46:36.041665 3986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 18 08:46:36.136971 master-0 kubenswrapper[3986]: I0318 08:46:36.136823 3986 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 18 08:46:36.138793 master-0 kubenswrapper[3986]: I0318 08:46:36.138695 3986 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 18 08:46:36.138793 master-0 kubenswrapper[3986]: I0318 08:46:36.138750 3986 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 18 08:46:36.138793 master-0 kubenswrapper[3986]: I0318 08:46:36.138767 3986 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 18 08:46:36.139252 master-0 kubenswrapper[3986]: I0318 08:46:36.138845 3986 kubelet_node_status.go:76] "Attempting to register node" node="master-0" Mar 18 08:46:36.147994 master-0 kubenswrapper[3986]: E0318 08:46:36.147916 3986 
kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="master-0" Mar 18 08:46:36.239998 master-0 kubenswrapper[3986]: I0318 08:46:36.239945 3986 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 18 08:46:36.594935 master-0 kubenswrapper[3986]: I0318 08:46:36.594760 3986 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 18 08:46:36.595921 master-0 kubenswrapper[3986]: I0318 08:46:36.595834 3986 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 18 08:46:36.595921 master-0 kubenswrapper[3986]: I0318 08:46:36.595919 3986 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 18 08:46:36.595921 master-0 kubenswrapper[3986]: I0318 08:46:36.595936 3986 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 18 08:46:37.241483 master-0 kubenswrapper[3986]: I0318 08:46:37.241411 3986 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 18 08:46:38.241444 master-0 kubenswrapper[3986]: I0318 08:46:38.241327 3986 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 18 08:46:39.239527 master-0 kubenswrapper[3986]: I0318 08:46:39.239455 3986 
csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 18 08:46:39.385444 master-0 kubenswrapper[3986]: E0318 08:46:39.385320 3986 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"master-0\" not found" Mar 18 08:46:40.239978 master-0 kubenswrapper[3986]: I0318 08:46:40.239916 3986 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 18 08:46:40.443697 master-0 kubenswrapper[3986]: W0318 08:46:40.443608 3986 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "master-0" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Mar 18 08:46:40.444097 master-0 kubenswrapper[3986]: E0318 08:46:40.443702 3986 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes \"master-0\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" Mar 18 08:46:40.641479 master-0 kubenswrapper[3986]: I0318 08:46:40.641347 3986 csr.go:257] certificate signing request csr-fhmk4 is issued Mar 18 08:46:41.100399 master-0 kubenswrapper[3986]: I0318 08:46:41.100289 3986 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Mar 18 08:46:41.245760 master-0 kubenswrapper[3986]: I0318 08:46:41.245690 3986 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found Mar 18 08:46:41.262004 master-0 kubenswrapper[3986]: I0318 08:46:41.261949 
3986 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found Mar 18 08:46:41.321646 master-0 kubenswrapper[3986]: I0318 08:46:41.321351 3986 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found Mar 18 08:46:41.580558 master-0 kubenswrapper[3986]: I0318 08:46:41.580507 3986 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found Mar 18 08:46:41.580558 master-0 kubenswrapper[3986]: E0318 08:46:41.580565 3986 csi_plugin.go:305] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "master-0" not found Mar 18 08:46:41.607128 master-0 kubenswrapper[3986]: I0318 08:46:41.607059 3986 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found Mar 18 08:46:41.624679 master-0 kubenswrapper[3986]: I0318 08:46:41.624615 3986 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found Mar 18 08:46:41.648409 master-0 kubenswrapper[3986]: I0318 08:46:41.648291 3986 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-03-19 08:38:09 +0000 UTC, rotation deadline is 2026-03-19 04:35:43.481289632 +0000 UTC Mar 18 08:46:41.648409 master-0 kubenswrapper[3986]: I0318 08:46:41.648360 3986 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 19h49m1.832937507s for next certificate rotation Mar 18 08:46:41.686275 master-0 kubenswrapper[3986]: I0318 08:46:41.686183 3986 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found Mar 18 08:46:41.963586 master-0 kubenswrapper[3986]: I0318 08:46:41.963451 3986 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found Mar 18 08:46:41.963586 master-0 kubenswrapper[3986]: E0318 08:46:41.963493 3986 csi_plugin.go:305] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "master-0" not found Mar 18 
08:46:42.062704 master-0 kubenswrapper[3986]: I0318 08:46:42.062645 3986 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found Mar 18 08:46:42.077845 master-0 kubenswrapper[3986]: I0318 08:46:42.077790 3986 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found Mar 18 08:46:42.133661 master-0 kubenswrapper[3986]: I0318 08:46:42.133594 3986 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found Mar 18 08:46:42.397784 master-0 kubenswrapper[3986]: I0318 08:46:42.397714 3986 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found Mar 18 08:46:42.397784 master-0 kubenswrapper[3986]: E0318 08:46:42.397757 3986 csi_plugin.go:305] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "master-0" not found Mar 18 08:46:42.427286 master-0 kubenswrapper[3986]: I0318 08:46:42.427177 3986 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 18 08:46:42.428513 master-0 kubenswrapper[3986]: I0318 08:46:42.428426 3986 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 18 08:46:42.428513 master-0 kubenswrapper[3986]: I0318 08:46:42.428502 3986 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 18 08:46:42.428798 master-0 kubenswrapper[3986]: I0318 08:46:42.428526 3986 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 18 08:46:42.429234 master-0 kubenswrapper[3986]: I0318 08:46:42.429177 3986 scope.go:117] "RemoveContainer" containerID="65e224202ac926a558f67bd7907be94c9b8d61e87724e521620bd2b30bc9d0dc" Mar 18 08:46:42.429537 master-0 kubenswrapper[3986]: E0318 08:46:42.429487 3986 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy-crio\" with 
CrashLoopBackOff: \"back-off 20s restarting failed container=kube-rbac-proxy-crio pod=kube-rbac-proxy-crio-master-0_openshift-machine-config-operator(1249822f86f23526277d165c0d5d3c19)\"" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" podUID="1249822f86f23526277d165c0d5d3c19" Mar 18 08:46:42.918209 master-0 kubenswrapper[3986]: E0318 08:46:42.918133 3986 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"master-0\" not found" node="master-0" Mar 18 08:46:42.965721 master-0 kubenswrapper[3986]: I0318 08:46:42.965640 3986 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found Mar 18 08:46:42.979775 master-0 kubenswrapper[3986]: I0318 08:46:42.979669 3986 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found Mar 18 08:46:43.034548 master-0 kubenswrapper[3986]: I0318 08:46:43.034492 3986 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found Mar 18 08:46:43.149111 master-0 kubenswrapper[3986]: I0318 08:46:43.149048 3986 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 18 08:46:43.150644 master-0 kubenswrapper[3986]: I0318 08:46:43.150598 3986 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 18 08:46:43.150775 master-0 kubenswrapper[3986]: I0318 08:46:43.150663 3986 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 18 08:46:43.150775 master-0 kubenswrapper[3986]: I0318 08:46:43.150688 3986 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 18 08:46:43.150775 master-0 kubenswrapper[3986]: I0318 08:46:43.150766 3986 kubelet_node_status.go:76] "Attempting to register node" node="master-0" Mar 18 08:46:43.162190 master-0 kubenswrapper[3986]: I0318 08:46:43.162127 3986 kubelet_node_status.go:79] 
"Successfully registered node" node="master-0" Mar 18 08:46:43.162335 master-0 kubenswrapper[3986]: E0318 08:46:43.162189 3986 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": node \"master-0\" not found" Mar 18 08:46:43.174243 master-0 kubenswrapper[3986]: E0318 08:46:43.174112 3986 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 18 08:46:43.261846 master-0 kubenswrapper[3986]: I0318 08:46:43.261773 3986 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Mar 18 08:46:43.273794 master-0 kubenswrapper[3986]: I0318 08:46:43.273706 3986 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Mar 18 08:46:43.274797 master-0 kubenswrapper[3986]: E0318 08:46:43.274718 3986 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 18 08:46:43.375500 master-0 kubenswrapper[3986]: E0318 08:46:43.375431 3986 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 18 08:46:43.476616 master-0 kubenswrapper[3986]: E0318 08:46:43.476468 3986 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 18 08:46:43.576757 master-0 kubenswrapper[3986]: E0318 08:46:43.576647 3986 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 18 08:46:43.677993 master-0 kubenswrapper[3986]: E0318 08:46:43.677903 3986 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 18 08:46:43.779085 master-0 kubenswrapper[3986]: E0318 08:46:43.778993 3986 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 18 08:46:43.879945 master-0 kubenswrapper[3986]: E0318 
08:46:43.879820 3986 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 18 08:46:43.980602 master-0 kubenswrapper[3986]: E0318 08:46:43.980497 3986 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 18 08:46:44.081735 master-0 kubenswrapper[3986]: E0318 08:46:44.081601 3986 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 18 08:46:44.182111 master-0 kubenswrapper[3986]: E0318 08:46:44.182042 3986 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 18 08:46:44.283001 master-0 kubenswrapper[3986]: E0318 08:46:44.282946 3986 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 18 08:46:44.383943 master-0 kubenswrapper[3986]: E0318 08:46:44.383803 3986 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 18 08:46:44.484034 master-0 kubenswrapper[3986]: E0318 08:46:44.483951 3986 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 18 08:46:44.584190 master-0 kubenswrapper[3986]: E0318 08:46:44.584087 3986 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 18 08:46:44.623663 master-0 kubenswrapper[3986]: I0318 08:46:44.623588 3986 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Mar 18 08:46:44.685062 master-0 kubenswrapper[3986]: E0318 08:46:44.684910 3986 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 18 08:46:44.785680 master-0 kubenswrapper[3986]: E0318 08:46:44.785607 3986 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 18 
Mar 18 08:46:44.886911 master-0 kubenswrapper[3986]: E0318 08:46:44.886818 3986 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
    [last message repeated at ~100 ms intervals through 08:46:47.103]
Mar 18 08:46:47.197126 master-0 kubenswrapper[3986]: I0318 08:46:47.197036 3986 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160
Mar 18 08:46:47.204537 master-0 kubenswrapper[3986]: E0318 08:46:47.204472 3986 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
    [last message repeated at ~100 ms intervals through 08:46:47.606]
Mar 18 08:46:47.688455 master-0 kubenswrapper[3986]: I0318 08:46:47.688300 3986 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160
Mar 18 08:46:47.707575 master-0 kubenswrapper[3986]: E0318 08:46:47.707499 3986 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
    [last message repeated at ~100 ms intervals through 08:46:49.318]
Mar 18 08:46:49.386460 master-0 kubenswrapper[3986]: E0318 08:46:49.386299 3986 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"master-0\" not found"
Mar 18 08:46:49.418722 master-0 kubenswrapper[3986]: E0318 08:46:49.418640 3986 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
    [last message repeated at ~100 ms intervals through 08:46:50.124]
Mar 18 08:46:50.168158 master-0 kubenswrapper[3986]: I0318 08:46:50.167635 3986 csr.go:261] certificate signing request csr-swmmx is approved, waiting to be issued
Mar 18 08:46:50.176232 master-0 kubenswrapper[3986]: I0318 08:46:50.176179 3986 csr.go:257] certificate signing request csr-swmmx is issued
Mar 18 08:46:50.225768 master-0 kubenswrapper[3986]: E0318 08:46:50.225684 3986 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
    [last message repeated at ~100 ms intervals through 08:46:51.130]
Mar 18 08:46:51.177782 master-0 kubenswrapper[3986]: I0318 08:46:51.177667 3986 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-03-19 08:38:09 +0000 UTC, rotation deadline is 2026-03-19 01:59:20.234646907 +0000 UTC
Mar 18 08:46:51.177782 master-0 kubenswrapper[3986]: I0318 08:46:51.177721 3986 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 17h12m29.05693009s for next certificate rotation
Mar 18 08:46:51.230896 master-0 kubenswrapper[3986]: E0318 08:46:51.230774 3986 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
    [last message repeated at ~100 ms intervals through 08:46:52.135]
Mar 18 08:46:52.177981 master-0 kubenswrapper[3986]: I0318 08:46:52.177876 3986 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-03-19 08:38:09 +0000 UTC, rotation deadline is 2026-03-19 05:11:38.715833799 +0000 UTC
Mar 18 08:46:52.177981 master-0 kubenswrapper[3986]: I0318 08:46:52.177948 3986 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 20h24m46.537891305s for next certificate rotation
Mar 18 08:46:52.237033 master-0 kubenswrapper[3986]: E0318 08:46:52.236934 3986 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
    [last message repeated at ~100 ms intervals through 08:46:53.140]
Mar 18 08:46:53.212029 master-0 kubenswrapper[3986]: E0318 08:46:53.211938 3986 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": node \"master-0\" not found"
Mar 18 08:46:53.241092 master-0 kubenswrapper[3986]: E0318 08:46:53.240983 3986 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
    [last message repeated at ~100 ms intervals through 08:46:53.341]
Mar 18 08:46:53.427319 master-0 kubenswrapper[3986]: I0318 08:46:53.427159 3986 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 18 08:46:53.428517 master-0 kubenswrapper[3986]: I0318 08:46:53.428458 3986 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 18 08:46:53.428517 master-0 kubenswrapper[3986]: I0318 08:46:53.428521 3986 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 18 08:46:53.428808 master-0 kubenswrapper[3986]: I0318 08:46:53.428540 3986 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 18 08:46:53.429178 master-0 kubenswrapper[3986]: I0318 08:46:53.429123 3986 scope.go:117] "RemoveContainer" containerID="65e224202ac926a558f67bd7907be94c9b8d61e87724e521620bd2b30bc9d0dc"
Mar 18 08:46:53.442389 master-0 kubenswrapper[3986]: E0318 08:46:53.442332 3986 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
    [last message repeated at ~100 ms intervals through 08:46:54.547]
Mar 18 08:46:54.643311 master-0 kubenswrapper[3986]: I0318 08:46:54.643211 3986 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_1249822f86f23526277d165c0d5d3c19/kube-rbac-proxy-crio/2.log"
Mar 18 08:46:54.643998 master-0 kubenswrapper[3986]: I0318 08:46:54.643934 3986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"1249822f86f23526277d165c0d5d3c19","Type":"ContainerStarted","Data":"128a5d65976993628d981fee7385d5588c74fc7f9ab0a6e9bb3f72584d42ed3d"}
Mar 18 08:46:54.644115 master-0 kubenswrapper[3986]: I0318 08:46:54.644092 3986 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 18 08:46:54.645175 master-0 kubenswrapper[3986]: I0318 08:46:54.645119 3986 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 18 08:46:54.645175 master-0 kubenswrapper[3986]: I0318 08:46:54.645169 3986 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 18 08:46:54.645388 master-0 kubenswrapper[3986]: I0318 08:46:54.645187 3986 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 18 08:46:54.648832 master-0 kubenswrapper[3986]: E0318 08:46:54.648758 3986 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
    [last message repeated at ~100 ms intervals through 08:46:59.381]
Mar 18 08:46:59.387408 master-0 kubenswrapper[3986]: E0318 08:46:59.387339 3986 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"master-0\" not found"
Mar 18 08:46:59.482591 master-0 kubenswrapper[3986]: E0318 08:46:59.482451 3986 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
    [last message repeated at ~100 ms intervals through 08:47:03.613]
Mar 18 08:47:03.624117 master-0 kubenswrapper[3986]: E0318 08:47:03.624050 3986 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": node \"master-0\" not found"
Mar 18 08:47:03.714764 master-0 kubenswrapper[3986]: E0318 08:47:03.714699 3986 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
found" Mar 18 08:47:03.814970 master-0 kubenswrapper[3986]: E0318 08:47:03.814840 3986 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 18 08:47:03.916203 master-0 kubenswrapper[3986]: E0318 08:47:03.916017 3986 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 18 08:47:04.017158 master-0 kubenswrapper[3986]: E0318 08:47:04.017061 3986 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 18 08:47:04.132177 master-0 kubenswrapper[3986]: E0318 08:47:04.131985 3986 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 18 08:47:04.233154 master-0 kubenswrapper[3986]: E0318 08:47:04.232915 3986 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 18 08:47:04.334169 master-0 kubenswrapper[3986]: E0318 08:47:04.334105 3986 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 18 08:47:04.434381 master-0 kubenswrapper[3986]: E0318 08:47:04.434304 3986 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 18 08:47:04.534686 master-0 kubenswrapper[3986]: E0318 08:47:04.534625 3986 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 18 08:47:04.635106 master-0 kubenswrapper[3986]: E0318 08:47:04.635035 3986 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 18 08:47:04.735681 master-0 kubenswrapper[3986]: E0318 08:47:04.735618 3986 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 18 08:47:04.836101 master-0 kubenswrapper[3986]: E0318 08:47:04.835931 3986 kubelet_node_status.go:503] "Error getting 
the current node from lister" err="node \"master-0\" not found" Mar 18 08:47:04.936950 master-0 kubenswrapper[3986]: E0318 08:47:04.936838 3986 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 18 08:47:05.037735 master-0 kubenswrapper[3986]: E0318 08:47:05.037653 3986 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 18 08:47:05.138734 master-0 kubenswrapper[3986]: E0318 08:47:05.138563 3986 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 18 08:47:05.239172 master-0 kubenswrapper[3986]: E0318 08:47:05.239106 3986 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 18 08:47:05.339384 master-0 kubenswrapper[3986]: E0318 08:47:05.339303 3986 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 18 08:47:05.440370 master-0 kubenswrapper[3986]: E0318 08:47:05.440233 3986 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 18 08:47:05.541288 master-0 kubenswrapper[3986]: E0318 08:47:05.541230 3986 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 18 08:47:05.642440 master-0 kubenswrapper[3986]: E0318 08:47:05.642350 3986 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 18 08:47:05.742888 master-0 kubenswrapper[3986]: E0318 08:47:05.742803 3986 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 18 08:47:05.843950 master-0 kubenswrapper[3986]: E0318 08:47:05.843849 3986 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 18 08:47:05.944334 master-0 kubenswrapper[3986]: E0318 
08:47:05.944255 3986 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 18 08:47:06.045332 master-0 kubenswrapper[3986]: E0318 08:47:06.045190 3986 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 18 08:47:06.145918 master-0 kubenswrapper[3986]: E0318 08:47:06.145797 3986 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 18 08:47:06.246238 master-0 kubenswrapper[3986]: E0318 08:47:06.246181 3986 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 18 08:47:06.346710 master-0 kubenswrapper[3986]: E0318 08:47:06.346503 3986 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 18 08:47:06.447774 master-0 kubenswrapper[3986]: E0318 08:47:06.447714 3986 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 18 08:47:06.547983 master-0 kubenswrapper[3986]: E0318 08:47:06.547899 3986 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 18 08:47:06.648097 master-0 kubenswrapper[3986]: E0318 08:47:06.647969 3986 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 18 08:47:06.748616 master-0 kubenswrapper[3986]: E0318 08:47:06.748524 3986 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 18 08:47:06.849786 master-0 kubenswrapper[3986]: E0318 08:47:06.849690 3986 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 18 08:47:06.950053 master-0 kubenswrapper[3986]: E0318 08:47:06.949886 3986 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" 
Mar 18 08:47:07.050894 master-0 kubenswrapper[3986]: E0318 08:47:07.050780 3986 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 18 08:47:07.151457 master-0 kubenswrapper[3986]: E0318 08:47:07.151362 3986 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 18 08:47:07.252384 master-0 kubenswrapper[3986]: E0318 08:47:07.252336 3986 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 18 08:47:07.353102 master-0 kubenswrapper[3986]: E0318 08:47:07.352957 3986 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 18 08:47:07.454077 master-0 kubenswrapper[3986]: E0318 08:47:07.453965 3986 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 18 08:47:07.554962 master-0 kubenswrapper[3986]: E0318 08:47:07.554665 3986 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 18 08:47:07.656021 master-0 kubenswrapper[3986]: E0318 08:47:07.655831 3986 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 18 08:47:07.756303 master-0 kubenswrapper[3986]: E0318 08:47:07.756192 3986 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 18 08:47:07.857241 master-0 kubenswrapper[3986]: E0318 08:47:07.857050 3986 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 18 08:47:07.957964 master-0 kubenswrapper[3986]: E0318 08:47:07.957834 3986 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 18 08:47:08.058707 master-0 kubenswrapper[3986]: E0318 08:47:08.058606 3986 kubelet_node_status.go:503] "Error getting the 
current node from lister" err="node \"master-0\" not found" Mar 18 08:47:08.159950 master-0 kubenswrapper[3986]: E0318 08:47:08.159747 3986 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 18 08:47:08.260349 master-0 kubenswrapper[3986]: E0318 08:47:08.260294 3986 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 18 08:47:08.361098 master-0 kubenswrapper[3986]: E0318 08:47:08.361049 3986 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 18 08:47:08.461381 master-0 kubenswrapper[3986]: E0318 08:47:08.461235 3986 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 18 08:47:08.562027 master-0 kubenswrapper[3986]: E0318 08:47:08.561940 3986 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 18 08:47:08.663027 master-0 kubenswrapper[3986]: E0318 08:47:08.662957 3986 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 18 08:47:08.763322 master-0 kubenswrapper[3986]: E0318 08:47:08.763222 3986 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 18 08:47:08.864376 master-0 kubenswrapper[3986]: E0318 08:47:08.864277 3986 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 18 08:47:08.965546 master-0 kubenswrapper[3986]: E0318 08:47:08.965450 3986 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 18 08:47:09.065898 master-0 kubenswrapper[3986]: E0318 08:47:09.065539 3986 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 18 08:47:09.166614 master-0 kubenswrapper[3986]: E0318 
08:47:09.166565 3986 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 18 08:47:09.268069 master-0 kubenswrapper[3986]: E0318 08:47:09.268012 3986 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 18 08:47:09.368316 master-0 kubenswrapper[3986]: E0318 08:47:09.368118 3986 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 18 08:47:09.387560 master-0 kubenswrapper[3986]: E0318 08:47:09.387509 3986 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"master-0\" not found" Mar 18 08:47:09.468979 master-0 kubenswrapper[3986]: E0318 08:47:09.468938 3986 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 18 08:47:09.570109 master-0 kubenswrapper[3986]: E0318 08:47:09.570067 3986 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 18 08:47:09.671612 master-0 kubenswrapper[3986]: E0318 08:47:09.671448 3986 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 18 08:47:09.772777 master-0 kubenswrapper[3986]: E0318 08:47:09.772735 3986 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 18 08:47:09.873758 master-0 kubenswrapper[3986]: E0318 08:47:09.873651 3986 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 18 08:47:09.974909 master-0 kubenswrapper[3986]: E0318 08:47:09.974715 3986 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 18 08:47:10.075779 master-0 kubenswrapper[3986]: E0318 08:47:10.075718 3986 kubelet_node_status.go:503] "Error getting the current node from lister" err="node 
\"master-0\" not found" Mar 18 08:47:10.177197 master-0 kubenswrapper[3986]: E0318 08:47:10.177079 3986 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 18 08:47:10.278303 master-0 kubenswrapper[3986]: E0318 08:47:10.278229 3986 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 18 08:47:10.379079 master-0 kubenswrapper[3986]: E0318 08:47:10.379040 3986 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 18 08:47:10.479951 master-0 kubenswrapper[3986]: E0318 08:47:10.479832 3986 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 18 08:47:10.581476 master-0 kubenswrapper[3986]: E0318 08:47:10.581240 3986 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 18 08:47:10.682458 master-0 kubenswrapper[3986]: E0318 08:47:10.682390 3986 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 18 08:47:10.783583 master-0 kubenswrapper[3986]: E0318 08:47:10.783486 3986 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 18 08:47:10.884694 master-0 kubenswrapper[3986]: E0318 08:47:10.884509 3986 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 18 08:47:10.985710 master-0 kubenswrapper[3986]: E0318 08:47:10.985596 3986 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 18 08:47:11.086606 master-0 kubenswrapper[3986]: E0318 08:47:11.086495 3986 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 18 08:47:11.187053 master-0 kubenswrapper[3986]: E0318 08:47:11.186708 3986 
kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 18 08:47:11.287770 master-0 kubenswrapper[3986]: E0318 08:47:11.287652 3986 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 18 08:47:11.388533 master-0 kubenswrapper[3986]: E0318 08:47:11.388414 3986 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 18 08:47:11.489682 master-0 kubenswrapper[3986]: E0318 08:47:11.489588 3986 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 18 08:47:11.589816 master-0 kubenswrapper[3986]: E0318 08:47:11.589694 3986 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 18 08:47:11.690058 master-0 kubenswrapper[3986]: E0318 08:47:11.689960 3986 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 18 08:47:11.791239 master-0 kubenswrapper[3986]: E0318 08:47:11.791055 3986 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 18 08:47:11.892213 master-0 kubenswrapper[3986]: E0318 08:47:11.892113 3986 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 18 08:47:11.993174 master-0 kubenswrapper[3986]: E0318 08:47:11.993070 3986 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 18 08:47:12.093527 master-0 kubenswrapper[3986]: E0318 08:47:12.093334 3986 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 18 08:47:12.193905 master-0 kubenswrapper[3986]: E0318 08:47:12.193805 3986 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 18 08:47:12.294644 
master-0 kubenswrapper[3986]: E0318 08:47:12.294530 3986 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 18 08:47:12.395311 master-0 kubenswrapper[3986]: E0318 08:47:12.395077 3986 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 18 08:47:12.496007 master-0 kubenswrapper[3986]: E0318 08:47:12.495907 3986 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 18 08:47:12.596619 master-0 kubenswrapper[3986]: E0318 08:47:12.596575 3986 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 18 08:47:12.697808 master-0 kubenswrapper[3986]: E0318 08:47:12.697622 3986 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 18 08:47:12.798037 master-0 kubenswrapper[3986]: E0318 08:47:12.797923 3986 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 18 08:47:12.899309 master-0 kubenswrapper[3986]: E0318 08:47:12.898914 3986 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 18 08:47:12.999627 master-0 kubenswrapper[3986]: E0318 08:47:12.999531 3986 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 18 08:47:13.099972 master-0 kubenswrapper[3986]: E0318 08:47:13.099838 3986 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 18 08:47:13.200995 master-0 kubenswrapper[3986]: E0318 08:47:13.200917 3986 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 18 08:47:13.301846 master-0 kubenswrapper[3986]: E0318 08:47:13.301677 3986 kubelet_node_status.go:503] "Error getting the current node from lister" 
err="node \"master-0\" not found" Mar 18 08:47:13.402689 master-0 kubenswrapper[3986]: E0318 08:47:13.402613 3986 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 18 08:47:13.503137 master-0 kubenswrapper[3986]: E0318 08:47:13.503064 3986 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 18 08:47:13.603983 master-0 kubenswrapper[3986]: E0318 08:47:13.603781 3986 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 18 08:47:13.677332 master-0 kubenswrapper[3986]: E0318 08:47:13.677235 3986 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": node \"master-0\" not found" Mar 18 08:47:13.704439 master-0 kubenswrapper[3986]: E0318 08:47:13.704346 3986 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 18 08:47:13.804630 master-0 kubenswrapper[3986]: E0318 08:47:13.804517 3986 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 18 08:47:13.905606 master-0 kubenswrapper[3986]: E0318 08:47:13.905437 3986 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 18 08:47:14.005718 master-0 kubenswrapper[3986]: E0318 08:47:14.005632 3986 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 18 08:47:14.106022 master-0 kubenswrapper[3986]: E0318 08:47:14.105818 3986 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 18 08:47:14.207021 master-0 kubenswrapper[3986]: E0318 08:47:14.206769 3986 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 18 08:47:14.307671 master-0 kubenswrapper[3986]: E0318 
08:47:14.307598 3986 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 18 08:47:14.408748 master-0 kubenswrapper[3986]: E0318 08:47:14.408538 3986 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 18 08:47:14.508802 master-0 kubenswrapper[3986]: E0318 08:47:14.508683 3986 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 18 08:47:14.609308 master-0 kubenswrapper[3986]: E0318 08:47:14.609229 3986 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 18 08:47:14.709848 master-0 kubenswrapper[3986]: E0318 08:47:14.709736 3986 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 18 08:47:14.810481 master-0 kubenswrapper[3986]: E0318 08:47:14.810290 3986 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 18 08:47:14.910778 master-0 kubenswrapper[3986]: E0318 08:47:14.910671 3986 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 18 08:47:15.011015 master-0 kubenswrapper[3986]: E0318 08:47:15.010943 3986 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 18 08:47:15.112087 master-0 kubenswrapper[3986]: E0318 08:47:15.111950 3986 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 18 08:47:15.212342 master-0 kubenswrapper[3986]: E0318 08:47:15.212271 3986 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 18 08:47:15.312845 master-0 kubenswrapper[3986]: E0318 08:47:15.312771 3986 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" 
Mar 18 08:47:15.414022 master-0 kubenswrapper[3986]: E0318 08:47:15.413840 3986 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 18 08:47:15.515061 master-0 kubenswrapper[3986]: E0318 08:47:15.514991 3986 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 18 08:47:15.615921 master-0 kubenswrapper[3986]: E0318 08:47:15.615817 3986 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 18 08:47:15.716416 master-0 kubenswrapper[3986]: E0318 08:47:15.716242 3986 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 18 08:47:15.817518 master-0 kubenswrapper[3986]: E0318 08:47:15.817409 3986 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 18 08:47:15.918562 master-0 kubenswrapper[3986]: E0318 08:47:15.918425 3986 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 18 08:47:16.019478 master-0 kubenswrapper[3986]: E0318 08:47:16.019377 3986 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 18 08:47:16.120334 master-0 kubenswrapper[3986]: E0318 08:47:16.120222 3986 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 18 08:47:16.221306 master-0 kubenswrapper[3986]: E0318 08:47:16.221232 3986 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 18 08:47:16.322483 master-0 kubenswrapper[3986]: E0318 08:47:16.322324 3986 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 18 08:47:16.423495 master-0 kubenswrapper[3986]: E0318 08:47:16.423403 3986 kubelet_node_status.go:503] "Error getting the 
current node from lister" err="node \"master-0\" not found" Mar 18 08:47:16.524451 master-0 kubenswrapper[3986]: E0318 08:47:16.524329 3986 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 18 08:47:16.624640 master-0 kubenswrapper[3986]: E0318 08:47:16.624497 3986 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 18 08:47:16.725010 master-0 kubenswrapper[3986]: E0318 08:47:16.724930 3986 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 18 08:47:16.825187 master-0 kubenswrapper[3986]: E0318 08:47:16.825075 3986 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 18 08:47:16.925977 master-0 kubenswrapper[3986]: E0318 08:47:16.925810 3986 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 18 08:47:17.026639 master-0 kubenswrapper[3986]: E0318 08:47:17.026571 3986 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 18 08:47:17.127581 master-0 kubenswrapper[3986]: E0318 08:47:17.127534 3986 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 18 08:47:17.228729 master-0 kubenswrapper[3986]: E0318 08:47:17.228571 3986 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 18 08:47:17.329633 master-0 kubenswrapper[3986]: E0318 08:47:17.329553 3986 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 18 08:47:17.430779 master-0 kubenswrapper[3986]: E0318 08:47:17.430665 3986 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 18 08:47:17.531702 master-0 kubenswrapper[3986]: E0318 
08:47:17.531636 3986 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 18 08:47:17.632337 master-0 kubenswrapper[3986]: E0318 08:47:17.632259 3986 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 18 08:47:17.733468 master-0 kubenswrapper[3986]: E0318 08:47:17.733384 3986 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 18 08:47:17.834513 master-0 kubenswrapper[3986]: E0318 08:47:17.834329 3986 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 18 08:47:17.935678 master-0 kubenswrapper[3986]: E0318 08:47:17.935565 3986 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 18 08:47:18.036530 master-0 kubenswrapper[3986]: E0318 08:47:18.036434 3986 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 18 08:47:18.136806 master-0 kubenswrapper[3986]: E0318 08:47:18.136632 3986 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 18 08:47:18.237476 master-0 kubenswrapper[3986]: E0318 08:47:18.237376 3986 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 18 08:47:18.338616 master-0 kubenswrapper[3986]: E0318 08:47:18.338523 3986 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 18 08:47:18.439475 master-0 kubenswrapper[3986]: E0318 08:47:18.439305 3986 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 18 08:47:18.539714 master-0 kubenswrapper[3986]: E0318 08:47:18.539624 3986 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" 
Mar 18 08:47:18.639835 master-0 kubenswrapper[3986]: E0318 08:47:18.639750 3986 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 18 08:47:18.741001 master-0 kubenswrapper[3986]: E0318 08:47:18.740912 3986 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 18 08:47:18.841685 master-0 kubenswrapper[3986]: E0318 08:47:18.841581 3986 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 18 08:47:18.942345 master-0 kubenswrapper[3986]: E0318 08:47:18.942281 3986 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 18 08:47:19.043528 master-0 kubenswrapper[3986]: E0318 08:47:19.043399 3986 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 18 08:47:19.144528 master-0 kubenswrapper[3986]: E0318 08:47:19.144454 3986 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 18 08:47:19.245280 master-0 kubenswrapper[3986]: E0318 08:47:19.245172 3986 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 18 08:47:19.345561 master-0 kubenswrapper[3986]: E0318 08:47:19.345375 3986 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 18 08:47:19.388371 master-0 kubenswrapper[3986]: E0318 08:47:19.388271 3986 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"master-0\" not found" Mar 18 08:47:19.446035 master-0 kubenswrapper[3986]: E0318 08:47:19.445947 3986 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 18 08:47:19.546619 master-0 kubenswrapper[3986]: E0318 08:47:19.546509 3986 
kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 18 08:47:19.647017 master-0 kubenswrapper[3986]: E0318 08:47:19.646735 3986 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
...
Mar 18 08:47:22.368207 master-0 kubenswrapper[3986]: E0318 08:47:22.368095 3986 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 18 08:47:22.427561 master-0 kubenswrapper[3986]: I0318 08:47:22.426837 3986 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 18 08:47:22.428390 master-0 kubenswrapper[3986]: I0318 08:47:22.428331 3986 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 18 08:47:22.428479 master-0 kubenswrapper[3986]: I0318 08:47:22.428402 3986 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 18 08:47:22.428479 master-0 kubenswrapper[3986]: I0318 08:47:22.428423 3986 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 18 08:47:22.468689 master-0 kubenswrapper[3986]: E0318 08:47:22.468642 3986 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
...
Mar 18 08:47:23.776472 master-0 kubenswrapper[3986]: E0318 08:47:23.776424 3986 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 18 08:47:23.805767 master-0 kubenswrapper[3986]: E0318 08:47:23.805692 3986 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": node \"master-0\" not found"
Mar 18 08:47:23.877362 master-0 kubenswrapper[3986]: E0318 08:47:23.877150 3986 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
...
Mar 18 08:47:29.316066 master-0 kubenswrapper[3986]: E0318 08:47:29.315831 3986 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 18 08:47:29.388906 master-0 kubenswrapper[3986]: E0318 08:47:29.388794 3986 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"master-0\" not found"
Mar 18 08:47:29.416691 master-0 kubenswrapper[3986]: E0318 08:47:29.416600 3986 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
...
Mar 18 08:47:30.324364 master-0 kubenswrapper[3986]: E0318 08:47:30.324281 3986 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 18 08:47:30.382211 master-0 kubenswrapper[3986]: I0318 08:47:30.382109 3986 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160
Mar 18 08:47:31.264651 master-0 kubenswrapper[3986]: I0318 08:47:31.264563 3986 apiserver.go:52] "Watching apiserver"
Mar 18 08:47:31.271287 master-0 kubenswrapper[3986]: I0318 08:47:31.271237 3986 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66
Mar 18 08:47:31.271592 master-0 kubenswrapper[3986]: I0318 08:47:31.271540 3986 kubelet.go:2421] "SyncLoop ADD" source="api"
pods=["assisted-installer/assisted-installer-controller-zq2ds","openshift-cluster-version/cluster-version-operator-56d8475767-2xjqg","openshift-network-operator/network-operator-7bd846bfc4-5r5r4"]
Mar 18 08:47:31.272898 master-0 kubenswrapper[3986]: I0318 08:47:31.272833 3986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="assisted-installer/assisted-installer-controller-zq2ds"
Mar 18 08:47:31.273290 master-0 kubenswrapper[3986]: I0318 08:47:31.273213 3986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-56d8475767-2xjqg"
Mar 18 08:47:31.273418 master-0 kubenswrapper[3986]: I0318 08:47:31.273063 3986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-7bd846bfc4-5r5r4"
Mar 18 08:47:31.277277 master-0 kubenswrapper[3986]: I0318 08:47:31.276826 3986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"assisted-installer"/"assisted-installer-controller-config"
Mar 18 08:47:31.277277 master-0 kubenswrapper[3986]: I0318 08:47:31.277120 3986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt"
Mar 18 08:47:31.282506 master-0 kubenswrapper[3986]: I0318 08:47:31.282194 3986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"assisted-installer"/"kube-root-ca.crt"
Mar 18 08:47:31.282506 master-0 kubenswrapper[3986]: I0318 08:47:31.282321 3986 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert"
Mar 18 08:47:31.282506 master-0 kubenswrapper[3986]: I0318 08:47:31.282383 3986 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls"
Mar 18 08:47:31.283168 master-0 kubenswrapper[3986]: I0318 08:47:31.282748 3986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt"
Mar 18 08:47:31.283764 master-0 kubenswrapper[3986]: I0318 08:47:31.283587 3986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"assisted-installer"/"openshift-service-ca.crt"
Mar 18 08:47:31.283764 master-0 kubenswrapper[3986]: I0318 08:47:31.283657 3986 reflector.go:368] Caches populated for *v1.Secret from object-"assisted-installer"/"assisted-installer-controller-secret"
Mar 18 08:47:31.283764 master-0 kubenswrapper[3986]: I0318 08:47:31.283753 3986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt"
Mar 18 08:47:31.284612 master-0 kubenswrapper[3986]: I0318 08:47:31.284556 3986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt"
Mar 18 08:47:31.342292 master-0 kubenswrapper[3986]: I0318 08:47:31.342245 3986 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world"
Mar 18 08:47:31.404888 master-0 kubenswrapper[3986]: I0318 08:47:31.404810 3986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5ngk7\" (UniqueName: \"kubernetes.io/projected/07a4fd92-0fd1-4688-b2db-de615d75971e-kube-api-access-5ngk7\") pod \"network-operator-7bd846bfc4-5r5r4\" (UID: \"07a4fd92-0fd1-4688-b2db-de615d75971e\") " pod="openshift-network-operator/network-operator-7bd846bfc4-5r5r4"
Mar 18 08:47:31.404888 master-0 kubenswrapper[3986]: I0318 08:47:31.404887 3986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-ca-bundle\" (UniqueName: \"kubernetes.io/host-path/97215428-2d5d-460f-947c-f2a490bc428d-host-ca-bundle\") pod \"assisted-installer-controller-zq2ds\" (UID: \"97215428-2d5d-460f-947c-f2a490bc428d\") " pod="assisted-installer/assisted-installer-controller-zq2ds"
Mar 18 08:47:31.404888 master-0 kubenswrapper[3986]: I0318 08:47:31.404910 3986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sno-bootstrap-files\" (UniqueName: \"kubernetes.io/host-path/97215428-2d5d-460f-947c-f2a490bc428d-sno-bootstrap-files\") pod \"assisted-installer-controller-zq2ds\" (UID: \"97215428-2d5d-460f-947c-f2a490bc428d\") " pod="assisted-installer/assisted-installer-controller-zq2ds"
Mar 18 08:47:31.405203 master-0 kubenswrapper[3986]: I0318 08:47:31.404952 3986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/07a4fd92-0fd1-4688-b2db-de615d75971e-metrics-tls\") pod \"network-operator-7bd846bfc4-5r5r4\" (UID: \"07a4fd92-0fd1-4688-b2db-de615d75971e\") " pod="openshift-network-operator/network-operator-7bd846bfc4-5r5r4"
Mar 18 08:47:31.405203 master-0 kubenswrapper[3986]: I0318 08:47:31.404988 3986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/3d0b7f60-c32e-48a6-b9e9-87c8f018367d-service-ca\") pod \"cluster-version-operator-56d8475767-2xjqg\" (UID: \"3d0b7f60-c32e-48a6-b9e9-87c8f018367d\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-2xjqg"
Mar 18 08:47:31.405203 master-0 kubenswrapper[3986]: I0318 08:47:31.405008 3986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/3d0b7f60-c32e-48a6-b9e9-87c8f018367d-etc-ssl-certs\") pod \"cluster-version-operator-56d8475767-2xjqg\" (UID: \"3d0b7f60-c32e-48a6-b9e9-87c8f018367d\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-2xjqg"
Mar 18 08:47:31.405203 master-0 kubenswrapper[3986]: I0318 08:47:31.405025 3986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3d0b7f60-c32e-48a6-b9e9-87c8f018367d-serving-cert\") pod \"cluster-version-operator-56d8475767-2xjqg\" (UID: \"3d0b7f60-c32e-48a6-b9e9-87c8f018367d\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-2xjqg"
Mar 18 08:47:31.405203 master-0 kubenswrapper[3986]: I0318 08:47:31.405055 3986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/97215428-2d5d-460f-947c-f2a490bc428d-host-resolv-conf\") pod \"assisted-installer-controller-zq2ds\" (UID: \"97215428-2d5d-460f-947c-f2a490bc428d\") " pod="assisted-installer/assisted-installer-controller-zq2ds"
Mar 18 08:47:31.405203 master-0 kubenswrapper[3986]: I0318 08:47:31.405072 3986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/3d0b7f60-c32e-48a6-b9e9-87c8f018367d-etc-cvo-updatepayloads\") pod \"cluster-version-operator-56d8475767-2xjqg\" (UID: \"3d0b7f60-c32e-48a6-b9e9-87c8f018367d\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-2xjqg"
Mar 18 08:47:31.405203 master-0 kubenswrapper[3986]: I0318 08:47:31.405092 3986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xbcnl\" (UniqueName: \"kubernetes.io/projected/97215428-2d5d-460f-947c-f2a490bc428d-kube-api-access-xbcnl\") pod \"assisted-installer-controller-zq2ds\" (UID: \"97215428-2d5d-460f-947c-f2a490bc428d\") " pod="assisted-installer/assisted-installer-controller-zq2ds"
Mar 18 08:47:31.405203 master-0 kubenswrapper[3986]: I0318 08:47:31.405107 3986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/07a4fd92-0fd1-4688-b2db-de615d75971e-host-etc-kube\") pod \"network-operator-7bd846bfc4-5r5r4\" (UID: \"07a4fd92-0fd1-4688-b2db-de615d75971e\") " pod="openshift-network-operator/network-operator-7bd846bfc4-5r5r4"
Mar 18 08:47:31.405203 master-0 kubenswrapper[3986]: I0318 08:47:31.405123 3986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-run-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/97215428-2d5d-460f-947c-f2a490bc428d-host-var-run-resolv-conf\") pod \"assisted-installer-controller-zq2ds\" (UID: \"97215428-2d5d-460f-947c-f2a490bc428d\") " pod="assisted-installer/assisted-installer-controller-zq2ds"
Mar 18 08:47:31.405203 master-0 kubenswrapper[3986]: I0318 08:47:31.405138 3986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3d0b7f60-c32e-48a6-b9e9-87c8f018367d-kube-api-access\") pod \"cluster-version-operator-56d8475767-2xjqg\" (UID: \"3d0b7f60-c32e-48a6-b9e9-87c8f018367d\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-2xjqg"
Mar 18 08:47:31.506306 master-0 kubenswrapper[3986]: I0318 08:47:31.506240 3986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xbcnl\" (UniqueName: \"kubernetes.io/projected/97215428-2d5d-460f-947c-f2a490bc428d-kube-api-access-xbcnl\") pod \"assisted-installer-controller-zq2ds\" (UID: \"97215428-2d5d-460f-947c-f2a490bc428d\") " pod="assisted-installer/assisted-installer-controller-zq2ds"
Mar 18 08:47:31.506386 master-0 kubenswrapper[3986]: I0318 08:47:31.506334 3986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/07a4fd92-0fd1-4688-b2db-de615d75971e-host-etc-kube\") pod \"network-operator-7bd846bfc4-5r5r4\" (UID: \"07a4fd92-0fd1-4688-b2db-de615d75971e\") " pod="openshift-network-operator/network-operator-7bd846bfc4-5r5r4"
Mar 18 08:47:31.506428 master-0 kubenswrapper[3986]: I0318 08:47:31.506390 3986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-run-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/97215428-2d5d-460f-947c-f2a490bc428d-host-var-run-resolv-conf\") pod \"assisted-installer-controller-zq2ds\" (UID: \"97215428-2d5d-460f-947c-f2a490bc428d\") " pod="assisted-installer/assisted-installer-controller-zq2ds"
Mar 18 08:47:31.506727 master-0 kubenswrapper[3986]: I0318 08:47:31.506489 3986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3d0b7f60-c32e-48a6-b9e9-87c8f018367d-kube-api-access\") pod \"cluster-version-operator-56d8475767-2xjqg\" (UID: \"3d0b7f60-c32e-48a6-b9e9-87c8f018367d\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-2xjqg"
Mar 18 08:47:31.506727 master-0 kubenswrapper[3986]: I0318 08:47:31.506557 3986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-ca-bundle\" (UniqueName: \"kubernetes.io/host-path/97215428-2d5d-460f-947c-f2a490bc428d-host-ca-bundle\") pod \"assisted-installer-controller-zq2ds\" (UID: \"97215428-2d5d-460f-947c-f2a490bc428d\") " pod="assisted-installer/assisted-installer-controller-zq2ds"
Mar 18 08:47:31.506727 master-0 kubenswrapper[3986]: I0318 08:47:31.506580 3986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sno-bootstrap-files\" (UniqueName: \"kubernetes.io/host-path/97215428-2d5d-460f-947c-f2a490bc428d-sno-bootstrap-files\") pod \"assisted-installer-controller-zq2ds\" (UID: \"97215428-2d5d-460f-947c-f2a490bc428d\") " pod="assisted-installer/assisted-installer-controller-zq2ds"
Mar 18 08:47:31.506727 master-0 kubenswrapper[3986]: I0318 08:47:31.506591 3986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-run-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/97215428-2d5d-460f-947c-f2a490bc428d-host-var-run-resolv-conf\") pod \"assisted-installer-controller-zq2ds\" (UID: \"97215428-2d5d-460f-947c-f2a490bc428d\") " pod="assisted-installer/assisted-installer-controller-zq2ds"
Mar 18 08:47:31.506727 master-0 kubenswrapper[3986]: I0318 08:47:31.506604 3986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5ngk7\" (UniqueName: \"kubernetes.io/projected/07a4fd92-0fd1-4688-b2db-de615d75971e-kube-api-access-5ngk7\") pod \"network-operator-7bd846bfc4-5r5r4\" (UID: \"07a4fd92-0fd1-4688-b2db-de615d75971e\") " pod="openshift-network-operator/network-operator-7bd846bfc4-5r5r4"
Mar 18 08:47:31.506727 master-0 kubenswrapper[3986]: I0318 08:47:31.506662 3986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/07a4fd92-0fd1-4688-b2db-de615d75971e-metrics-tls\") pod \"network-operator-7bd846bfc4-5r5r4\" (UID: \"07a4fd92-0fd1-4688-b2db-de615d75971e\") " pod="openshift-network-operator/network-operator-7bd846bfc4-5r5r4"
Mar 18 08:47:31.506727 master-0 kubenswrapper[3986]: I0318 08:47:31.506668 3986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/07a4fd92-0fd1-4688-b2db-de615d75971e-host-etc-kube\") pod \"network-operator-7bd846bfc4-5r5r4\" (UID: \"07a4fd92-0fd1-4688-b2db-de615d75971e\") " pod="openshift-network-operator/network-operator-7bd846bfc4-5r5r4"
Mar 18 08:47:31.507206 master-0 kubenswrapper[3986]: I0318 08:47:31.506753 3986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sno-bootstrap-files\" (UniqueName: \"kubernetes.io/host-path/97215428-2d5d-460f-947c-f2a490bc428d-sno-bootstrap-files\") pod \"assisted-installer-controller-zq2ds\" (UID: \"97215428-2d5d-460f-947c-f2a490bc428d\") " pod="assisted-installer/assisted-installer-controller-zq2ds"
Mar 18 08:47:31.507206 master-0 kubenswrapper[3986]: I0318 08:47:31.506830 3986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/3d0b7f60-c32e-48a6-b9e9-87c8f018367d-service-ca\") pod \"cluster-version-operator-56d8475767-2xjqg\" (UID: \"3d0b7f60-c32e-48a6-b9e9-87c8f018367d\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-2xjqg"
Mar 18 08:47:31.507206 master-0 kubenswrapper[3986]: I0318 08:47:31.506898 3986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-ca-bundle\" (UniqueName: \"kubernetes.io/host-path/97215428-2d5d-460f-947c-f2a490bc428d-host-ca-bundle\") pod \"assisted-installer-controller-zq2ds\" (UID: \"97215428-2d5d-460f-947c-f2a490bc428d\") " pod="assisted-installer/assisted-installer-controller-zq2ds"
Mar 18 08:47:31.507206 master-0 kubenswrapper[3986]: I0318 08:47:31.507049 3986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/3d0b7f60-c32e-48a6-b9e9-87c8f018367d-etc-ssl-certs\") pod \"cluster-version-operator-56d8475767-2xjqg\" (UID: \"3d0b7f60-c32e-48a6-b9e9-87c8f018367d\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-2xjqg"
Mar 18 08:47:31.507206 master-0 kubenswrapper[3986]: I0318 08:47:31.507084 3986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3d0b7f60-c32e-48a6-b9e9-87c8f018367d-serving-cert\") pod \"cluster-version-operator-56d8475767-2xjqg\" (UID: \"3d0b7f60-c32e-48a6-b9e9-87c8f018367d\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-2xjqg"
Mar 18 08:47:31.507206 master-0 kubenswrapper[3986]: I0318 08:47:31.507113 3986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/97215428-2d5d-460f-947c-f2a490bc428d-host-resolv-conf\") pod \"assisted-installer-controller-zq2ds\" (UID: \"97215428-2d5d-460f-947c-f2a490bc428d\") " pod="assisted-installer/assisted-installer-controller-zq2ds"
Mar 18 08:47:31.507206 master-0 kubenswrapper[3986]: I0318 08:47:31.507135 3986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/3d0b7f60-c32e-48a6-b9e9-87c8f018367d-etc-cvo-updatepayloads\") pod \"cluster-version-operator-56d8475767-2xjqg\" (UID: \"3d0b7f60-c32e-48a6-b9e9-87c8f018367d\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-2xjqg"
Mar 18 08:47:31.507206 master-0 kubenswrapper[3986]: I0318 08:47:31.507193 3986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/3d0b7f60-c32e-48a6-b9e9-87c8f018367d-etc-cvo-updatepayloads\") pod \"cluster-version-operator-56d8475767-2xjqg\" (UID: \"3d0b7f60-c32e-48a6-b9e9-87c8f018367d\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-2xjqg"
Mar 18 08:47:31.507419 master-0 kubenswrapper[3986]: I0318 08:47:31.507243 3986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/3d0b7f60-c32e-48a6-b9e9-87c8f018367d-etc-ssl-certs\") pod \"cluster-version-operator-56d8475767-2xjqg\" (UID: \"3d0b7f60-c32e-48a6-b9e9-87c8f018367d\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-2xjqg"
Mar 18 08:47:31.507419 master-0 kubenswrapper[3986]: E0318 08:47:31.507321 3986 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found
Mar 18 08:47:31.507419 master-0 kubenswrapper[3986]: E0318 08:47:31.507378 3986 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3d0b7f60-c32e-48a6-b9e9-87c8f018367d-serving-cert podName:3d0b7f60-c32e-48a6-b9e9-87c8f018367d nodeName:}" failed. No retries permitted until 2026-03-18 08:47:32.007357208 +0000 UTC m=+83.414527300 (durationBeforeRetry 500ms).
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/3d0b7f60-c32e-48a6-b9e9-87c8f018367d-serving-cert") pod "cluster-version-operator-56d8475767-2xjqg" (UID: "3d0b7f60-c32e-48a6-b9e9-87c8f018367d") : secret "cluster-version-operator-serving-cert" not found Mar 18 08:47:31.507608 master-0 kubenswrapper[3986]: I0318 08:47:31.507548 3986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/97215428-2d5d-460f-947c-f2a490bc428d-host-resolv-conf\") pod \"assisted-installer-controller-zq2ds\" (UID: \"97215428-2d5d-460f-947c-f2a490bc428d\") " pod="assisted-installer/assisted-installer-controller-zq2ds" Mar 18 08:47:31.507691 master-0 kubenswrapper[3986]: I0318 08:47:31.507644 3986 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Mar 18 08:47:31.508723 master-0 kubenswrapper[3986]: I0318 08:47:31.508599 3986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/3d0b7f60-c32e-48a6-b9e9-87c8f018367d-service-ca\") pod \"cluster-version-operator-56d8475767-2xjqg\" (UID: \"3d0b7f60-c32e-48a6-b9e9-87c8f018367d\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-2xjqg" Mar 18 08:47:31.516098 master-0 kubenswrapper[3986]: I0318 08:47:31.516002 3986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/07a4fd92-0fd1-4688-b2db-de615d75971e-metrics-tls\") pod \"network-operator-7bd846bfc4-5r5r4\" (UID: \"07a4fd92-0fd1-4688-b2db-de615d75971e\") " pod="openshift-network-operator/network-operator-7bd846bfc4-5r5r4" Mar 18 08:47:31.530885 master-0 kubenswrapper[3986]: I0318 08:47:31.530773 3986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-xbcnl\" (UniqueName: \"kubernetes.io/projected/97215428-2d5d-460f-947c-f2a490bc428d-kube-api-access-xbcnl\") pod \"assisted-installer-controller-zq2ds\" (UID: \"97215428-2d5d-460f-947c-f2a490bc428d\") " pod="assisted-installer/assisted-installer-controller-zq2ds" Mar 18 08:47:31.532217 master-0 kubenswrapper[3986]: I0318 08:47:31.532158 3986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5ngk7\" (UniqueName: \"kubernetes.io/projected/07a4fd92-0fd1-4688-b2db-de615d75971e-kube-api-access-5ngk7\") pod \"network-operator-7bd846bfc4-5r5r4\" (UID: \"07a4fd92-0fd1-4688-b2db-de615d75971e\") " pod="openshift-network-operator/network-operator-7bd846bfc4-5r5r4" Mar 18 08:47:31.537953 master-0 kubenswrapper[3986]: I0318 08:47:31.537889 3986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3d0b7f60-c32e-48a6-b9e9-87c8f018367d-kube-api-access\") pod \"cluster-version-operator-56d8475767-2xjqg\" (UID: \"3d0b7f60-c32e-48a6-b9e9-87c8f018367d\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-2xjqg" Mar 18 08:47:31.633953 master-0 kubenswrapper[3986]: I0318 08:47:31.633740 3986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="assisted-installer/assisted-installer-controller-zq2ds" Mar 18 08:47:31.649831 master-0 kubenswrapper[3986]: W0318 08:47:31.649717 3986 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod97215428_2d5d_460f_947c_f2a490bc428d.slice/crio-c86f0daa1af8b571957ffb1df5a750b21d97fe93761c60692060e0a17515fcbd WatchSource:0}: Error finding container c86f0daa1af8b571957ffb1df5a750b21d97fe93761c60692060e0a17515fcbd: Status 404 returned error can't find the container with id c86f0daa1af8b571957ffb1df5a750b21d97fe93761c60692060e0a17515fcbd Mar 18 08:47:31.690885 master-0 kubenswrapper[3986]: I0318 08:47:31.690423 3986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-7bd846bfc4-5r5r4" Mar 18 08:47:31.702724 master-0 kubenswrapper[3986]: W0318 08:47:31.702651 3986 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod07a4fd92_0fd1_4688_b2db_de615d75971e.slice/crio-d1bca7add53921531b3272a47166466f7d2ed78f903322c5f6c45062071f9671 WatchSource:0}: Error finding container d1bca7add53921531b3272a47166466f7d2ed78f903322c5f6c45062071f9671: Status 404 returned error can't find the container with id d1bca7add53921531b3272a47166466f7d2ed78f903322c5f6c45062071f9671 Mar 18 08:47:31.749091 master-0 kubenswrapper[3986]: I0318 08:47:31.749027 3986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-7bd846bfc4-5r5r4" event={"ID":"07a4fd92-0fd1-4688-b2db-de615d75971e","Type":"ContainerStarted","Data":"d1bca7add53921531b3272a47166466f7d2ed78f903322c5f6c45062071f9671"} Mar 18 08:47:31.750633 master-0 kubenswrapper[3986]: I0318 08:47:31.750580 3986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="assisted-installer/assisted-installer-controller-zq2ds" 
event={"ID":"97215428-2d5d-460f-947c-f2a490bc428d","Type":"ContainerStarted","Data":"c86f0daa1af8b571957ffb1df5a750b21d97fe93761c60692060e0a17515fcbd"} Mar 18 08:47:32.010022 master-0 kubenswrapper[3986]: I0318 08:47:32.009929 3986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3d0b7f60-c32e-48a6-b9e9-87c8f018367d-serving-cert\") pod \"cluster-version-operator-56d8475767-2xjqg\" (UID: \"3d0b7f60-c32e-48a6-b9e9-87c8f018367d\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-2xjqg" Mar 18 08:47:32.010283 master-0 kubenswrapper[3986]: E0318 08:47:32.010168 3986 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found Mar 18 08:47:32.010380 master-0 kubenswrapper[3986]: E0318 08:47:32.010290 3986 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3d0b7f60-c32e-48a6-b9e9-87c8f018367d-serving-cert podName:3d0b7f60-c32e-48a6-b9e9-87c8f018367d nodeName:}" failed. No retries permitted until 2026-03-18 08:47:33.010255972 +0000 UTC m=+84.417426084 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/3d0b7f60-c32e-48a6-b9e9-87c8f018367d-serving-cert") pod "cluster-version-operator-56d8475767-2xjqg" (UID: "3d0b7f60-c32e-48a6-b9e9-87c8f018367d") : secret "cluster-version-operator-serving-cert" not found Mar 18 08:47:33.015872 master-0 kubenswrapper[3986]: I0318 08:47:33.015821 3986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3d0b7f60-c32e-48a6-b9e9-87c8f018367d-serving-cert\") pod \"cluster-version-operator-56d8475767-2xjqg\" (UID: \"3d0b7f60-c32e-48a6-b9e9-87c8f018367d\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-2xjqg" Mar 18 08:47:33.016354 master-0 kubenswrapper[3986]: E0318 08:47:33.015954 3986 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found Mar 18 08:47:33.016354 master-0 kubenswrapper[3986]: E0318 08:47:33.015999 3986 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3d0b7f60-c32e-48a6-b9e9-87c8f018367d-serving-cert podName:3d0b7f60-c32e-48a6-b9e9-87c8f018367d nodeName:}" failed. No retries permitted until 2026-03-18 08:47:35.015983388 +0000 UTC m=+86.423153470 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/3d0b7f60-c32e-48a6-b9e9-87c8f018367d-serving-cert") pod "cluster-version-operator-56d8475767-2xjqg" (UID: "3d0b7f60-c32e-48a6-b9e9-87c8f018367d") : secret "cluster-version-operator-serving-cert" not found Mar 18 08:47:35.031273 master-0 kubenswrapper[3986]: I0318 08:47:35.031205 3986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3d0b7f60-c32e-48a6-b9e9-87c8f018367d-serving-cert\") pod \"cluster-version-operator-56d8475767-2xjqg\" (UID: \"3d0b7f60-c32e-48a6-b9e9-87c8f018367d\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-2xjqg" Mar 18 08:47:35.032295 master-0 kubenswrapper[3986]: E0318 08:47:35.031450 3986 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found Mar 18 08:47:35.032295 master-0 kubenswrapper[3986]: E0318 08:47:35.031847 3986 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3d0b7f60-c32e-48a6-b9e9-87c8f018367d-serving-cert podName:3d0b7f60-c32e-48a6-b9e9-87c8f018367d nodeName:}" failed. No retries permitted until 2026-03-18 08:47:39.031570354 +0000 UTC m=+90.438740446 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/3d0b7f60-c32e-48a6-b9e9-87c8f018367d-serving-cert") pod "cluster-version-operator-56d8475767-2xjqg" (UID: "3d0b7f60-c32e-48a6-b9e9-87c8f018367d") : secret "cluster-version-operator-serving-cert" not found Mar 18 08:47:36.766200 master-0 kubenswrapper[3986]: I0318 08:47:36.766138 3986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-7bd846bfc4-5r5r4" event={"ID":"07a4fd92-0fd1-4688-b2db-de615d75971e","Type":"ContainerStarted","Data":"20bac68a3a787cd3ab838f8bf47eee1e23fd920610fa248db61e044af450ce49"} Mar 18 08:47:36.768847 master-0 kubenswrapper[3986]: I0318 08:47:36.768798 3986 generic.go:334] "Generic (PLEG): container finished" podID="97215428-2d5d-460f-947c-f2a490bc428d" containerID="af45d378024ee7c220ba697e8109094cfb054515091d9efd5c22113a8f02ec12" exitCode=0 Mar 18 08:47:36.768968 master-0 kubenswrapper[3986]: I0318 08:47:36.768898 3986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="assisted-installer/assisted-installer-controller-zq2ds" event={"ID":"97215428-2d5d-460f-947c-f2a490bc428d","Type":"ContainerDied","Data":"af45d378024ee7c220ba697e8109094cfb054515091d9efd5c22113a8f02ec12"} Mar 18 08:47:36.784406 master-0 kubenswrapper[3986]: I0318 08:47:36.783978 3986 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-network-operator/network-operator-7bd846bfc4-5r5r4" podStartSLOduration=49.186119145 podStartE2EDuration="53.783956737s" podCreationTimestamp="2026-03-18 08:46:43 +0000 UTC" firstStartedPulling="2026-03-18 08:47:31.704670304 +0000 UTC m=+83.111840396" lastFinishedPulling="2026-03-18 08:47:36.302507866 +0000 UTC m=+87.709677988" observedRunningTime="2026-03-18 08:47:36.783577857 +0000 UTC m=+88.190747959" watchObservedRunningTime="2026-03-18 08:47:36.783956737 +0000 UTC m=+88.191126889" Mar 18 08:47:37.786744 master-0 kubenswrapper[3986]: I0318 08:47:37.786690 3986 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="assisted-installer/assisted-installer-controller-zq2ds" Mar 18 08:47:37.855896 master-0 kubenswrapper[3986]: I0318 08:47:37.855802 3986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sno-bootstrap-files\" (UniqueName: \"kubernetes.io/host-path/97215428-2d5d-460f-947c-f2a490bc428d-sno-bootstrap-files\") pod \"97215428-2d5d-460f-947c-f2a490bc428d\" (UID: \"97215428-2d5d-460f-947c-f2a490bc428d\") " Mar 18 08:47:37.856079 master-0 kubenswrapper[3986]: I0318 08:47:37.855896 3986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/97215428-2d5d-460f-947c-f2a490bc428d-host-resolv-conf\") pod \"97215428-2d5d-460f-947c-f2a490bc428d\" (UID: \"97215428-2d5d-460f-947c-f2a490bc428d\") " Mar 18 08:47:37.856079 master-0 kubenswrapper[3986]: I0318 08:47:37.855928 3986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/97215428-2d5d-460f-947c-f2a490bc428d-sno-bootstrap-files" (OuterVolumeSpecName: "sno-bootstrap-files") pod "97215428-2d5d-460f-947c-f2a490bc428d" (UID: "97215428-2d5d-460f-947c-f2a490bc428d"). InnerVolumeSpecName "sno-bootstrap-files". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 08:47:37.856079 master-0 kubenswrapper[3986]: I0318 08:47:37.855947 3986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-ca-bundle\" (UniqueName: \"kubernetes.io/host-path/97215428-2d5d-460f-947c-f2a490bc428d-host-ca-bundle\") pod \"97215428-2d5d-460f-947c-f2a490bc428d\" (UID: \"97215428-2d5d-460f-947c-f2a490bc428d\") " Mar 18 08:47:37.856079 master-0 kubenswrapper[3986]: I0318 08:47:37.855989 3986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/97215428-2d5d-460f-947c-f2a490bc428d-host-ca-bundle" (OuterVolumeSpecName: "host-ca-bundle") pod "97215428-2d5d-460f-947c-f2a490bc428d" (UID: "97215428-2d5d-460f-947c-f2a490bc428d"). InnerVolumeSpecName "host-ca-bundle". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 08:47:37.856079 master-0 kubenswrapper[3986]: I0318 08:47:37.856042 3986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xbcnl\" (UniqueName: \"kubernetes.io/projected/97215428-2d5d-460f-947c-f2a490bc428d-kube-api-access-xbcnl\") pod \"97215428-2d5d-460f-947c-f2a490bc428d\" (UID: \"97215428-2d5d-460f-947c-f2a490bc428d\") " Mar 18 08:47:37.856471 master-0 kubenswrapper[3986]: I0318 08:47:37.856062 3986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/97215428-2d5d-460f-947c-f2a490bc428d-host-resolv-conf" (OuterVolumeSpecName: "host-resolv-conf") pod "97215428-2d5d-460f-947c-f2a490bc428d" (UID: "97215428-2d5d-460f-947c-f2a490bc428d"). InnerVolumeSpecName "host-resolv-conf". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 08:47:37.856471 master-0 kubenswrapper[3986]: I0318 08:47:37.856097 3986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-run-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/97215428-2d5d-460f-947c-f2a490bc428d-host-var-run-resolv-conf\") pod \"97215428-2d5d-460f-947c-f2a490bc428d\" (UID: \"97215428-2d5d-460f-947c-f2a490bc428d\") " Mar 18 08:47:37.856471 master-0 kubenswrapper[3986]: I0318 08:47:37.856125 3986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/97215428-2d5d-460f-947c-f2a490bc428d-host-var-run-resolv-conf" (OuterVolumeSpecName: "host-var-run-resolv-conf") pod "97215428-2d5d-460f-947c-f2a490bc428d" (UID: "97215428-2d5d-460f-947c-f2a490bc428d"). InnerVolumeSpecName "host-var-run-resolv-conf". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 08:47:37.856471 master-0 kubenswrapper[3986]: I0318 08:47:37.856279 3986 reconciler_common.go:293] "Volume detached for volume \"sno-bootstrap-files\" (UniqueName: \"kubernetes.io/host-path/97215428-2d5d-460f-947c-f2a490bc428d-sno-bootstrap-files\") on node \"master-0\" DevicePath \"\"" Mar 18 08:47:37.856471 master-0 kubenswrapper[3986]: I0318 08:47:37.856306 3986 reconciler_common.go:293] "Volume detached for volume \"host-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/97215428-2d5d-460f-947c-f2a490bc428d-host-resolv-conf\") on node \"master-0\" DevicePath \"\"" Mar 18 08:47:37.856471 master-0 kubenswrapper[3986]: I0318 08:47:37.856323 3986 reconciler_common.go:293] "Volume detached for volume \"host-ca-bundle\" (UniqueName: \"kubernetes.io/host-path/97215428-2d5d-460f-947c-f2a490bc428d-host-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 18 08:47:37.856471 master-0 kubenswrapper[3986]: I0318 08:47:37.856338 3986 reconciler_common.go:293] "Volume detached for volume \"host-var-run-resolv-conf\" (UniqueName: 
\"kubernetes.io/host-path/97215428-2d5d-460f-947c-f2a490bc428d-host-var-run-resolv-conf\") on node \"master-0\" DevicePath \"\"" Mar 18 08:47:37.859926 master-0 kubenswrapper[3986]: I0318 08:47:37.859872 3986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/97215428-2d5d-460f-947c-f2a490bc428d-kube-api-access-xbcnl" (OuterVolumeSpecName: "kube-api-access-xbcnl") pod "97215428-2d5d-460f-947c-f2a490bc428d" (UID: "97215428-2d5d-460f-947c-f2a490bc428d"). InnerVolumeSpecName "kube-api-access-xbcnl". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 08:47:37.956665 master-0 kubenswrapper[3986]: I0318 08:47:37.956594 3986 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xbcnl\" (UniqueName: \"kubernetes.io/projected/97215428-2d5d-460f-947c-f2a490bc428d-kube-api-access-xbcnl\") on node \"master-0\" DevicePath \"\"" Mar 18 08:47:38.444593 master-0 kubenswrapper[3986]: I0318 08:47:38.444485 3986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["kube-system/bootstrap-kube-controller-manager-master-0"] Mar 18 08:47:38.776132 master-0 kubenswrapper[3986]: I0318 08:47:38.776015 3986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="assisted-installer/assisted-installer-controller-zq2ds" event={"ID":"97215428-2d5d-460f-947c-f2a490bc428d","Type":"ContainerDied","Data":"c86f0daa1af8b571957ffb1df5a750b21d97fe93761c60692060e0a17515fcbd"} Mar 18 08:47:38.776132 master-0 kubenswrapper[3986]: I0318 08:47:38.776065 3986 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="assisted-installer/assisted-installer-controller-zq2ds" Mar 18 08:47:38.776132 master-0 kubenswrapper[3986]: I0318 08:47:38.776079 3986 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c86f0daa1af8b571957ffb1df5a750b21d97fe93761c60692060e0a17515fcbd" Mar 18 08:47:38.816514 master-0 kubenswrapper[3986]: I0318 08:47:38.816395 3986 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/bootstrap-kube-controller-manager-master-0" podStartSLOduration=0.816341858 podStartE2EDuration="816.341858ms" podCreationTimestamp="2026-03-18 08:47:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 08:47:38.816105982 +0000 UTC m=+90.223276094" watchObservedRunningTime="2026-03-18 08:47:38.816341858 +0000 UTC m=+90.223511970" Mar 18 08:47:39.002646 master-0 kubenswrapper[3986]: I0318 08:47:39.002590 3986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-operator/mtu-prober-xrcjr"] Mar 18 08:47:39.002901 master-0 kubenswrapper[3986]: E0318 08:47:39.002689 3986 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97215428-2d5d-460f-947c-f2a490bc428d" containerName="assisted-installer-controller" Mar 18 08:47:39.002901 master-0 kubenswrapper[3986]: I0318 08:47:39.002705 3986 state_mem.go:107] "Deleted CPUSet assignment" podUID="97215428-2d5d-460f-947c-f2a490bc428d" containerName="assisted-installer-controller" Mar 18 08:47:39.002901 master-0 kubenswrapper[3986]: I0318 08:47:39.002735 3986 memory_manager.go:354] "RemoveStaleState removing state" podUID="97215428-2d5d-460f-947c-f2a490bc428d" containerName="assisted-installer-controller" Mar 18 08:47:39.003029 master-0 kubenswrapper[3986]: I0318 08:47:39.002968 3986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/mtu-prober-xrcjr" Mar 18 08:47:39.066798 master-0 kubenswrapper[3986]: I0318 08:47:39.066649 3986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3d0b7f60-c32e-48a6-b9e9-87c8f018367d-serving-cert\") pod \"cluster-version-operator-56d8475767-2xjqg\" (UID: \"3d0b7f60-c32e-48a6-b9e9-87c8f018367d\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-2xjqg" Mar 18 08:47:39.066798 master-0 kubenswrapper[3986]: I0318 08:47:39.066749 3986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9jhhk\" (UniqueName: \"kubernetes.io/projected/51cee994-bbd7-45f2-9757-c270d47c276a-kube-api-access-9jhhk\") pod \"mtu-prober-xrcjr\" (UID: \"51cee994-bbd7-45f2-9757-c270d47c276a\") " pod="openshift-network-operator/mtu-prober-xrcjr" Mar 18 08:47:39.067062 master-0 kubenswrapper[3986]: E0318 08:47:39.066910 3986 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found Mar 18 08:47:39.067062 master-0 kubenswrapper[3986]: E0318 08:47:39.067042 3986 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3d0b7f60-c32e-48a6-b9e9-87c8f018367d-serving-cert podName:3d0b7f60-c32e-48a6-b9e9-87c8f018367d nodeName:}" failed. No retries permitted until 2026-03-18 08:47:47.067001195 +0000 UTC m=+98.474171437 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/3d0b7f60-c32e-48a6-b9e9-87c8f018367d-serving-cert") pod "cluster-version-operator-56d8475767-2xjqg" (UID: "3d0b7f60-c32e-48a6-b9e9-87c8f018367d") : secret "cluster-version-operator-serving-cert" not found Mar 18 08:47:39.168207 master-0 kubenswrapper[3986]: I0318 08:47:39.168072 3986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9jhhk\" (UniqueName: \"kubernetes.io/projected/51cee994-bbd7-45f2-9757-c270d47c276a-kube-api-access-9jhhk\") pod \"mtu-prober-xrcjr\" (UID: \"51cee994-bbd7-45f2-9757-c270d47c276a\") " pod="openshift-network-operator/mtu-prober-xrcjr" Mar 18 08:47:39.195887 master-0 kubenswrapper[3986]: I0318 08:47:39.195766 3986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9jhhk\" (UniqueName: \"kubernetes.io/projected/51cee994-bbd7-45f2-9757-c270d47c276a-kube-api-access-9jhhk\") pod \"mtu-prober-xrcjr\" (UID: \"51cee994-bbd7-45f2-9757-c270d47c276a\") " pod="openshift-network-operator/mtu-prober-xrcjr" Mar 18 08:47:39.315814 master-0 kubenswrapper[3986]: I0318 08:47:39.315702 3986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/mtu-prober-xrcjr" Mar 18 08:47:39.330946 master-0 kubenswrapper[3986]: W0318 08:47:39.330837 3986 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod51cee994_bbd7_45f2_9757_c270d47c276a.slice/crio-e405be03e85526b1d05a9e6638d9433f5fcf432c4e04e5890d5bc45664d267c7 WatchSource:0}: Error finding container e405be03e85526b1d05a9e6638d9433f5fcf432c4e04e5890d5bc45664d267c7: Status 404 returned error can't find the container with id e405be03e85526b1d05a9e6638d9433f5fcf432c4e04e5890d5bc45664d267c7 Mar 18 08:47:39.436190 master-0 kubenswrapper[3986]: I0318 08:47:39.436127 3986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["kube-system/bootstrap-kube-scheduler-master-0"] Mar 18 08:47:39.781994 master-0 kubenswrapper[3986]: I0318 08:47:39.781903 3986 generic.go:334] "Generic (PLEG): container finished" podID="51cee994-bbd7-45f2-9757-c270d47c276a" containerID="51dc55afbcfce4c386c5bd0bc1deafcfc0ec711be4ef96fdaaef56b5f72c67a2" exitCode=0 Mar 18 08:47:39.782269 master-0 kubenswrapper[3986]: I0318 08:47:39.782021 3986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/mtu-prober-xrcjr" event={"ID":"51cee994-bbd7-45f2-9757-c270d47c276a","Type":"ContainerDied","Data":"51dc55afbcfce4c386c5bd0bc1deafcfc0ec711be4ef96fdaaef56b5f72c67a2"} Mar 18 08:47:39.782269 master-0 kubenswrapper[3986]: I0318 08:47:39.782095 3986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/mtu-prober-xrcjr" event={"ID":"51cee994-bbd7-45f2-9757-c270d47c276a","Type":"ContainerStarted","Data":"e405be03e85526b1d05a9e6638d9433f5fcf432c4e04e5890d5bc45664d267c7"} Mar 18 08:47:39.803965 master-0 kubenswrapper[3986]: I0318 08:47:39.803602 3986 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/bootstrap-kube-scheduler-master-0" podStartSLOduration=0.803541856 
podStartE2EDuration="803.541856ms" podCreationTimestamp="2026-03-18 08:47:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 08:47:39.803066454 +0000 UTC m=+91.210236576" watchObservedRunningTime="2026-03-18 08:47:39.803541856 +0000 UTC m=+91.210711978" Mar 18 08:47:40.800230 master-0 kubenswrapper[3986]: I0318 08:47:40.800173 3986 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/mtu-prober-xrcjr" Mar 18 08:47:40.883644 master-0 kubenswrapper[3986]: I0318 08:47:40.883525 3986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9jhhk\" (UniqueName: \"kubernetes.io/projected/51cee994-bbd7-45f2-9757-c270d47c276a-kube-api-access-9jhhk\") pod \"51cee994-bbd7-45f2-9757-c270d47c276a\" (UID: \"51cee994-bbd7-45f2-9757-c270d47c276a\") " Mar 18 08:47:40.887467 master-0 kubenswrapper[3986]: I0318 08:47:40.887369 3986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/51cee994-bbd7-45f2-9757-c270d47c276a-kube-api-access-9jhhk" (OuterVolumeSpecName: "kube-api-access-9jhhk") pod "51cee994-bbd7-45f2-9757-c270d47c276a" (UID: "51cee994-bbd7-45f2-9757-c270d47c276a"). InnerVolumeSpecName "kube-api-access-9jhhk". 
PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 18 08:47:40.985056 master-0 kubenswrapper[3986]: I0318 08:47:40.984978 3986 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9jhhk\" (UniqueName: \"kubernetes.io/projected/51cee994-bbd7-45f2-9757-c270d47c276a-kube-api-access-9jhhk\") on node \"master-0\" DevicePath \"\""
Mar 18 08:47:41.790036 master-0 kubenswrapper[3986]: I0318 08:47:41.789974 3986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/mtu-prober-xrcjr" event={"ID":"51cee994-bbd7-45f2-9757-c270d47c276a","Type":"ContainerDied","Data":"e405be03e85526b1d05a9e6638d9433f5fcf432c4e04e5890d5bc45664d267c7"}
Mar 18 08:47:41.790036 master-0 kubenswrapper[3986]: I0318 08:47:41.790031 3986 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e405be03e85526b1d05a9e6638d9433f5fcf432c4e04e5890d5bc45664d267c7"
Mar 18 08:47:41.790322 master-0 kubenswrapper[3986]: I0318 08:47:41.790107 3986 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/mtu-prober-xrcjr"
Mar 18 08:47:43.451699 master-0 kubenswrapper[3986]: I0318 08:47:43.451607 3986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"]
Mar 18 08:47:44.011543 master-0 kubenswrapper[3986]: I0318 08:47:44.011472 3986 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-network-operator/mtu-prober-xrcjr"]
Mar 18 08:47:44.011824 master-0 kubenswrapper[3986]: I0318 08:47:44.011564 3986 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-network-operator/mtu-prober-xrcjr"]
Mar 18 08:47:45.433515 master-0 kubenswrapper[3986]: I0318 08:47:45.433385 3986 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="51cee994-bbd7-45f2-9757-c270d47c276a" path="/var/lib/kubelet/pods/51cee994-bbd7-45f2-9757-c270d47c276a/volumes"
Mar 18 08:47:47.135594 master-0 kubenswrapper[3986]: I0318 08:47:47.135469 3986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3d0b7f60-c32e-48a6-b9e9-87c8f018367d-serving-cert\") pod \"cluster-version-operator-56d8475767-2xjqg\" (UID: \"3d0b7f60-c32e-48a6-b9e9-87c8f018367d\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-2xjqg"
Mar 18 08:47:47.136345 master-0 kubenswrapper[3986]: E0318 08:47:47.135657 3986 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found
Mar 18 08:47:47.136345 master-0 kubenswrapper[3986]: E0318 08:47:47.135738 3986 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3d0b7f60-c32e-48a6-b9e9-87c8f018367d-serving-cert podName:3d0b7f60-c32e-48a6-b9e9-87c8f018367d nodeName:}" failed. No retries permitted until 2026-03-18 08:48:03.135715423 +0000 UTC m=+114.542885505 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/3d0b7f60-c32e-48a6-b9e9-87c8f018367d-serving-cert") pod "cluster-version-operator-56d8475767-2xjqg" (UID: "3d0b7f60-c32e-48a6-b9e9-87c8f018367d") : secret "cluster-version-operator-serving-cert" not found
Mar 18 08:47:48.887719 master-0 kubenswrapper[3986]: I0318 08:47:48.887629 3986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-bpf5c"]
Mar 18 08:47:48.888480 master-0 kubenswrapper[3986]: E0318 08:47:48.887805 3986 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51cee994-bbd7-45f2-9757-c270d47c276a" containerName="prober"
Mar 18 08:47:48.888480 master-0 kubenswrapper[3986]: I0318 08:47:48.887835 3986 state_mem.go:107] "Deleted CPUSet assignment" podUID="51cee994-bbd7-45f2-9757-c270d47c276a" containerName="prober"
Mar 18 08:47:48.888480 master-0 kubenswrapper[3986]: I0318 08:47:48.887955 3986 memory_manager.go:354] "RemoveStaleState removing state" podUID="51cee994-bbd7-45f2-9757-c270d47c276a" containerName="prober"
Mar 18 08:47:48.888480 master-0 kubenswrapper[3986]: I0318 08:47:48.888362 3986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-bpf5c"
Mar 18 08:47:48.891830 master-0 kubenswrapper[3986]: I0318 08:47:48.891770 3986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources"
Mar 18 08:47:48.893146 master-0 kubenswrapper[3986]: I0318 08:47:48.893088 3986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config"
Mar 18 08:47:48.893415 master-0 kubenswrapper[3986]: I0318 08:47:48.893366 3986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt"
Mar 18 08:47:48.893499 master-0 kubenswrapper[3986]: I0318 08:47:48.893445 3986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt"
Mar 18 08:47:48.949018 master-0 kubenswrapper[3986]: I0318 08:47:48.948940 3986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4-cni-binary-copy\") pod \"multus-bpf5c\" (UID: \"fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4\") " pod="openshift-multus/multus-bpf5c"
Mar 18 08:47:48.949247 master-0 kubenswrapper[3986]: I0318 08:47:48.949023 3986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4-host-run-k8s-cni-cncf-io\") pod \"multus-bpf5c\" (UID: \"fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4\") " pod="openshift-multus/multus-bpf5c"
Mar 18 08:47:48.949247 master-0 kubenswrapper[3986]: I0318 08:47:48.949101 3986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4-host-run-netns\") pod \"multus-bpf5c\" (UID: \"fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4\") " pod="openshift-multus/multus-bpf5c"
Mar 18 08:47:48.949247 master-0 kubenswrapper[3986]: I0318 08:47:48.949195 3986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4-cnibin\") pod \"multus-bpf5c\" (UID: \"fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4\") " pod="openshift-multus/multus-bpf5c"
Mar 18 08:47:48.949431 master-0 kubenswrapper[3986]: I0318 08:47:48.949243 3986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4-host-var-lib-cni-bin\") pod \"multus-bpf5c\" (UID: \"fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4\") " pod="openshift-multus/multus-bpf5c"
Mar 18 08:47:48.949431 master-0 kubenswrapper[3986]: I0318 08:47:48.949324 3986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4-os-release\") pod \"multus-bpf5c\" (UID: \"fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4\") " pod="openshift-multus/multus-bpf5c"
Mar 18 08:47:48.949431 master-0 kubenswrapper[3986]: I0318 08:47:48.949373 3986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4-multus-socket-dir-parent\") pod \"multus-bpf5c\" (UID: \"fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4\") " pod="openshift-multus/multus-bpf5c"
Mar 18 08:47:48.949431 master-0 kubenswrapper[3986]: I0318 08:47:48.949421 3986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4-hostroot\") pod \"multus-bpf5c\" (UID: \"fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4\") " pod="openshift-multus/multus-bpf5c"
Mar 18 08:47:48.949648 master-0 kubenswrapper[3986]: I0318 08:47:48.949508 3986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4-etc-kubernetes\") pod \"multus-bpf5c\" (UID: \"fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4\") " pod="openshift-multus/multus-bpf5c"
Mar 18 08:47:48.949648 master-0 kubenswrapper[3986]: I0318 08:47:48.949583 3986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4-multus-daemon-config\") pod \"multus-bpf5c\" (UID: \"fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4\") " pod="openshift-multus/multus-bpf5c"
Mar 18 08:47:48.949648 master-0 kubenswrapper[3986]: I0318 08:47:48.949633 3986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4-multus-cni-dir\") pod \"multus-bpf5c\" (UID: \"fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4\") " pod="openshift-multus/multus-bpf5c"
Mar 18 08:47:48.949809 master-0 kubenswrapper[3986]: I0318 08:47:48.949675 3986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4-host-var-lib-kubelet\") pod \"multus-bpf5c\" (UID: \"fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4\") " pod="openshift-multus/multus-bpf5c"
Mar 18 08:47:48.949809 master-0 kubenswrapper[3986]: I0318 08:47:48.949739 3986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4-host-run-multus-certs\") pod \"multus-bpf5c\" (UID: \"fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4\") " pod="openshift-multus/multus-bpf5c"
Mar 18 08:47:48.949809 master-0 kubenswrapper[3986]: I0318 08:47:48.949785 3986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4-system-cni-dir\") pod \"multus-bpf5c\" (UID: \"fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4\") " pod="openshift-multus/multus-bpf5c"
Mar 18 08:47:48.950017 master-0 kubenswrapper[3986]: I0318 08:47:48.949816 3986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4-host-var-lib-cni-multus\") pod \"multus-bpf5c\" (UID: \"fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4\") " pod="openshift-multus/multus-bpf5c"
Mar 18 08:47:48.950017 master-0 kubenswrapper[3986]: I0318 08:47:48.949850 3986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4-multus-conf-dir\") pod \"multus-bpf5c\" (UID: \"fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4\") " pod="openshift-multus/multus-bpf5c"
Mar 18 08:47:48.950017 master-0 kubenswrapper[3986]: I0318 08:47:48.949928 3986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hpl2c\" (UniqueName: \"kubernetes.io/projected/fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4-kube-api-access-hpl2c\") pod \"multus-bpf5c\" (UID: \"fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4\") " pod="openshift-multus/multus-bpf5c"
Mar 18 08:47:49.051148 master-0 kubenswrapper[3986]: I0318 08:47:49.051030 3986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4-host-var-lib-cni-multus\") pod \"multus-bpf5c\" (UID: \"fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4\") " pod="openshift-multus/multus-bpf5c"
Mar 18 08:47:49.051148 master-0 kubenswrapper[3986]: I0318 08:47:49.051118 3986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4-multus-conf-dir\") pod \"multus-bpf5c\" (UID: \"fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4\") " pod="openshift-multus/multus-bpf5c"
Mar 18 08:47:49.051499 master-0 kubenswrapper[3986]: I0318 08:47:49.051265 3986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4-multus-conf-dir\") pod \"multus-bpf5c\" (UID: \"fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4\") " pod="openshift-multus/multus-bpf5c"
Mar 18 08:47:49.051499 master-0 kubenswrapper[3986]: I0318 08:47:49.051365 3986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hpl2c\" (UniqueName: \"kubernetes.io/projected/fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4-kube-api-access-hpl2c\") pod \"multus-bpf5c\" (UID: \"fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4\") " pod="openshift-multus/multus-bpf5c"
Mar 18 08:47:49.051499 master-0 kubenswrapper[3986]: I0318 08:47:49.051411 3986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4-host-run-netns\") pod \"multus-bpf5c\" (UID: \"fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4\") " pod="openshift-multus/multus-bpf5c"
Mar 18 08:47:49.051499 master-0 kubenswrapper[3986]: I0318 08:47:49.051444 3986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4-cni-binary-copy\") pod \"multus-bpf5c\" (UID: \"fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4\") " pod="openshift-multus/multus-bpf5c"
Mar 18 08:47:49.051499 master-0 kubenswrapper[3986]: I0318 08:47:49.051476 3986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4-host-run-k8s-cni-cncf-io\") pod \"multus-bpf5c\" (UID: \"fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4\") " pod="openshift-multus/multus-bpf5c"
Mar 18 08:47:49.051786 master-0 kubenswrapper[3986]: I0318 08:47:49.051514 3986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4-cnibin\") pod \"multus-bpf5c\" (UID: \"fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4\") " pod="openshift-multus/multus-bpf5c"
Mar 18 08:47:49.051786 master-0 kubenswrapper[3986]: I0318 08:47:49.051559 3986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4-host-var-lib-cni-bin\") pod \"multus-bpf5c\" (UID: \"fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4\") " pod="openshift-multus/multus-bpf5c"
Mar 18 08:47:49.051786 master-0 kubenswrapper[3986]: I0318 08:47:49.051621 3986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4-os-release\") pod \"multus-bpf5c\" (UID: \"fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4\") " pod="openshift-multus/multus-bpf5c"
Mar 18 08:47:49.051786 master-0 kubenswrapper[3986]: I0318 08:47:49.051657 3986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4-multus-socket-dir-parent\") pod \"multus-bpf5c\" (UID: \"fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4\") " pod="openshift-multus/multus-bpf5c"
Mar 18 08:47:49.051786 master-0 kubenswrapper[3986]: I0318 08:47:49.051702 3986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4-hostroot\") pod \"multus-bpf5c\" (UID: \"fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4\") " pod="openshift-multus/multus-bpf5c"
Mar 18 08:47:49.051786 master-0 kubenswrapper[3986]: I0318 08:47:49.051744 3986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4-etc-kubernetes\") pod \"multus-bpf5c\" (UID: \"fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4\") " pod="openshift-multus/multus-bpf5c"
Mar 18 08:47:49.052205 master-0 kubenswrapper[3986]: I0318 08:47:49.051985 3986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4-host-run-k8s-cni-cncf-io\") pod \"multus-bpf5c\" (UID: \"fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4\") " pod="openshift-multus/multus-bpf5c"
Mar 18 08:47:49.052205 master-0 kubenswrapper[3986]: I0318 08:47:49.052097 3986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4-multus-daemon-config\") pod \"multus-bpf5c\" (UID: \"fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4\") " pod="openshift-multus/multus-bpf5c"
Mar 18 08:47:49.052205 master-0 kubenswrapper[3986]: I0318 08:47:49.052155 3986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4-host-run-multus-certs\") pod \"multus-bpf5c\" (UID: \"fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4\") " pod="openshift-multus/multus-bpf5c"
Mar 18 08:47:49.052387 master-0 kubenswrapper[3986]: I0318 08:47:49.052207 3986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4-system-cni-dir\") pod \"multus-bpf5c\" (UID: \"fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4\") " pod="openshift-multus/multus-bpf5c"
Mar 18 08:47:49.052387 master-0 kubenswrapper[3986]: I0318 08:47:49.052254 3986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4-multus-cni-dir\") pod \"multus-bpf5c\" (UID: \"fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4\") " pod="openshift-multus/multus-bpf5c"
Mar 18 08:47:49.052387 master-0 kubenswrapper[3986]: I0318 08:47:49.052337 3986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4-host-run-multus-certs\") pod \"multus-bpf5c\" (UID: \"fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4\") " pod="openshift-multus/multus-bpf5c"
Mar 18 08:47:49.052565 master-0 kubenswrapper[3986]: I0318 08:47:49.052486 3986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4-host-var-lib-kubelet\") pod \"multus-bpf5c\" (UID: \"fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4\") " pod="openshift-multus/multus-bpf5c"
Mar 18 08:47:49.052565 master-0 kubenswrapper[3986]: I0318 08:47:49.052536 3986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4-multus-cni-dir\") pod \"multus-bpf5c\" (UID: \"fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4\") " pod="openshift-multus/multus-bpf5c"
Mar 18 08:47:49.052565 master-0 kubenswrapper[3986]: I0318 08:47:49.052527 3986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4-host-var-lib-cni-bin\") pod \"multus-bpf5c\" (UID: \"fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4\") " pod="openshift-multus/multus-bpf5c"
Mar 18 08:47:49.052745 master-0 kubenswrapper[3986]: I0318 08:47:49.052569 3986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4-system-cni-dir\") pod \"multus-bpf5c\" (UID: \"fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4\") " pod="openshift-multus/multus-bpf5c"
Mar 18 08:47:49.052745 master-0 kubenswrapper[3986]: I0318 08:47:49.052613 3986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4-cnibin\") pod \"multus-bpf5c\" (UID: \"fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4\") " pod="openshift-multus/multus-bpf5c"
Mar 18 08:47:49.052745 master-0 kubenswrapper[3986]: I0318 08:47:49.052645 3986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4-os-release\") pod \"multus-bpf5c\" (UID: \"fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4\") " pod="openshift-multus/multus-bpf5c"
Mar 18 08:47:49.052745 master-0 kubenswrapper[3986]: I0318 08:47:49.052671 3986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4-host-var-lib-cni-multus\") pod \"multus-bpf5c\" (UID: \"fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4\") " pod="openshift-multus/multus-bpf5c"
Mar 18 08:47:49.052745 master-0 kubenswrapper[3986]: I0318 08:47:49.052677 3986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4-multus-socket-dir-parent\") pod \"multus-bpf5c\" (UID: \"fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4\") " pod="openshift-multus/multus-bpf5c"
Mar 18 08:47:49.052745 master-0 kubenswrapper[3986]: I0318 08:47:49.052630 3986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4-host-var-lib-kubelet\") pod \"multus-bpf5c\" (UID: \"fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4\") " pod="openshift-multus/multus-bpf5c"
Mar 18 08:47:49.052745 master-0 kubenswrapper[3986]: I0318 08:47:49.052478 3986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4-etc-kubernetes\") pod \"multus-bpf5c\" (UID: \"fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4\") " pod="openshift-multus/multus-bpf5c"
Mar 18 08:47:49.053190 master-0 kubenswrapper[3986]: I0318 08:47:49.052724 3986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4-hostroot\") pod \"multus-bpf5c\" (UID: \"fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4\") " pod="openshift-multus/multus-bpf5c"
Mar 18 08:47:49.053190 master-0 kubenswrapper[3986]: I0318 08:47:49.052944 3986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4-host-run-netns\") pod \"multus-bpf5c\" (UID: \"fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4\") " pod="openshift-multus/multus-bpf5c"
Mar 18 08:47:49.054381 master-0 kubenswrapper[3986]: I0318 08:47:49.054327 3986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4-multus-daemon-config\") pod \"multus-bpf5c\" (UID: \"fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4\") " pod="openshift-multus/multus-bpf5c"
Mar 18 08:47:49.054381 master-0 kubenswrapper[3986]: I0318 08:47:49.054358 3986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4-cni-binary-copy\") pod \"multus-bpf5c\" (UID: \"fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4\") " pod="openshift-multus/multus-bpf5c"
Mar 18 08:47:49.082694 master-0 kubenswrapper[3986]: I0318 08:47:49.082639 3986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hpl2c\" (UniqueName: \"kubernetes.io/projected/fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4-kube-api-access-hpl2c\") pod \"multus-bpf5c\" (UID: \"fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4\") " pod="openshift-multus/multus-bpf5c"
Mar 18 08:47:49.095052 master-0 kubenswrapper[3986]: I0318 08:47:49.094994 3986 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" podStartSLOduration=6.094974543 podStartE2EDuration="6.094974543s" podCreationTimestamp="2026-03-18 08:47:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 08:47:48.928054777 +0000 UTC m=+100.335224919" watchObservedRunningTime="2026-03-18 08:47:49.094974543 +0000 UTC m=+100.502144635"
Mar 18 08:47:49.095777 master-0 kubenswrapper[3986]: I0318 08:47:49.095747 3986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-xpzrz"]
Mar 18 08:47:49.096471 master-0 kubenswrapper[3986]: I0318 08:47:49.096450 3986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-xpzrz"
Mar 18 08:47:49.100000 master-0 kubenswrapper[3986]: I0318 08:47:49.099919 3986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"whereabouts-flatfile-config"
Mar 18 08:47:49.101402 master-0 kubenswrapper[3986]: I0318 08:47:49.101355 3986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist"
Mar 18 08:47:49.212663 master-0 kubenswrapper[3986]: I0318 08:47:49.212476 3986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-bpf5c"
Mar 18 08:47:49.254758 master-0 kubenswrapper[3986]: I0318 08:47:49.254680 3986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/f9fa104a-4979-4023-8d7e-a965f11bc7db-tuning-conf-dir\") pod \"multus-additional-cni-plugins-xpzrz\" (UID: \"f9fa104a-4979-4023-8d7e-a965f11bc7db\") " pod="openshift-multus/multus-additional-cni-plugins-xpzrz"
Mar 18 08:47:49.255003 master-0 kubenswrapper[3986]: I0318 08:47:49.254748 3986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/f9fa104a-4979-4023-8d7e-a965f11bc7db-cni-binary-copy\") pod \"multus-additional-cni-plugins-xpzrz\" (UID: \"f9fa104a-4979-4023-8d7e-a965f11bc7db\") " pod="openshift-multus/multus-additional-cni-plugins-xpzrz"
Mar 18 08:47:49.255003 master-0 kubenswrapper[3986]: I0318 08:47:49.254811 3986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jlwg9\" (UniqueName: \"kubernetes.io/projected/f9fa104a-4979-4023-8d7e-a965f11bc7db-kube-api-access-jlwg9\") pod \"multus-additional-cni-plugins-xpzrz\" (UID: \"f9fa104a-4979-4023-8d7e-a965f11bc7db\") " pod="openshift-multus/multus-additional-cni-plugins-xpzrz"
Mar 18 08:47:49.255003 master-0 kubenswrapper[3986]: I0318 08:47:49.254880 3986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/f9fa104a-4979-4023-8d7e-a965f11bc7db-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-xpzrz\" (UID: \"f9fa104a-4979-4023-8d7e-a965f11bc7db\") " pod="openshift-multus/multus-additional-cni-plugins-xpzrz"
Mar 18 08:47:49.255003 master-0 kubenswrapper[3986]: I0318 08:47:49.254963 3986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/f9fa104a-4979-4023-8d7e-a965f11bc7db-os-release\") pod \"multus-additional-cni-plugins-xpzrz\" (UID: \"f9fa104a-4979-4023-8d7e-a965f11bc7db\") " pod="openshift-multus/multus-additional-cni-plugins-xpzrz"
Mar 18 08:47:49.255003 master-0 kubenswrapper[3986]: I0318 08:47:49.254997 3986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/f9fa104a-4979-4023-8d7e-a965f11bc7db-cnibin\") pod \"multus-additional-cni-plugins-xpzrz\" (UID: \"f9fa104a-4979-4023-8d7e-a965f11bc7db\") " pod="openshift-multus/multus-additional-cni-plugins-xpzrz"
Mar 18 08:47:49.255381 master-0 kubenswrapper[3986]: I0318 08:47:49.255037 3986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/f9fa104a-4979-4023-8d7e-a965f11bc7db-system-cni-dir\") pod \"multus-additional-cni-plugins-xpzrz\" (UID: \"f9fa104a-4979-4023-8d7e-a965f11bc7db\") " pod="openshift-multus/multus-additional-cni-plugins-xpzrz"
Mar 18 08:47:49.255381 master-0 kubenswrapper[3986]: I0318 08:47:49.255080 3986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/f9fa104a-4979-4023-8d7e-a965f11bc7db-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-xpzrz\" (UID: \"f9fa104a-4979-4023-8d7e-a965f11bc7db\") " pod="openshift-multus/multus-additional-cni-plugins-xpzrz"
Mar 18 08:47:49.356153 master-0 kubenswrapper[3986]: I0318 08:47:49.356047 3986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/f9fa104a-4979-4023-8d7e-a965f11bc7db-tuning-conf-dir\") pod \"multus-additional-cni-plugins-xpzrz\" (UID: \"f9fa104a-4979-4023-8d7e-a965f11bc7db\") " pod="openshift-multus/multus-additional-cni-plugins-xpzrz"
Mar 18 08:47:49.356153 master-0 kubenswrapper[3986]: I0318 08:47:49.356126 3986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/f9fa104a-4979-4023-8d7e-a965f11bc7db-cni-binary-copy\") pod \"multus-additional-cni-plugins-xpzrz\" (UID: \"f9fa104a-4979-4023-8d7e-a965f11bc7db\") " pod="openshift-multus/multus-additional-cni-plugins-xpzrz"
Mar 18 08:47:49.356442 master-0 kubenswrapper[3986]: I0318 08:47:49.356180 3986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jlwg9\" (UniqueName: \"kubernetes.io/projected/f9fa104a-4979-4023-8d7e-a965f11bc7db-kube-api-access-jlwg9\") pod \"multus-additional-cni-plugins-xpzrz\" (UID: \"f9fa104a-4979-4023-8d7e-a965f11bc7db\") " pod="openshift-multus/multus-additional-cni-plugins-xpzrz"
Mar 18 08:47:49.356442 master-0 kubenswrapper[3986]: I0318 08:47:49.356230 3986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/f9fa104a-4979-4023-8d7e-a965f11bc7db-os-release\") pod \"multus-additional-cni-plugins-xpzrz\" (UID: \"f9fa104a-4979-4023-8d7e-a965f11bc7db\") " pod="openshift-multus/multus-additional-cni-plugins-xpzrz"
Mar 18 08:47:49.356442 master-0 kubenswrapper[3986]: I0318 08:47:49.356279 3986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/f9fa104a-4979-4023-8d7e-a965f11bc7db-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-xpzrz\" (UID: \"f9fa104a-4979-4023-8d7e-a965f11bc7db\") " pod="openshift-multus/multus-additional-cni-plugins-xpzrz"
Mar 18 08:47:49.356442 master-0 kubenswrapper[3986]: I0318 08:47:49.356349 3986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/f9fa104a-4979-4023-8d7e-a965f11bc7db-system-cni-dir\") pod \"multus-additional-cni-plugins-xpzrz\" (UID: \"f9fa104a-4979-4023-8d7e-a965f11bc7db\") " pod="openshift-multus/multus-additional-cni-plugins-xpzrz"
Mar 18 08:47:49.356773 master-0 kubenswrapper[3986]: I0318 08:47:49.356449 3986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/f9fa104a-4979-4023-8d7e-a965f11bc7db-system-cni-dir\") pod \"multus-additional-cni-plugins-xpzrz\" (UID: \"f9fa104a-4979-4023-8d7e-a965f11bc7db\") " pod="openshift-multus/multus-additional-cni-plugins-xpzrz"
Mar 18 08:47:49.356773 master-0 kubenswrapper[3986]: I0318 08:47:49.356454 3986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/f9fa104a-4979-4023-8d7e-a965f11bc7db-tuning-conf-dir\") pod \"multus-additional-cni-plugins-xpzrz\" (UID: \"f9fa104a-4979-4023-8d7e-a965f11bc7db\") " pod="openshift-multus/multus-additional-cni-plugins-xpzrz"
Mar 18 08:47:49.356773 master-0 kubenswrapper[3986]: I0318 08:47:49.356496 3986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/f9fa104a-4979-4023-8d7e-a965f11bc7db-cnibin\") pod \"multus-additional-cni-plugins-xpzrz\" (UID: \"f9fa104a-4979-4023-8d7e-a965f11bc7db\") " pod="openshift-multus/multus-additional-cni-plugins-xpzrz"
Mar 18 08:47:49.356773 master-0 kubenswrapper[3986]: I0318 08:47:49.356570 3986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/f9fa104a-4979-4023-8d7e-a965f11bc7db-os-release\") pod \"multus-additional-cni-plugins-xpzrz\" (UID: \"f9fa104a-4979-4023-8d7e-a965f11bc7db\") " pod="openshift-multus/multus-additional-cni-plugins-xpzrz"
Mar 18 08:47:49.356773 master-0 kubenswrapper[3986]: I0318 08:47:49.356618 3986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/f9fa104a-4979-4023-8d7e-a965f11bc7db-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-xpzrz\" (UID: \"f9fa104a-4979-4023-8d7e-a965f11bc7db\") " pod="openshift-multus/multus-additional-cni-plugins-xpzrz"
Mar 18 08:47:49.356773 master-0 kubenswrapper[3986]: I0318 08:47:49.356674 3986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/f9fa104a-4979-4023-8d7e-a965f11bc7db-cnibin\") pod \"multus-additional-cni-plugins-xpzrz\" (UID: \"f9fa104a-4979-4023-8d7e-a965f11bc7db\") " pod="openshift-multus/multus-additional-cni-plugins-xpzrz"
Mar 18 08:47:49.357753 master-0 kubenswrapper[3986]: I0318 08:47:49.357690 3986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/f9fa104a-4979-4023-8d7e-a965f11bc7db-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-xpzrz\" (UID: \"f9fa104a-4979-4023-8d7e-a965f11bc7db\") " pod="openshift-multus/multus-additional-cni-plugins-xpzrz"
Mar 18 08:47:49.357753 master-0 kubenswrapper[3986]: I0318 08:47:49.357741 3986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/f9fa104a-4979-4023-8d7e-a965f11bc7db-cni-binary-copy\") pod \"multus-additional-cni-plugins-xpzrz\" (UID: \"f9fa104a-4979-4023-8d7e-a965f11bc7db\") " pod="openshift-multus/multus-additional-cni-plugins-xpzrz"
Mar 18 08:47:49.358808 master-0 kubenswrapper[3986]: I0318 08:47:49.358758 3986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/f9fa104a-4979-4023-8d7e-a965f11bc7db-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-xpzrz\" (UID: \"f9fa104a-4979-4023-8d7e-a965f11bc7db\") " pod="openshift-multus/multus-additional-cni-plugins-xpzrz"
Mar 18 08:47:49.386486 master-0 kubenswrapper[3986]: I0318 08:47:49.386414 3986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jlwg9\" (UniqueName: \"kubernetes.io/projected/f9fa104a-4979-4023-8d7e-a965f11bc7db-kube-api-access-jlwg9\") pod \"multus-additional-cni-plugins-xpzrz\" (UID: \"f9fa104a-4979-4023-8d7e-a965f11bc7db\") " pod="openshift-multus/multus-additional-cni-plugins-xpzrz"
Mar 18 08:47:49.423128 master-0 kubenswrapper[3986]: I0318 08:47:49.423044 3986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-xpzrz"
Mar 18 08:47:49.442013 master-0 kubenswrapper[3986]: W0318 08:47:49.441907 3986 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf9fa104a_4979_4023_8d7e_a965f11bc7db.slice/crio-0e06ef30b0d712353cac23adca2af0b5ab657ead19ee838202a1a4e15b1021cb WatchSource:0}: Error finding container 0e06ef30b0d712353cac23adca2af0b5ab657ead19ee838202a1a4e15b1021cb: Status 404 returned error can't find the container with id 0e06ef30b0d712353cac23adca2af0b5ab657ead19ee838202a1a4e15b1021cb
Mar 18 08:47:49.814026 master-0 kubenswrapper[3986]: I0318 08:47:49.813882 3986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-bpf5c" event={"ID":"fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4","Type":"ContainerStarted","Data":"c1e8680fcd730f22fac4464d7e2e919f0d68259c2072f7e2c075736c7c9f888d"}
Mar 18 08:47:49.815975 master-0 kubenswrapper[3986]: I0318 08:47:49.815812 3986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-xpzrz" event={"ID":"f9fa104a-4979-4023-8d7e-a965f11bc7db","Type":"ContainerStarted","Data":"0e06ef30b0d712353cac23adca2af0b5ab657ead19ee838202a1a4e15b1021cb"}
Mar 18 08:47:49.880153 master-0 kubenswrapper[3986]: I0318 08:47:49.880065 3986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-6x85n"]
Mar 18 08:47:49.880827 master-0 kubenswrapper[3986]: I0318 08:47:49.880770 3986 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-multus/network-metrics-daemon-6x85n" Mar 18 08:47:49.881066 master-0 kubenswrapper[3986]: E0318 08:47:49.881006 3986 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6x85n" podUID="d858bfd6-e69b-4c93-a6d7-95cc0fc3ca29" Mar 18 08:47:49.962301 master-0 kubenswrapper[3986]: I0318 08:47:49.962183 3986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d858bfd6-e69b-4c93-a6d7-95cc0fc3ca29-metrics-certs\") pod \"network-metrics-daemon-6x85n\" (UID: \"d858bfd6-e69b-4c93-a6d7-95cc0fc3ca29\") " pod="openshift-multus/network-metrics-daemon-6x85n" Mar 18 08:47:49.962301 master-0 kubenswrapper[3986]: I0318 08:47:49.962260 3986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x6zq8\" (UniqueName: \"kubernetes.io/projected/d858bfd6-e69b-4c93-a6d7-95cc0fc3ca29-kube-api-access-x6zq8\") pod \"network-metrics-daemon-6x85n\" (UID: \"d858bfd6-e69b-4c93-a6d7-95cc0fc3ca29\") " pod="openshift-multus/network-metrics-daemon-6x85n" Mar 18 08:47:50.063320 master-0 kubenswrapper[3986]: I0318 08:47:50.063208 3986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d858bfd6-e69b-4c93-a6d7-95cc0fc3ca29-metrics-certs\") pod \"network-metrics-daemon-6x85n\" (UID: \"d858bfd6-e69b-4c93-a6d7-95cc0fc3ca29\") " pod="openshift-multus/network-metrics-daemon-6x85n" Mar 18 08:47:50.063320 master-0 kubenswrapper[3986]: I0318 08:47:50.063314 3986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x6zq8\" 
(UniqueName: \"kubernetes.io/projected/d858bfd6-e69b-4c93-a6d7-95cc0fc3ca29-kube-api-access-x6zq8\") pod \"network-metrics-daemon-6x85n\" (UID: \"d858bfd6-e69b-4c93-a6d7-95cc0fc3ca29\") " pod="openshift-multus/network-metrics-daemon-6x85n" Mar 18 08:47:50.063755 master-0 kubenswrapper[3986]: E0318 08:47:50.063677 3986 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Mar 18 08:47:50.063909 master-0 kubenswrapper[3986]: E0318 08:47:50.063829 3986 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d858bfd6-e69b-4c93-a6d7-95cc0fc3ca29-metrics-certs podName:d858bfd6-e69b-4c93-a6d7-95cc0fc3ca29 nodeName:}" failed. No retries permitted until 2026-03-18 08:47:50.563791195 +0000 UTC m=+101.970961287 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/d858bfd6-e69b-4c93-a6d7-95cc0fc3ca29-metrics-certs") pod "network-metrics-daemon-6x85n" (UID: "d858bfd6-e69b-4c93-a6d7-95cc0fc3ca29") : object "openshift-multus"/"metrics-daemon-secret" not registered Mar 18 08:47:50.097615 master-0 kubenswrapper[3986]: I0318 08:47:50.097436 3986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x6zq8\" (UniqueName: \"kubernetes.io/projected/d858bfd6-e69b-4c93-a6d7-95cc0fc3ca29-kube-api-access-x6zq8\") pod \"network-metrics-daemon-6x85n\" (UID: \"d858bfd6-e69b-4c93-a6d7-95cc0fc3ca29\") " pod="openshift-multus/network-metrics-daemon-6x85n" Mar 18 08:47:50.567204 master-0 kubenswrapper[3986]: I0318 08:47:50.567111 3986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d858bfd6-e69b-4c93-a6d7-95cc0fc3ca29-metrics-certs\") pod \"network-metrics-daemon-6x85n\" (UID: \"d858bfd6-e69b-4c93-a6d7-95cc0fc3ca29\") " pod="openshift-multus/network-metrics-daemon-6x85n" Mar 18 08:47:50.567429 master-0 
kubenswrapper[3986]: E0318 08:47:50.567266 3986 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Mar 18 08:47:50.567429 master-0 kubenswrapper[3986]: E0318 08:47:50.567334 3986 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d858bfd6-e69b-4c93-a6d7-95cc0fc3ca29-metrics-certs podName:d858bfd6-e69b-4c93-a6d7-95cc0fc3ca29 nodeName:}" failed. No retries permitted until 2026-03-18 08:47:51.567317105 +0000 UTC m=+102.974487187 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/d858bfd6-e69b-4c93-a6d7-95cc0fc3ca29-metrics-certs") pod "network-metrics-daemon-6x85n" (UID: "d858bfd6-e69b-4c93-a6d7-95cc0fc3ca29") : object "openshift-multus"/"metrics-daemon-secret" not registered Mar 18 08:47:51.427379 master-0 kubenswrapper[3986]: I0318 08:47:51.427300 3986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6x85n" Mar 18 08:47:51.427878 master-0 kubenswrapper[3986]: E0318 08:47:51.427555 3986 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-6x85n" podUID="d858bfd6-e69b-4c93-a6d7-95cc0fc3ca29" Mar 18 08:47:51.574762 master-0 kubenswrapper[3986]: I0318 08:47:51.574698 3986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d858bfd6-e69b-4c93-a6d7-95cc0fc3ca29-metrics-certs\") pod \"network-metrics-daemon-6x85n\" (UID: \"d858bfd6-e69b-4c93-a6d7-95cc0fc3ca29\") " pod="openshift-multus/network-metrics-daemon-6x85n" Mar 18 08:47:51.574953 master-0 kubenswrapper[3986]: E0318 08:47:51.574922 3986 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Mar 18 08:47:51.575030 master-0 kubenswrapper[3986]: E0318 08:47:51.574998 3986 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d858bfd6-e69b-4c93-a6d7-95cc0fc3ca29-metrics-certs podName:d858bfd6-e69b-4c93-a6d7-95cc0fc3ca29 nodeName:}" failed. No retries permitted until 2026-03-18 08:47:53.574975639 +0000 UTC m=+104.982145721 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/d858bfd6-e69b-4c93-a6d7-95cc0fc3ca29-metrics-certs") pod "network-metrics-daemon-6x85n" (UID: "d858bfd6-e69b-4c93-a6d7-95cc0fc3ca29") : object "openshift-multus"/"metrics-daemon-secret" not registered Mar 18 08:47:52.825656 master-0 kubenswrapper[3986]: I0318 08:47:52.825228 3986 generic.go:334] "Generic (PLEG): container finished" podID="f9fa104a-4979-4023-8d7e-a965f11bc7db" containerID="adde235643fbff8c27e9f475aac6b49079f9d822aa89abb8fde8b8cfe9cfc68c" exitCode=0 Mar 18 08:47:52.825656 master-0 kubenswrapper[3986]: I0318 08:47:52.825295 3986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-xpzrz" event={"ID":"f9fa104a-4979-4023-8d7e-a965f11bc7db","Type":"ContainerDied","Data":"adde235643fbff8c27e9f475aac6b49079f9d822aa89abb8fde8b8cfe9cfc68c"} Mar 18 08:47:53.427972 master-0 kubenswrapper[3986]: I0318 08:47:53.426973 3986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6x85n" Mar 18 08:47:53.427972 master-0 kubenswrapper[3986]: E0318 08:47:53.427173 3986 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-6x85n" podUID="d858bfd6-e69b-4c93-a6d7-95cc0fc3ca29" Mar 18 08:47:53.590209 master-0 kubenswrapper[3986]: I0318 08:47:53.590149 3986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d858bfd6-e69b-4c93-a6d7-95cc0fc3ca29-metrics-certs\") pod \"network-metrics-daemon-6x85n\" (UID: \"d858bfd6-e69b-4c93-a6d7-95cc0fc3ca29\") " pod="openshift-multus/network-metrics-daemon-6x85n" Mar 18 08:47:53.590579 master-0 kubenswrapper[3986]: E0318 08:47:53.590265 3986 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Mar 18 08:47:53.590579 master-0 kubenswrapper[3986]: E0318 08:47:53.590322 3986 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d858bfd6-e69b-4c93-a6d7-95cc0fc3ca29-metrics-certs podName:d858bfd6-e69b-4c93-a6d7-95cc0fc3ca29 nodeName:}" failed. No retries permitted until 2026-03-18 08:47:57.59030684 +0000 UTC m=+108.997476922 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/d858bfd6-e69b-4c93-a6d7-95cc0fc3ca29-metrics-certs") pod "network-metrics-daemon-6x85n" (UID: "d858bfd6-e69b-4c93-a6d7-95cc0fc3ca29") : object "openshift-multus"/"metrics-daemon-secret" not registered Mar 18 08:47:55.427351 master-0 kubenswrapper[3986]: I0318 08:47:55.427271 3986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6x85n" Mar 18 08:47:55.427878 master-0 kubenswrapper[3986]: E0318 08:47:55.427593 3986 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-6x85n" podUID="d858bfd6-e69b-4c93-a6d7-95cc0fc3ca29" Mar 18 08:47:57.427562 master-0 kubenswrapper[3986]: I0318 08:47:57.427494 3986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6x85n" Mar 18 08:47:57.428073 master-0 kubenswrapper[3986]: E0318 08:47:57.427718 3986 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6x85n" podUID="d858bfd6-e69b-4c93-a6d7-95cc0fc3ca29" Mar 18 08:47:57.621390 master-0 kubenswrapper[3986]: I0318 08:47:57.621295 3986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d858bfd6-e69b-4c93-a6d7-95cc0fc3ca29-metrics-certs\") pod \"network-metrics-daemon-6x85n\" (UID: \"d858bfd6-e69b-4c93-a6d7-95cc0fc3ca29\") " pod="openshift-multus/network-metrics-daemon-6x85n" Mar 18 08:47:57.621578 master-0 kubenswrapper[3986]: E0318 08:47:57.621542 3986 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Mar 18 08:47:57.621672 master-0 kubenswrapper[3986]: E0318 08:47:57.621644 3986 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d858bfd6-e69b-4c93-a6d7-95cc0fc3ca29-metrics-certs podName:d858bfd6-e69b-4c93-a6d7-95cc0fc3ca29 nodeName:}" failed. No retries permitted until 2026-03-18 08:48:05.621616057 +0000 UTC m=+117.028786159 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/d858bfd6-e69b-4c93-a6d7-95cc0fc3ca29-metrics-certs") pod "network-metrics-daemon-6x85n" (UID: "d858bfd6-e69b-4c93-a6d7-95cc0fc3ca29") : object "openshift-multus"/"metrics-daemon-secret" not registered Mar 18 08:47:59.426870 master-0 kubenswrapper[3986]: I0318 08:47:59.426814 3986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6x85n" Mar 18 08:47:59.427534 master-0 kubenswrapper[3986]: E0318 08:47:59.427251 3986 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6x85n" podUID="d858bfd6-e69b-4c93-a6d7-95cc0fc3ca29" Mar 18 08:48:01.273762 master-0 kubenswrapper[3986]: I0318 08:48:01.271164 3986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-bwqt7"] Mar 18 08:48:01.273762 master-0 kubenswrapper[3986]: I0318 08:48:01.271608 3986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-bwqt7" Mar 18 08:48:01.273762 master-0 kubenswrapper[3986]: I0318 08:48:01.272957 3986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Mar 18 08:48:01.276004 master-0 kubenswrapper[3986]: I0318 08:48:01.274259 3986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Mar 18 08:48:01.276004 master-0 kubenswrapper[3986]: I0318 08:48:01.274351 3986 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Mar 18 08:48:01.276004 master-0 kubenswrapper[3986]: I0318 08:48:01.274663 3986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Mar 18 08:48:01.276004 master-0 kubenswrapper[3986]: I0318 08:48:01.275101 3986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Mar 18 08:48:01.351158 master-0 kubenswrapper[3986]: I0318 08:48:01.351075 3986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/edc7f629-4288-443b-aa8e-78bc6a09c848-ovnkube-config\") pod \"ovnkube-control-plane-57f769d897-bwqt7\" (UID: \"edc7f629-4288-443b-aa8e-78bc6a09c848\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-bwqt7" Mar 18 08:48:01.351158 master-0 kubenswrapper[3986]: I0318 08:48:01.351117 3986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/edc7f629-4288-443b-aa8e-78bc6a09c848-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-57f769d897-bwqt7\" (UID: \"edc7f629-4288-443b-aa8e-78bc6a09c848\") " 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-bwqt7" Mar 18 08:48:01.351158 master-0 kubenswrapper[3986]: I0318 08:48:01.351136 3986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/edc7f629-4288-443b-aa8e-78bc6a09c848-env-overrides\") pod \"ovnkube-control-plane-57f769d897-bwqt7\" (UID: \"edc7f629-4288-443b-aa8e-78bc6a09c848\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-bwqt7" Mar 18 08:48:01.351158 master-0 kubenswrapper[3986]: I0318 08:48:01.351154 3986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-glt6c\" (UniqueName: \"kubernetes.io/projected/edc7f629-4288-443b-aa8e-78bc6a09c848-kube-api-access-glt6c\") pod \"ovnkube-control-plane-57f769d897-bwqt7\" (UID: \"edc7f629-4288-443b-aa8e-78bc6a09c848\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-bwqt7" Mar 18 08:48:01.427546 master-0 kubenswrapper[3986]: I0318 08:48:01.427496 3986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6x85n" Mar 18 08:48:01.427797 master-0 kubenswrapper[3986]: E0318 08:48:01.427646 3986 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-6x85n" podUID="d858bfd6-e69b-4c93-a6d7-95cc0fc3ca29" Mar 18 08:48:01.452359 master-0 kubenswrapper[3986]: I0318 08:48:01.452041 3986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/edc7f629-4288-443b-aa8e-78bc6a09c848-ovnkube-config\") pod \"ovnkube-control-plane-57f769d897-bwqt7\" (UID: \"edc7f629-4288-443b-aa8e-78bc6a09c848\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-bwqt7" Mar 18 08:48:01.452359 master-0 kubenswrapper[3986]: I0318 08:48:01.452090 3986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/edc7f629-4288-443b-aa8e-78bc6a09c848-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-57f769d897-bwqt7\" (UID: \"edc7f629-4288-443b-aa8e-78bc6a09c848\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-bwqt7" Mar 18 08:48:01.452359 master-0 kubenswrapper[3986]: I0318 08:48:01.452293 3986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/edc7f629-4288-443b-aa8e-78bc6a09c848-env-overrides\") pod \"ovnkube-control-plane-57f769d897-bwqt7\" (UID: \"edc7f629-4288-443b-aa8e-78bc6a09c848\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-bwqt7" Mar 18 08:48:01.452359 master-0 kubenswrapper[3986]: I0318 08:48:01.452377 3986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-glt6c\" (UniqueName: \"kubernetes.io/projected/edc7f629-4288-443b-aa8e-78bc6a09c848-kube-api-access-glt6c\") pod \"ovnkube-control-plane-57f769d897-bwqt7\" (UID: \"edc7f629-4288-443b-aa8e-78bc6a09c848\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-bwqt7" Mar 18 08:48:01.452765 master-0 kubenswrapper[3986]: I0318 08:48:01.452720 3986 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/edc7f629-4288-443b-aa8e-78bc6a09c848-env-overrides\") pod \"ovnkube-control-plane-57f769d897-bwqt7\" (UID: \"edc7f629-4288-443b-aa8e-78bc6a09c848\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-bwqt7" Mar 18 08:48:01.452765 master-0 kubenswrapper[3986]: I0318 08:48:01.452752 3986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/edc7f629-4288-443b-aa8e-78bc6a09c848-ovnkube-config\") pod \"ovnkube-control-plane-57f769d897-bwqt7\" (UID: \"edc7f629-4288-443b-aa8e-78bc6a09c848\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-bwqt7" Mar 18 08:48:01.456869 master-0 kubenswrapper[3986]: I0318 08:48:01.456149 3986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/edc7f629-4288-443b-aa8e-78bc6a09c848-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-57f769d897-bwqt7\" (UID: \"edc7f629-4288-443b-aa8e-78bc6a09c848\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-bwqt7" Mar 18 08:48:01.468435 master-0 kubenswrapper[3986]: I0318 08:48:01.468395 3986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-glt6c\" (UniqueName: \"kubernetes.io/projected/edc7f629-4288-443b-aa8e-78bc6a09c848-kube-api-access-glt6c\") pod \"ovnkube-control-plane-57f769d897-bwqt7\" (UID: \"edc7f629-4288-443b-aa8e-78bc6a09c848\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-bwqt7" Mar 18 08:48:01.491372 master-0 kubenswrapper[3986]: I0318 08:48:01.491317 3986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-fk7vj"] Mar 18 08:48:01.495880 master-0 kubenswrapper[3986]: I0318 08:48:01.493071 3986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-fk7vj" Mar 18 08:48:01.496599 master-0 kubenswrapper[3986]: I0318 08:48:01.496555 3986 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Mar 18 08:48:01.496599 master-0 kubenswrapper[3986]: I0318 08:48:01.496587 3986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Mar 18 08:48:01.552828 master-0 kubenswrapper[3986]: I0318 08:48:01.552693 3986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4-log-socket\") pod \"ovnkube-node-fk7vj\" (UID: \"824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4\") " pod="openshift-ovn-kubernetes/ovnkube-node-fk7vj" Mar 18 08:48:01.552828 master-0 kubenswrapper[3986]: I0318 08:48:01.552740 3986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4-host-cni-netd\") pod \"ovnkube-node-fk7vj\" (UID: \"824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4\") " pod="openshift-ovn-kubernetes/ovnkube-node-fk7vj" Mar 18 08:48:01.552828 master-0 kubenswrapper[3986]: I0318 08:48:01.552758 3986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4-run-ovn\") pod \"ovnkube-node-fk7vj\" (UID: \"824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4\") " pod="openshift-ovn-kubernetes/ovnkube-node-fk7vj" Mar 18 08:48:01.552828 master-0 kubenswrapper[3986]: I0318 08:48:01.552777 3986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4-ovn-node-metrics-cert\") pod 
\"ovnkube-node-fk7vj\" (UID: \"824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4\") " pod="openshift-ovn-kubernetes/ovnkube-node-fk7vj" Mar 18 08:48:01.553100 master-0 kubenswrapper[3986]: I0318 08:48:01.552891 3986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dhf6l\" (UniqueName: \"kubernetes.io/projected/824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4-kube-api-access-dhf6l\") pod \"ovnkube-node-fk7vj\" (UID: \"824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4\") " pod="openshift-ovn-kubernetes/ovnkube-node-fk7vj" Mar 18 08:48:01.553100 master-0 kubenswrapper[3986]: I0318 08:48:01.552962 3986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4-host-slash\") pod \"ovnkube-node-fk7vj\" (UID: \"824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4\") " pod="openshift-ovn-kubernetes/ovnkube-node-fk7vj" Mar 18 08:48:01.553100 master-0 kubenswrapper[3986]: I0318 08:48:01.552992 3986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4-host-run-netns\") pod \"ovnkube-node-fk7vj\" (UID: \"824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4\") " pod="openshift-ovn-kubernetes/ovnkube-node-fk7vj" Mar 18 08:48:01.553100 master-0 kubenswrapper[3986]: I0318 08:48:01.553023 3986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4-run-systemd\") pod \"ovnkube-node-fk7vj\" (UID: \"824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4\") " pod="openshift-ovn-kubernetes/ovnkube-node-fk7vj" Mar 18 08:48:01.553100 master-0 kubenswrapper[3986]: I0318 08:48:01.553054 3986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: 
\"kubernetes.io/host-path/824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4-host-cni-bin\") pod \"ovnkube-node-fk7vj\" (UID: \"824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4\") " pod="openshift-ovn-kubernetes/ovnkube-node-fk7vj" Mar 18 08:48:01.553100 master-0 kubenswrapper[3986]: I0318 08:48:01.553085 3986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4-host-kubelet\") pod \"ovnkube-node-fk7vj\" (UID: \"824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4\") " pod="openshift-ovn-kubernetes/ovnkube-node-fk7vj" Mar 18 08:48:01.553258 master-0 kubenswrapper[3986]: I0318 08:48:01.553112 3986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4-etc-openvswitch\") pod \"ovnkube-node-fk7vj\" (UID: \"824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4\") " pod="openshift-ovn-kubernetes/ovnkube-node-fk7vj" Mar 18 08:48:01.553258 master-0 kubenswrapper[3986]: I0318 08:48:01.553141 3986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4-host-run-ovn-kubernetes\") pod \"ovnkube-node-fk7vj\" (UID: \"824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4\") " pod="openshift-ovn-kubernetes/ovnkube-node-fk7vj" Mar 18 08:48:01.553258 master-0 kubenswrapper[3986]: I0318 08:48:01.553169 3986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-fk7vj\" (UID: \"824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4\") " pod="openshift-ovn-kubernetes/ovnkube-node-fk7vj" Mar 18 08:48:01.553258 master-0 kubenswrapper[3986]: I0318 
08:48:01.553210 3986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4-systemd-units\") pod \"ovnkube-node-fk7vj\" (UID: \"824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4\") " pod="openshift-ovn-kubernetes/ovnkube-node-fk7vj" Mar 18 08:48:01.553258 master-0 kubenswrapper[3986]: I0318 08:48:01.553230 3986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4-ovnkube-script-lib\") pod \"ovnkube-node-fk7vj\" (UID: \"824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4\") " pod="openshift-ovn-kubernetes/ovnkube-node-fk7vj" Mar 18 08:48:01.553389 master-0 kubenswrapper[3986]: I0318 08:48:01.553270 3986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4-env-overrides\") pod \"ovnkube-node-fk7vj\" (UID: \"824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4\") " pod="openshift-ovn-kubernetes/ovnkube-node-fk7vj" Mar 18 08:48:01.553389 master-0 kubenswrapper[3986]: I0318 08:48:01.553292 3986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4-run-openvswitch\") pod \"ovnkube-node-fk7vj\" (UID: \"824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4\") " pod="openshift-ovn-kubernetes/ovnkube-node-fk7vj" Mar 18 08:48:01.553389 master-0 kubenswrapper[3986]: I0318 08:48:01.553319 3986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4-node-log\") pod \"ovnkube-node-fk7vj\" (UID: \"824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-fk7vj" Mar 18 08:48:01.553389 master-0 kubenswrapper[3986]: I0318 08:48:01.553340 3986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4-ovnkube-config\") pod \"ovnkube-node-fk7vj\" (UID: \"824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4\") " pod="openshift-ovn-kubernetes/ovnkube-node-fk7vj" Mar 18 08:48:01.553389 master-0 kubenswrapper[3986]: I0318 08:48:01.553375 3986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4-var-lib-openvswitch\") pod \"ovnkube-node-fk7vj\" (UID: \"824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4\") " pod="openshift-ovn-kubernetes/ovnkube-node-fk7vj" Mar 18 08:48:01.587975 master-0 kubenswrapper[3986]: I0318 08:48:01.587916 3986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-bwqt7" Mar 18 08:48:01.654165 master-0 kubenswrapper[3986]: I0318 08:48:01.654126 3986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4-run-openvswitch\") pod \"ovnkube-node-fk7vj\" (UID: \"824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4\") " pod="openshift-ovn-kubernetes/ovnkube-node-fk7vj" Mar 18 08:48:01.654165 master-0 kubenswrapper[3986]: I0318 08:48:01.654164 3986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4-node-log\") pod \"ovnkube-node-fk7vj\" (UID: \"824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4\") " pod="openshift-ovn-kubernetes/ovnkube-node-fk7vj" Mar 18 08:48:01.654371 master-0 kubenswrapper[3986]: I0318 08:48:01.654183 3986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4-ovnkube-config\") pod \"ovnkube-node-fk7vj\" (UID: \"824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4\") " pod="openshift-ovn-kubernetes/ovnkube-node-fk7vj" Mar 18 08:48:01.654371 master-0 kubenswrapper[3986]: I0318 08:48:01.654207 3986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4-var-lib-openvswitch\") pod \"ovnkube-node-fk7vj\" (UID: \"824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4\") " pod="openshift-ovn-kubernetes/ovnkube-node-fk7vj" Mar 18 08:48:01.654371 master-0 kubenswrapper[3986]: I0318 08:48:01.654230 3986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4-log-socket\") pod \"ovnkube-node-fk7vj\" (UID: 
\"824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4\") " pod="openshift-ovn-kubernetes/ovnkube-node-fk7vj" Mar 18 08:48:01.654371 master-0 kubenswrapper[3986]: I0318 08:48:01.654243 3986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4-host-cni-netd\") pod \"ovnkube-node-fk7vj\" (UID: \"824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4\") " pod="openshift-ovn-kubernetes/ovnkube-node-fk7vj" Mar 18 08:48:01.654371 master-0 kubenswrapper[3986]: I0318 08:48:01.654257 3986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4-run-ovn\") pod \"ovnkube-node-fk7vj\" (UID: \"824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4\") " pod="openshift-ovn-kubernetes/ovnkube-node-fk7vj" Mar 18 08:48:01.654371 master-0 kubenswrapper[3986]: I0318 08:48:01.654270 3986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4-ovn-node-metrics-cert\") pod \"ovnkube-node-fk7vj\" (UID: \"824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4\") " pod="openshift-ovn-kubernetes/ovnkube-node-fk7vj" Mar 18 08:48:01.654371 master-0 kubenswrapper[3986]: I0318 08:48:01.654283 3986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dhf6l\" (UniqueName: \"kubernetes.io/projected/824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4-kube-api-access-dhf6l\") pod \"ovnkube-node-fk7vj\" (UID: \"824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4\") " pod="openshift-ovn-kubernetes/ovnkube-node-fk7vj" Mar 18 08:48:01.654371 master-0 kubenswrapper[3986]: I0318 08:48:01.654297 3986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4-host-slash\") pod \"ovnkube-node-fk7vj\" (UID: 
\"824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4\") " pod="openshift-ovn-kubernetes/ovnkube-node-fk7vj" Mar 18 08:48:01.654371 master-0 kubenswrapper[3986]: I0318 08:48:01.654310 3986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4-host-run-netns\") pod \"ovnkube-node-fk7vj\" (UID: \"824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4\") " pod="openshift-ovn-kubernetes/ovnkube-node-fk7vj" Mar 18 08:48:01.654371 master-0 kubenswrapper[3986]: I0318 08:48:01.654325 3986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4-run-systemd\") pod \"ovnkube-node-fk7vj\" (UID: \"824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4\") " pod="openshift-ovn-kubernetes/ovnkube-node-fk7vj" Mar 18 08:48:01.654371 master-0 kubenswrapper[3986]: I0318 08:48:01.654337 3986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4-host-cni-bin\") pod \"ovnkube-node-fk7vj\" (UID: \"824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4\") " pod="openshift-ovn-kubernetes/ovnkube-node-fk7vj" Mar 18 08:48:01.654371 master-0 kubenswrapper[3986]: I0318 08:48:01.654351 3986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4-etc-openvswitch\") pod \"ovnkube-node-fk7vj\" (UID: \"824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4\") " pod="openshift-ovn-kubernetes/ovnkube-node-fk7vj" Mar 18 08:48:01.654371 master-0 kubenswrapper[3986]: I0318 08:48:01.654364 3986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4-host-run-ovn-kubernetes\") pod \"ovnkube-node-fk7vj\" (UID: 
\"824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4\") " pod="openshift-ovn-kubernetes/ovnkube-node-fk7vj" Mar 18 08:48:01.654371 master-0 kubenswrapper[3986]: I0318 08:48:01.654382 3986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-fk7vj\" (UID: \"824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4\") " pod="openshift-ovn-kubernetes/ovnkube-node-fk7vj" Mar 18 08:48:01.654727 master-0 kubenswrapper[3986]: I0318 08:48:01.654398 3986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4-host-kubelet\") pod \"ovnkube-node-fk7vj\" (UID: \"824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4\") " pod="openshift-ovn-kubernetes/ovnkube-node-fk7vj" Mar 18 08:48:01.654727 master-0 kubenswrapper[3986]: I0318 08:48:01.654412 3986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4-ovnkube-script-lib\") pod \"ovnkube-node-fk7vj\" (UID: \"824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4\") " pod="openshift-ovn-kubernetes/ovnkube-node-fk7vj" Mar 18 08:48:01.654727 master-0 kubenswrapper[3986]: I0318 08:48:01.654434 3986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4-systemd-units\") pod \"ovnkube-node-fk7vj\" (UID: \"824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4\") " pod="openshift-ovn-kubernetes/ovnkube-node-fk7vj" Mar 18 08:48:01.654727 master-0 kubenswrapper[3986]: I0318 08:48:01.654449 3986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4-env-overrides\") 
pod \"ovnkube-node-fk7vj\" (UID: \"824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4\") " pod="openshift-ovn-kubernetes/ovnkube-node-fk7vj" Mar 18 08:48:01.654992 master-0 kubenswrapper[3986]: I0318 08:48:01.654975 3986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4-env-overrides\") pod \"ovnkube-node-fk7vj\" (UID: \"824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4\") " pod="openshift-ovn-kubernetes/ovnkube-node-fk7vj" Mar 18 08:48:01.655030 master-0 kubenswrapper[3986]: I0318 08:48:01.655022 3986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4-run-openvswitch\") pod \"ovnkube-node-fk7vj\" (UID: \"824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4\") " pod="openshift-ovn-kubernetes/ovnkube-node-fk7vj" Mar 18 08:48:01.655065 master-0 kubenswrapper[3986]: I0318 08:48:01.655044 3986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4-node-log\") pod \"ovnkube-node-fk7vj\" (UID: \"824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4\") " pod="openshift-ovn-kubernetes/ovnkube-node-fk7vj" Mar 18 08:48:01.655452 master-0 kubenswrapper[3986]: I0318 08:48:01.655423 3986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4-ovnkube-config\") pod \"ovnkube-node-fk7vj\" (UID: \"824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4\") " pod="openshift-ovn-kubernetes/ovnkube-node-fk7vj" Mar 18 08:48:01.655490 master-0 kubenswrapper[3986]: I0318 08:48:01.655463 3986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4-var-lib-openvswitch\") pod \"ovnkube-node-fk7vj\" (UID: 
\"824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4\") " pod="openshift-ovn-kubernetes/ovnkube-node-fk7vj" Mar 18 08:48:01.655490 master-0 kubenswrapper[3986]: I0318 08:48:01.655486 3986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4-log-socket\") pod \"ovnkube-node-fk7vj\" (UID: \"824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4\") " pod="openshift-ovn-kubernetes/ovnkube-node-fk7vj" Mar 18 08:48:01.655543 master-0 kubenswrapper[3986]: I0318 08:48:01.655506 3986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4-host-cni-netd\") pod \"ovnkube-node-fk7vj\" (UID: \"824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4\") " pod="openshift-ovn-kubernetes/ovnkube-node-fk7vj" Mar 18 08:48:01.655543 master-0 kubenswrapper[3986]: I0318 08:48:01.655525 3986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4-run-ovn\") pod \"ovnkube-node-fk7vj\" (UID: \"824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4\") " pod="openshift-ovn-kubernetes/ovnkube-node-fk7vj" Mar 18 08:48:01.655933 master-0 kubenswrapper[3986]: I0318 08:48:01.655898 3986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4-etc-openvswitch\") pod \"ovnkube-node-fk7vj\" (UID: \"824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4\") " pod="openshift-ovn-kubernetes/ovnkube-node-fk7vj" Mar 18 08:48:01.655988 master-0 kubenswrapper[3986]: I0318 08:48:01.655943 3986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4-host-cni-bin\") pod \"ovnkube-node-fk7vj\" (UID: \"824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-fk7vj" Mar 18 08:48:01.656027 master-0 kubenswrapper[3986]: I0318 08:48:01.655999 3986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4-host-run-ovn-kubernetes\") pod \"ovnkube-node-fk7vj\" (UID: \"824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4\") " pod="openshift-ovn-kubernetes/ovnkube-node-fk7vj" Mar 18 08:48:01.656027 master-0 kubenswrapper[3986]: I0318 08:48:01.655999 3986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4-host-kubelet\") pod \"ovnkube-node-fk7vj\" (UID: \"824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4\") " pod="openshift-ovn-kubernetes/ovnkube-node-fk7vj" Mar 18 08:48:01.656027 master-0 kubenswrapper[3986]: I0318 08:48:01.656023 3986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4-host-slash\") pod \"ovnkube-node-fk7vj\" (UID: \"824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4\") " pod="openshift-ovn-kubernetes/ovnkube-node-fk7vj" Mar 18 08:48:01.656123 master-0 kubenswrapper[3986]: I0318 08:48:01.656021 3986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-fk7vj\" (UID: \"824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4\") " pod="openshift-ovn-kubernetes/ovnkube-node-fk7vj" Mar 18 08:48:01.656123 master-0 kubenswrapper[3986]: I0318 08:48:01.656059 3986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4-run-systemd\") pod \"ovnkube-node-fk7vj\" (UID: \"824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-fk7vj" Mar 18 08:48:01.656123 master-0 kubenswrapper[3986]: I0318 08:48:01.656065 3986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4-host-run-netns\") pod \"ovnkube-node-fk7vj\" (UID: \"824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4\") " pod="openshift-ovn-kubernetes/ovnkube-node-fk7vj" Mar 18 08:48:01.656123 master-0 kubenswrapper[3986]: I0318 08:48:01.656098 3986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4-systemd-units\") pod \"ovnkube-node-fk7vj\" (UID: \"824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4\") " pod="openshift-ovn-kubernetes/ovnkube-node-fk7vj" Mar 18 08:48:01.656464 master-0 kubenswrapper[3986]: I0318 08:48:01.656432 3986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4-ovnkube-script-lib\") pod \"ovnkube-node-fk7vj\" (UID: \"824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4\") " pod="openshift-ovn-kubernetes/ovnkube-node-fk7vj" Mar 18 08:48:01.660955 master-0 kubenswrapper[3986]: I0318 08:48:01.660896 3986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4-ovn-node-metrics-cert\") pod \"ovnkube-node-fk7vj\" (UID: \"824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4\") " pod="openshift-ovn-kubernetes/ovnkube-node-fk7vj" Mar 18 08:48:01.674473 master-0 kubenswrapper[3986]: I0318 08:48:01.674427 3986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dhf6l\" (UniqueName: \"kubernetes.io/projected/824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4-kube-api-access-dhf6l\") pod \"ovnkube-node-fk7vj\" (UID: \"824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-fk7vj" Mar 18 08:48:01.805134 master-0 kubenswrapper[3986]: I0318 08:48:01.804996 3986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-fk7vj" Mar 18 08:48:02.107815 master-0 kubenswrapper[3986]: W0318 08:48:02.107054 3986 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod824d7ce9_e7bd_41ba_b7b1_1811e0f0dec4.slice/crio-6f7e091cc6956264f5530fa4606adc44124201440fb69d366bae9e4dd97d842f WatchSource:0}: Error finding container 6f7e091cc6956264f5530fa4606adc44124201440fb69d366bae9e4dd97d842f: Status 404 returned error can't find the container with id 6f7e091cc6956264f5530fa4606adc44124201440fb69d366bae9e4dd97d842f Mar 18 08:48:02.852515 master-0 kubenswrapper[3986]: I0318 08:48:02.852448 3986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-bwqt7" event={"ID":"edc7f629-4288-443b-aa8e-78bc6a09c848","Type":"ContainerStarted","Data":"deb08914ec0d0cb0779c0f0c1a5ed4f3ff3d9143ed5a1430602f4f05e65bd6ab"} Mar 18 08:48:02.852515 master-0 kubenswrapper[3986]: I0318 08:48:02.852517 3986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-bwqt7" event={"ID":"edc7f629-4288-443b-aa8e-78bc6a09c848","Type":"ContainerStarted","Data":"00b7669c60621e059b9f2a3185ba93db56934e35fa8fa0713c09f3decdea9378"} Mar 18 08:48:02.855702 master-0 kubenswrapper[3986]: I0318 08:48:02.855673 3986 generic.go:334] "Generic (PLEG): container finished" podID="f9fa104a-4979-4023-8d7e-a965f11bc7db" containerID="087da5f6d44511af7f32a791cdbe22a09cb7c15552db037f0bacb605d9163341" exitCode=0 Mar 18 08:48:02.855775 master-0 kubenswrapper[3986]: I0318 08:48:02.855734 3986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-xpzrz" 
event={"ID":"f9fa104a-4979-4023-8d7e-a965f11bc7db","Type":"ContainerDied","Data":"087da5f6d44511af7f32a791cdbe22a09cb7c15552db037f0bacb605d9163341"} Mar 18 08:48:02.859675 master-0 kubenswrapper[3986]: I0318 08:48:02.859631 3986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-bpf5c" event={"ID":"fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4","Type":"ContainerStarted","Data":"dffbc077b9012473b99f55dd5d5bcdcebb01d303243b874995fd32950ae95c5a"} Mar 18 08:48:02.862237 master-0 kubenswrapper[3986]: I0318 08:48:02.862194 3986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fk7vj" event={"ID":"824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4","Type":"ContainerStarted","Data":"6f7e091cc6956264f5530fa4606adc44124201440fb69d366bae9e4dd97d842f"} Mar 18 08:48:02.889572 master-0 kubenswrapper[3986]: I0318 08:48:02.889502 3986 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-bpf5c" podStartSLOduration=1.941474827 podStartE2EDuration="14.889483624s" podCreationTimestamp="2026-03-18 08:47:48 +0000 UTC" firstStartedPulling="2026-03-18 08:47:49.233268165 +0000 UTC m=+100.640438287" lastFinishedPulling="2026-03-18 08:48:02.181277002 +0000 UTC m=+113.588447084" observedRunningTime="2026-03-18 08:48:02.889048523 +0000 UTC m=+114.296218605" watchObservedRunningTime="2026-03-18 08:48:02.889483624 +0000 UTC m=+114.296653696" Mar 18 08:48:03.166035 master-0 kubenswrapper[3986]: I0318 08:48:03.165930 3986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3d0b7f60-c32e-48a6-b9e9-87c8f018367d-serving-cert\") pod \"cluster-version-operator-56d8475767-2xjqg\" (UID: \"3d0b7f60-c32e-48a6-b9e9-87c8f018367d\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-2xjqg" Mar 18 08:48:03.166231 master-0 kubenswrapper[3986]: E0318 08:48:03.166095 3986 secret.go:189] Couldn't get secret 
openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found Mar 18 08:48:03.166231 master-0 kubenswrapper[3986]: E0318 08:48:03.166160 3986 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3d0b7f60-c32e-48a6-b9e9-87c8f018367d-serving-cert podName:3d0b7f60-c32e-48a6-b9e9-87c8f018367d nodeName:}" failed. No retries permitted until 2026-03-18 08:48:35.16614121 +0000 UTC m=+146.573311302 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/3d0b7f60-c32e-48a6-b9e9-87c8f018367d-serving-cert") pod "cluster-version-operator-56d8475767-2xjqg" (UID: "3d0b7f60-c32e-48a6-b9e9-87c8f018367d") : secret "cluster-version-operator-serving-cert" not found Mar 18 08:48:03.427053 master-0 kubenswrapper[3986]: I0318 08:48:03.426939 3986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6x85n" Mar 18 08:48:03.427213 master-0 kubenswrapper[3986]: E0318 08:48:03.427062 3986 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6x85n" podUID="d858bfd6-e69b-4c93-a6d7-95cc0fc3ca29" Mar 18 08:48:05.406824 master-0 kubenswrapper[3986]: I0318 08:48:05.406429 3986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-diagnostics/network-check-target-8b7l7"] Mar 18 08:48:05.407342 master-0 kubenswrapper[3986]: I0318 08:48:05.407319 3986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-8b7l7" Mar 18 08:48:05.409302 master-0 kubenswrapper[3986]: E0318 08:48:05.407456 3986 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-8b7l7" podUID="fc289a83-9a2e-404b-b148-605639362703" Mar 18 08:48:05.427444 master-0 kubenswrapper[3986]: I0318 08:48:05.427387 3986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6x85n" Mar 18 08:48:05.427598 master-0 kubenswrapper[3986]: E0318 08:48:05.427565 3986 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-6x85n" podUID="d858bfd6-e69b-4c93-a6d7-95cc0fc3ca29" Mar 18 08:48:05.486777 master-0 kubenswrapper[3986]: I0318 08:48:05.486702 3986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l7lrl\" (UniqueName: \"kubernetes.io/projected/fc289a83-9a2e-404b-b148-605639362703-kube-api-access-l7lrl\") pod \"network-check-target-8b7l7\" (UID: \"fc289a83-9a2e-404b-b148-605639362703\") " pod="openshift-network-diagnostics/network-check-target-8b7l7" Mar 18 08:48:05.587558 master-0 kubenswrapper[3986]: I0318 08:48:05.587472 3986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l7lrl\" (UniqueName: \"kubernetes.io/projected/fc289a83-9a2e-404b-b148-605639362703-kube-api-access-l7lrl\") pod \"network-check-target-8b7l7\" (UID: \"fc289a83-9a2e-404b-b148-605639362703\") " pod="openshift-network-diagnostics/network-check-target-8b7l7" Mar 18 08:48:05.689662 master-0 kubenswrapper[3986]: I0318 08:48:05.688253 3986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d858bfd6-e69b-4c93-a6d7-95cc0fc3ca29-metrics-certs\") pod \"network-metrics-daemon-6x85n\" (UID: \"d858bfd6-e69b-4c93-a6d7-95cc0fc3ca29\") " pod="openshift-multus/network-metrics-daemon-6x85n" Mar 18 08:48:05.689662 master-0 kubenswrapper[3986]: E0318 08:48:05.688413 3986 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Mar 18 08:48:05.689662 master-0 kubenswrapper[3986]: E0318 08:48:05.688477 3986 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d858bfd6-e69b-4c93-a6d7-95cc0fc3ca29-metrics-certs podName:d858bfd6-e69b-4c93-a6d7-95cc0fc3ca29 nodeName:}" failed. No retries permitted until 2026-03-18 08:48:21.688462298 +0000 UTC m=+133.095632380 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/d858bfd6-e69b-4c93-a6d7-95cc0fc3ca29-metrics-certs") pod "network-metrics-daemon-6x85n" (UID: "d858bfd6-e69b-4c93-a6d7-95cc0fc3ca29") : object "openshift-multus"/"metrics-daemon-secret" not registered Mar 18 08:48:05.689662 master-0 kubenswrapper[3986]: E0318 08:48:05.689403 3986 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 18 08:48:05.689662 master-0 kubenswrapper[3986]: E0318 08:48:05.689425 3986 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 18 08:48:05.689662 master-0 kubenswrapper[3986]: E0318 08:48:05.689439 3986 projected.go:194] Error preparing data for projected volume kube-api-access-l7lrl for pod openshift-network-diagnostics/network-check-target-8b7l7: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 18 08:48:05.689662 master-0 kubenswrapper[3986]: E0318 08:48:05.689492 3986 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/fc289a83-9a2e-404b-b148-605639362703-kube-api-access-l7lrl podName:fc289a83-9a2e-404b-b148-605639362703 nodeName:}" failed. No retries permitted until 2026-03-18 08:48:06.189477384 +0000 UTC m=+117.596647476 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-l7lrl" (UniqueName: "kubernetes.io/projected/fc289a83-9a2e-404b-b148-605639362703-kube-api-access-l7lrl") pod "network-check-target-8b7l7" (UID: "fc289a83-9a2e-404b-b148-605639362703") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 18 08:48:06.192232 master-0 kubenswrapper[3986]: I0318 08:48:06.192152 3986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l7lrl\" (UniqueName: \"kubernetes.io/projected/fc289a83-9a2e-404b-b148-605639362703-kube-api-access-l7lrl\") pod \"network-check-target-8b7l7\" (UID: \"fc289a83-9a2e-404b-b148-605639362703\") " pod="openshift-network-diagnostics/network-check-target-8b7l7" Mar 18 08:48:06.192453 master-0 kubenswrapper[3986]: E0318 08:48:06.192331 3986 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 18 08:48:06.192453 master-0 kubenswrapper[3986]: E0318 08:48:06.192354 3986 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 18 08:48:06.192453 master-0 kubenswrapper[3986]: E0318 08:48:06.192366 3986 projected.go:194] Error preparing data for projected volume kube-api-access-l7lrl for pod openshift-network-diagnostics/network-check-target-8b7l7: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 18 08:48:06.192453 master-0 kubenswrapper[3986]: E0318 08:48:06.192416 3986 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/fc289a83-9a2e-404b-b148-605639362703-kube-api-access-l7lrl podName:fc289a83-9a2e-404b-b148-605639362703 nodeName:}" 
failed. No retries permitted until 2026-03-18 08:48:07.192399807 +0000 UTC m=+118.599569889 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-l7lrl" (UniqueName: "kubernetes.io/projected/fc289a83-9a2e-404b-b148-605639362703-kube-api-access-l7lrl") pod "network-check-target-8b7l7" (UID: "fc289a83-9a2e-404b-b148-605639362703") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 18 08:48:06.873628 master-0 kubenswrapper[3986]: I0318 08:48:06.873576 3986 generic.go:334] "Generic (PLEG): container finished" podID="f9fa104a-4979-4023-8d7e-a965f11bc7db" containerID="4e7c826e1670b530a9fd33f7eb549f98d247eb166d6206beef67f781b2a470af" exitCode=0 Mar 18 08:48:06.873628 master-0 kubenswrapper[3986]: I0318 08:48:06.873620 3986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-xpzrz" event={"ID":"f9fa104a-4979-4023-8d7e-a965f11bc7db","Type":"ContainerDied","Data":"4e7c826e1670b530a9fd33f7eb549f98d247eb166d6206beef67f781b2a470af"} Mar 18 08:48:07.069262 master-0 kubenswrapper[3986]: I0318 08:48:07.069140 3986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-node-identity/network-node-identity-n5vqx"] Mar 18 08:48:07.069655 master-0 kubenswrapper[3986]: I0318 08:48:07.069627 3986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-node-identity/network-node-identity-n5vqx"
Mar 18 08:48:07.073443 master-0 kubenswrapper[3986]: I0318 08:48:07.072467 3986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt"
Mar 18 08:48:07.073443 master-0 kubenswrapper[3986]: I0318 08:48:07.072541 3986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm"
Mar 18 08:48:07.073443 master-0 kubenswrapper[3986]: I0318 08:48:07.072471 3986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt"
Mar 18 08:48:07.077320 master-0 kubenswrapper[3986]: I0318 08:48:07.077245 3986 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert"
Mar 18 08:48:07.083996 master-0 kubenswrapper[3986]: I0318 08:48:07.078995 3986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides"
Mar 18 08:48:07.201805 master-0 kubenswrapper[3986]: I0318 08:48:07.201481 3986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/16d633c5-e0aa-4fb6-83e0-a2e976334406-ovnkube-identity-cm\") pod \"network-node-identity-n5vqx\" (UID: \"16d633c5-e0aa-4fb6-83e0-a2e976334406\") " pod="openshift-network-node-identity/network-node-identity-n5vqx"
Mar 18 08:48:07.201805 master-0 kubenswrapper[3986]: I0318 08:48:07.201534 3986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/16d633c5-e0aa-4fb6-83e0-a2e976334406-webhook-cert\") pod \"network-node-identity-n5vqx\" (UID: \"16d633c5-e0aa-4fb6-83e0-a2e976334406\") " pod="openshift-network-node-identity/network-node-identity-n5vqx"
Mar 18 08:48:07.201805 master-0 kubenswrapper[3986]: I0318 08:48:07.201555 3986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/16d633c5-e0aa-4fb6-83e0-a2e976334406-env-overrides\") pod \"network-node-identity-n5vqx\" (UID: \"16d633c5-e0aa-4fb6-83e0-a2e976334406\") " pod="openshift-network-node-identity/network-node-identity-n5vqx"
Mar 18 08:48:07.201805 master-0 kubenswrapper[3986]: I0318 08:48:07.201619 3986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l7lrl\" (UniqueName: \"kubernetes.io/projected/fc289a83-9a2e-404b-b148-605639362703-kube-api-access-l7lrl\") pod \"network-check-target-8b7l7\" (UID: \"fc289a83-9a2e-404b-b148-605639362703\") " pod="openshift-network-diagnostics/network-check-target-8b7l7"
Mar 18 08:48:07.201805 master-0 kubenswrapper[3986]: I0318 08:48:07.201638 3986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x9w7l\" (UniqueName: \"kubernetes.io/projected/16d633c5-e0aa-4fb6-83e0-a2e976334406-kube-api-access-x9w7l\") pod \"network-node-identity-n5vqx\" (UID: \"16d633c5-e0aa-4fb6-83e0-a2e976334406\") " pod="openshift-network-node-identity/network-node-identity-n5vqx"
Mar 18 08:48:07.201805 master-0 kubenswrapper[3986]: E0318 08:48:07.201781 3986 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Mar 18 08:48:07.202127 master-0 kubenswrapper[3986]: E0318 08:48:07.201810 3986 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Mar 18 08:48:07.202127 master-0 kubenswrapper[3986]: E0318 08:48:07.201824 3986 projected.go:194] Error preparing data for projected volume kube-api-access-l7lrl for pod openshift-network-diagnostics/network-check-target-8b7l7: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Mar 18 08:48:07.202526 master-0 kubenswrapper[3986]: E0318 08:48:07.202257 3986 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/fc289a83-9a2e-404b-b148-605639362703-kube-api-access-l7lrl podName:fc289a83-9a2e-404b-b148-605639362703 nodeName:}" failed. No retries permitted until 2026-03-18 08:48:09.202241777 +0000 UTC m=+120.609411859 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-l7lrl" (UniqueName: "kubernetes.io/projected/fc289a83-9a2e-404b-b148-605639362703-kube-api-access-l7lrl") pod "network-check-target-8b7l7" (UID: "fc289a83-9a2e-404b-b148-605639362703") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Mar 18 08:48:07.302962 master-0 kubenswrapper[3986]: I0318 08:48:07.302886 3986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/16d633c5-e0aa-4fb6-83e0-a2e976334406-ovnkube-identity-cm\") pod \"network-node-identity-n5vqx\" (UID: \"16d633c5-e0aa-4fb6-83e0-a2e976334406\") " pod="openshift-network-node-identity/network-node-identity-n5vqx"
Mar 18 08:48:07.302962 master-0 kubenswrapper[3986]: I0318 08:48:07.302952 3986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/16d633c5-e0aa-4fb6-83e0-a2e976334406-webhook-cert\") pod \"network-node-identity-n5vqx\" (UID: \"16d633c5-e0aa-4fb6-83e0-a2e976334406\") " pod="openshift-network-node-identity/network-node-identity-n5vqx"
Mar 18 08:48:07.303158 master-0 kubenswrapper[3986]: I0318 08:48:07.302977 3986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/16d633c5-e0aa-4fb6-83e0-a2e976334406-env-overrides\") pod \"network-node-identity-n5vqx\" (UID: \"16d633c5-e0aa-4fb6-83e0-a2e976334406\") " pod="openshift-network-node-identity/network-node-identity-n5vqx"
Mar 18 08:48:07.303158 master-0 kubenswrapper[3986]: I0318 08:48:07.303034 3986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x9w7l\" (UniqueName: \"kubernetes.io/projected/16d633c5-e0aa-4fb6-83e0-a2e976334406-kube-api-access-x9w7l\") pod \"network-node-identity-n5vqx\" (UID: \"16d633c5-e0aa-4fb6-83e0-a2e976334406\") " pod="openshift-network-node-identity/network-node-identity-n5vqx"
Mar 18 08:48:07.304082 master-0 kubenswrapper[3986]: I0318 08:48:07.304042 3986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/16d633c5-e0aa-4fb6-83e0-a2e976334406-ovnkube-identity-cm\") pod \"network-node-identity-n5vqx\" (UID: \"16d633c5-e0aa-4fb6-83e0-a2e976334406\") " pod="openshift-network-node-identity/network-node-identity-n5vqx"
Mar 18 08:48:07.304377 master-0 kubenswrapper[3986]: I0318 08:48:07.304279 3986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/16d633c5-e0aa-4fb6-83e0-a2e976334406-env-overrides\") pod \"network-node-identity-n5vqx\" (UID: \"16d633c5-e0aa-4fb6-83e0-a2e976334406\") " pod="openshift-network-node-identity/network-node-identity-n5vqx"
Mar 18 08:48:07.308648 master-0 kubenswrapper[3986]: I0318 08:48:07.308610 3986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/16d633c5-e0aa-4fb6-83e0-a2e976334406-webhook-cert\") pod \"network-node-identity-n5vqx\" (UID: \"16d633c5-e0aa-4fb6-83e0-a2e976334406\") " pod="openshift-network-node-identity/network-node-identity-n5vqx"
Mar 18 08:48:07.321925 master-0 kubenswrapper[3986]: I0318 08:48:07.321873 3986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x9w7l\" (UniqueName: \"kubernetes.io/projected/16d633c5-e0aa-4fb6-83e0-a2e976334406-kube-api-access-x9w7l\") pod \"network-node-identity-n5vqx\" (UID: \"16d633c5-e0aa-4fb6-83e0-a2e976334406\") " pod="openshift-network-node-identity/network-node-identity-n5vqx"
Mar 18 08:48:07.390546 master-0 kubenswrapper[3986]: I0318 08:48:07.389915 3986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-n5vqx"
Mar 18 08:48:07.407962 master-0 kubenswrapper[3986]: W0318 08:48:07.407330 3986 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod16d633c5_e0aa_4fb6_83e0_a2e976334406.slice/crio-b48235a991ddd5e0dbc46936f4240a715253ffe775f0aa19da8ca60c7a3f2ca0 WatchSource:0}: Error finding container b48235a991ddd5e0dbc46936f4240a715253ffe775f0aa19da8ca60c7a3f2ca0: Status 404 returned error can't find the container with id b48235a991ddd5e0dbc46936f4240a715253ffe775f0aa19da8ca60c7a3f2ca0
Mar 18 08:48:07.427993 master-0 kubenswrapper[3986]: I0318 08:48:07.427595 3986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6x85n"
Mar 18 08:48:07.427993 master-0 kubenswrapper[3986]: E0318 08:48:07.427741 3986 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6x85n" podUID="d858bfd6-e69b-4c93-a6d7-95cc0fc3ca29"
Mar 18 08:48:07.427993 master-0 kubenswrapper[3986]: I0318 08:48:07.427793 3986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-8b7l7"
Mar 18 08:48:07.427993 master-0 kubenswrapper[3986]: E0318 08:48:07.427885 3986 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-8b7l7" podUID="fc289a83-9a2e-404b-b148-605639362703"
Mar 18 08:48:07.876901 master-0 kubenswrapper[3986]: I0318 08:48:07.876834 3986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-n5vqx" event={"ID":"16d633c5-e0aa-4fb6-83e0-a2e976334406","Type":"ContainerStarted","Data":"b48235a991ddd5e0dbc46936f4240a715253ffe775f0aa19da8ca60c7a3f2ca0"}
Mar 18 08:48:08.882936 master-0 kubenswrapper[3986]: I0318 08:48:08.882886 3986 generic.go:334] "Generic (PLEG): container finished" podID="f9fa104a-4979-4023-8d7e-a965f11bc7db" containerID="2d0a2c2dc41ce3fdaa0eb263dbdcc431c85c8b6b65a032320a020b41e4119800" exitCode=0
Mar 18 08:48:08.882936 master-0 kubenswrapper[3986]: I0318 08:48:08.882928 3986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-xpzrz" event={"ID":"f9fa104a-4979-4023-8d7e-a965f11bc7db","Type":"ContainerDied","Data":"2d0a2c2dc41ce3fdaa0eb263dbdcc431c85c8b6b65a032320a020b41e4119800"}
Mar 18 08:48:09.221138 master-0 kubenswrapper[3986]: I0318 08:48:09.220307 3986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l7lrl\" (UniqueName: \"kubernetes.io/projected/fc289a83-9a2e-404b-b148-605639362703-kube-api-access-l7lrl\") pod \"network-check-target-8b7l7\" (UID: \"fc289a83-9a2e-404b-b148-605639362703\") " pod="openshift-network-diagnostics/network-check-target-8b7l7"
Mar 18 08:48:09.221138 master-0 kubenswrapper[3986]: E0318 08:48:09.220483 3986 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Mar 18 08:48:09.221138 master-0 kubenswrapper[3986]: E0318 08:48:09.220501 3986 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Mar 18 08:48:09.221138 master-0 kubenswrapper[3986]: E0318 08:48:09.220512 3986 projected.go:194] Error preparing data for projected volume kube-api-access-l7lrl for pod openshift-network-diagnostics/network-check-target-8b7l7: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Mar 18 08:48:09.221138 master-0 kubenswrapper[3986]: E0318 08:48:09.220560 3986 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/fc289a83-9a2e-404b-b148-605639362703-kube-api-access-l7lrl podName:fc289a83-9a2e-404b-b148-605639362703 nodeName:}" failed. No retries permitted until 2026-03-18 08:48:13.220545574 +0000 UTC m=+124.627715656 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-l7lrl" (UniqueName: "kubernetes.io/projected/fc289a83-9a2e-404b-b148-605639362703-kube-api-access-l7lrl") pod "network-check-target-8b7l7" (UID: "fc289a83-9a2e-404b-b148-605639362703") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Mar 18 08:48:09.598558 master-0 kubenswrapper[3986]: I0318 08:48:09.598261 3986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6x85n"
Mar 18 08:48:09.599009 master-0 kubenswrapper[3986]: E0318 08:48:09.598845 3986 kubelet_node_status.go:497] "Node not becoming ready in time after startup"
Mar 18 08:48:09.599406 master-0 kubenswrapper[3986]: E0318 08:48:09.599042 3986 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6x85n" podUID="d858bfd6-e69b-4c93-a6d7-95cc0fc3ca29"
Mar 18 08:48:09.599406 master-0 kubenswrapper[3986]: I0318 08:48:09.599141 3986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-8b7l7"
Mar 18 08:48:09.599406 master-0 kubenswrapper[3986]: E0318 08:48:09.599267 3986 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-8b7l7" podUID="fc289a83-9a2e-404b-b148-605639362703"
Mar 18 08:48:11.428712 master-0 kubenswrapper[3986]: I0318 08:48:11.427150 3986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-8b7l7"
Mar 18 08:48:11.428712 master-0 kubenswrapper[3986]: I0318 08:48:11.427210 3986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6x85n"
Mar 18 08:48:11.428712 master-0 kubenswrapper[3986]: E0318 08:48:11.427295 3986 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-8b7l7" podUID="fc289a83-9a2e-404b-b148-605639362703"
Mar 18 08:48:11.428712 master-0 kubenswrapper[3986]: E0318 08:48:11.427396 3986 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6x85n" podUID="d858bfd6-e69b-4c93-a6d7-95cc0fc3ca29"
Mar 18 08:48:13.231607 master-0 kubenswrapper[3986]: I0318 08:48:13.231491 3986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l7lrl\" (UniqueName: \"kubernetes.io/projected/fc289a83-9a2e-404b-b148-605639362703-kube-api-access-l7lrl\") pod \"network-check-target-8b7l7\" (UID: \"fc289a83-9a2e-404b-b148-605639362703\") " pod="openshift-network-diagnostics/network-check-target-8b7l7"
Mar 18 08:48:13.232422 master-0 kubenswrapper[3986]: E0318 08:48:13.231634 3986 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Mar 18 08:48:13.232422 master-0 kubenswrapper[3986]: E0318 08:48:13.231652 3986 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Mar 18 08:48:13.232422 master-0 kubenswrapper[3986]: E0318 08:48:13.231662 3986 projected.go:194] Error preparing data for projected volume kube-api-access-l7lrl for pod openshift-network-diagnostics/network-check-target-8b7l7: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Mar 18 08:48:13.232422 master-0 kubenswrapper[3986]: E0318 08:48:13.231710 3986 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/fc289a83-9a2e-404b-b148-605639362703-kube-api-access-l7lrl podName:fc289a83-9a2e-404b-b148-605639362703 nodeName:}" failed. No retries permitted until 2026-03-18 08:48:21.231697379 +0000 UTC m=+132.638867461 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-l7lrl" (UniqueName: "kubernetes.io/projected/fc289a83-9a2e-404b-b148-605639362703-kube-api-access-l7lrl") pod "network-check-target-8b7l7" (UID: "fc289a83-9a2e-404b-b148-605639362703") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Mar 18 08:48:13.430290 master-0 kubenswrapper[3986]: I0318 08:48:13.430230 3986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6x85n"
Mar 18 08:48:13.430467 master-0 kubenswrapper[3986]: E0318 08:48:13.430362 3986 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6x85n" podUID="d858bfd6-e69b-4c93-a6d7-95cc0fc3ca29"
Mar 18 08:48:13.430690 master-0 kubenswrapper[3986]: I0318 08:48:13.430636 3986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-8b7l7"
Mar 18 08:48:13.430840 master-0 kubenswrapper[3986]: E0318 08:48:13.430801 3986 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-8b7l7" podUID="fc289a83-9a2e-404b-b148-605639362703"
Mar 18 08:48:14.599933 master-0 kubenswrapper[3986]: E0318 08:48:14.599837 3986 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Mar 18 08:48:15.426901 master-0 kubenswrapper[3986]: I0318 08:48:15.426834 3986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6x85n"
Mar 18 08:48:15.426901 master-0 kubenswrapper[3986]: I0318 08:48:15.426900 3986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-8b7l7"
Mar 18 08:48:15.427320 master-0 kubenswrapper[3986]: E0318 08:48:15.427000 3986 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6x85n" podUID="d858bfd6-e69b-4c93-a6d7-95cc0fc3ca29"
Mar 18 08:48:15.427676 master-0 kubenswrapper[3986]: E0318 08:48:15.427589 3986 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-8b7l7" podUID="fc289a83-9a2e-404b-b148-605639362703"
Mar 18 08:48:17.427654 master-0 kubenswrapper[3986]: I0318 08:48:17.427548 3986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-8b7l7"
Mar 18 08:48:17.428912 master-0 kubenswrapper[3986]: I0318 08:48:17.427700 3986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6x85n"
Mar 18 08:48:17.428912 master-0 kubenswrapper[3986]: E0318 08:48:17.427706 3986 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-8b7l7" podUID="fc289a83-9a2e-404b-b148-605639362703"
Mar 18 08:48:17.428912 master-0 kubenswrapper[3986]: E0318 08:48:17.428079 3986 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6x85n" podUID="d858bfd6-e69b-4c93-a6d7-95cc0fc3ca29"
Mar 18 08:48:19.427298 master-0 kubenswrapper[3986]: I0318 08:48:19.427244 3986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-8b7l7"
Mar 18 08:48:19.427298 master-0 kubenswrapper[3986]: I0318 08:48:19.427244 3986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6x85n"
Mar 18 08:48:19.427901 master-0 kubenswrapper[3986]: E0318 08:48:19.427876 3986 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-8b7l7" podUID="fc289a83-9a2e-404b-b148-605639362703"
Mar 18 08:48:19.428111 master-0 kubenswrapper[3986]: E0318 08:48:19.428014 3986 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6x85n" podUID="d858bfd6-e69b-4c93-a6d7-95cc0fc3ca29"
Mar 18 08:48:19.600891 master-0 kubenswrapper[3986]: E0318 08:48:19.600591 3986 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Mar 18 08:48:21.307360 master-0 kubenswrapper[3986]: I0318 08:48:21.307313 3986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l7lrl\" (UniqueName: \"kubernetes.io/projected/fc289a83-9a2e-404b-b148-605639362703-kube-api-access-l7lrl\") pod \"network-check-target-8b7l7\" (UID: \"fc289a83-9a2e-404b-b148-605639362703\") " pod="openshift-network-diagnostics/network-check-target-8b7l7"
Mar 18 08:48:21.307843 master-0 kubenswrapper[3986]: E0318 08:48:21.307493 3986 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Mar 18 08:48:21.307843 master-0 kubenswrapper[3986]: E0318 08:48:21.307510 3986 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Mar 18 08:48:21.307843 master-0 kubenswrapper[3986]: E0318 08:48:21.307520 3986 projected.go:194] Error preparing data for projected volume kube-api-access-l7lrl for pod openshift-network-diagnostics/network-check-target-8b7l7: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Mar 18 08:48:21.307843 master-0 kubenswrapper[3986]: E0318 08:48:21.307599 3986 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/fc289a83-9a2e-404b-b148-605639362703-kube-api-access-l7lrl podName:fc289a83-9a2e-404b-b148-605639362703 nodeName:}" failed. No retries permitted until 2026-03-18 08:48:37.307585181 +0000 UTC m=+148.714755263 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-l7lrl" (UniqueName: "kubernetes.io/projected/fc289a83-9a2e-404b-b148-605639362703-kube-api-access-l7lrl") pod "network-check-target-8b7l7" (UID: "fc289a83-9a2e-404b-b148-605639362703") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Mar 18 08:48:21.426776 master-0 kubenswrapper[3986]: I0318 08:48:21.426711 3986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6x85n"
Mar 18 08:48:21.426776 master-0 kubenswrapper[3986]: I0318 08:48:21.426763 3986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-8b7l7"
Mar 18 08:48:21.427035 master-0 kubenswrapper[3986]: E0318 08:48:21.426886 3986 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6x85n" podUID="d858bfd6-e69b-4c93-a6d7-95cc0fc3ca29"
Mar 18 08:48:21.427035 master-0 kubenswrapper[3986]: E0318 08:48:21.426965 3986 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-8b7l7" podUID="fc289a83-9a2e-404b-b148-605639362703"
Mar 18 08:48:21.710416 master-0 kubenswrapper[3986]: I0318 08:48:21.710283 3986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d858bfd6-e69b-4c93-a6d7-95cc0fc3ca29-metrics-certs\") pod \"network-metrics-daemon-6x85n\" (UID: \"d858bfd6-e69b-4c93-a6d7-95cc0fc3ca29\") " pod="openshift-multus/network-metrics-daemon-6x85n"
Mar 18 08:48:21.710416 master-0 kubenswrapper[3986]: E0318 08:48:21.710346 3986 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Mar 18 08:48:21.710595 master-0 kubenswrapper[3986]: E0318 08:48:21.710436 3986 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d858bfd6-e69b-4c93-a6d7-95cc0fc3ca29-metrics-certs podName:d858bfd6-e69b-4c93-a6d7-95cc0fc3ca29 nodeName:}" failed. No retries permitted until 2026-03-18 08:48:53.710419971 +0000 UTC m=+165.117590053 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/d858bfd6-e69b-4c93-a6d7-95cc0fc3ca29-metrics-certs") pod "network-metrics-daemon-6x85n" (UID: "d858bfd6-e69b-4c93-a6d7-95cc0fc3ca29") : object "openshift-multus"/"metrics-daemon-secret" not registered
Mar 18 08:48:22.932542 master-0 kubenswrapper[3986]: I0318 08:48:22.932416 3986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-bwqt7" event={"ID":"edc7f629-4288-443b-aa8e-78bc6a09c848","Type":"ContainerStarted","Data":"2816dd0a3b2639d48151bf75dfb86759dbb1c466295c4e9c83f4f4ac853eb6f8"}
Mar 18 08:48:22.939592 master-0 kubenswrapper[3986]: I0318 08:48:22.939502 3986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-n5vqx" event={"ID":"16d633c5-e0aa-4fb6-83e0-a2e976334406","Type":"ContainerStarted","Data":"9d4723f8591cc64ff0653aec9e9efb152a03ef27364e5787d1d3d8ff7d6020e4"}
Mar 18 08:48:22.939592 master-0 kubenswrapper[3986]: I0318 08:48:22.939552 3986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-n5vqx" event={"ID":"16d633c5-e0aa-4fb6-83e0-a2e976334406","Type":"ContainerStarted","Data":"e737f1bc7f7696082917f9ab0937f75fe99ddfdff924ef1fafe8d8c57401d526"}
Mar 18 08:48:22.944988 master-0 kubenswrapper[3986]: I0318 08:48:22.944161 3986 generic.go:334] "Generic (PLEG): container finished" podID="f9fa104a-4979-4023-8d7e-a965f11bc7db" containerID="5ff838c2d5ef301a4d391cdf94caa10d8ed9cf1ecae148154167ecb368e38ae1" exitCode=0
Mar 18 08:48:22.944988 master-0 kubenswrapper[3986]: I0318 08:48:22.944231 3986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-xpzrz" event={"ID":"f9fa104a-4979-4023-8d7e-a965f11bc7db","Type":"ContainerDied","Data":"5ff838c2d5ef301a4d391cdf94caa10d8ed9cf1ecae148154167ecb368e38ae1"}
Mar 18 08:48:22.947120 master-0 kubenswrapper[3986]: I0318 08:48:22.947013 3986 generic.go:334] "Generic (PLEG): container finished" podID="824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4" containerID="97663fadd354644f1be4baa688422665bf9e5c17814bf5baea78674093569de8" exitCode=0
Mar 18 08:48:22.947120 master-0 kubenswrapper[3986]: I0318 08:48:22.947079 3986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fk7vj" event={"ID":"824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4","Type":"ContainerDied","Data":"97663fadd354644f1be4baa688422665bf9e5c17814bf5baea78674093569de8"}
Mar 18 08:48:22.992319 master-0 kubenswrapper[3986]: I0318 08:48:22.992178 3986 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-bwqt7" podStartSLOduration=1.870985476 podStartE2EDuration="21.992153696s" podCreationTimestamp="2026-03-18 08:48:01 +0000 UTC" firstStartedPulling="2026-03-18 08:48:02.288209319 +0000 UTC m=+113.695379401" lastFinishedPulling="2026-03-18 08:48:22.409377539 +0000 UTC m=+133.816547621" observedRunningTime="2026-03-18 08:48:22.95046323 +0000 UTC m=+134.357633362" watchObservedRunningTime="2026-03-18 08:48:22.992153696 +0000 UTC m=+134.399323788"
Mar 18 08:48:23.060000 master-0 kubenswrapper[3986]: I0318 08:48:23.056820 3986 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-network-node-identity/network-node-identity-n5vqx" podStartSLOduration=1.090562581 podStartE2EDuration="16.056786393s" podCreationTimestamp="2026-03-18 08:48:07 +0000 UTC" firstStartedPulling="2026-03-18 08:48:07.410696646 +0000 UTC m=+118.817866728" lastFinishedPulling="2026-03-18 08:48:22.376920458 +0000 UTC m=+133.784090540" observedRunningTime="2026-03-18 08:48:23.015487667 +0000 UTC m=+134.422657779" watchObservedRunningTime="2026-03-18 08:48:23.056786393 +0000 UTC m=+134.463956555"
Mar 18 08:48:23.431068 master-0 kubenswrapper[3986]: I0318 08:48:23.429004 3986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6x85n"
Mar 18 08:48:23.431068 master-0 kubenswrapper[3986]: I0318 08:48:23.429124 3986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-8b7l7"
Mar 18 08:48:23.431068 master-0 kubenswrapper[3986]: E0318 08:48:23.429464 3986 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6x85n" podUID="d858bfd6-e69b-4c93-a6d7-95cc0fc3ca29"
Mar 18 08:48:23.431068 master-0 kubenswrapper[3986]: E0318 08:48:23.429674 3986 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-8b7l7" podUID="fc289a83-9a2e-404b-b148-605639362703"
Mar 18 08:48:23.954723 master-0 kubenswrapper[3986]: I0318 08:48:23.954572 3986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fk7vj" event={"ID":"824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4","Type":"ContainerStarted","Data":"8cb5653af0b594d3bf3f7fe8a3074e5cc4ad5ac373787ce8176591db02bd6f0e"}
Mar 18 08:48:23.954723 master-0 kubenswrapper[3986]: I0318 08:48:23.954644 3986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fk7vj" event={"ID":"824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4","Type":"ContainerStarted","Data":"33a34ca8c482f924483736fa5791eeae38cd3286d691804af3e19ac52202b81b"}
Mar 18 08:48:23.954723 master-0 kubenswrapper[3986]: I0318 08:48:23.954667 3986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fk7vj" event={"ID":"824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4","Type":"ContainerStarted","Data":"eaab1ff96382a50237b8b272affe76c2722c73879981cd56f9e6f7b3d85937cc"}
Mar 18 08:48:23.954723 master-0 kubenswrapper[3986]: I0318 08:48:23.954686 3986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fk7vj" event={"ID":"824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4","Type":"ContainerStarted","Data":"92b5e9f513e106f14135b5c76c6ac1e43974c3eb14ab62f94ace9906fda39d18"}
Mar 18 08:48:23.954723 master-0 kubenswrapper[3986]: I0318 08:48:23.954706 3986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fk7vj" event={"ID":"824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4","Type":"ContainerStarted","Data":"d8a91c4a44989f4812dd7ac354de9969484381c4e7266113795f9ff2399ce5db"}
Mar 18 08:48:23.954723 master-0 kubenswrapper[3986]: I0318 08:48:23.954724 3986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fk7vj" event={"ID":"824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4","Type":"ContainerStarted","Data":"78d1a785d7e310b8a1bf5695969aab6968f0c1f3e38248f24697d1c019bdceac"}
Mar 18 08:48:23.962084 master-0 kubenswrapper[3986]: I0318 08:48:23.962030 3986 generic.go:334] "Generic (PLEG): container finished" podID="f9fa104a-4979-4023-8d7e-a965f11bc7db" containerID="b90404fea2dcee705335febe9902c2cb9057e6f3ac0a9b235a9e5ecb1660d666" exitCode=0
Mar 18 08:48:23.962220 master-0 kubenswrapper[3986]: I0318 08:48:23.962145 3986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-xpzrz" event={"ID":"f9fa104a-4979-4023-8d7e-a965f11bc7db","Type":"ContainerDied","Data":"b90404fea2dcee705335febe9902c2cb9057e6f3ac0a9b235a9e5ecb1660d666"}
Mar 18 08:48:24.602253 master-0 kubenswrapper[3986]: E0318 08:48:24.602108 3986 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Mar 18 08:48:24.974357 master-0 kubenswrapper[3986]: I0318 08:48:24.974168 3986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-xpzrz" event={"ID":"f9fa104a-4979-4023-8d7e-a965f11bc7db","Type":"ContainerStarted","Data":"b6e3866f1001a63156a21823f4a82e5ce3ca16405a91bcc53f64f43e52ae1f91"}
Mar 18 08:48:25.427204 master-0 kubenswrapper[3986]: I0318 08:48:25.427130 3986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6x85n"
Mar 18 08:48:25.427461 master-0 kubenswrapper[3986]: I0318 08:48:25.427208 3986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-8b7l7"
Mar 18 08:48:25.427461 master-0 kubenswrapper[3986]: E0318 08:48:25.427307 3986 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6x85n" podUID="d858bfd6-e69b-4c93-a6d7-95cc0fc3ca29"
Mar 18 08:48:25.427575 master-0 kubenswrapper[3986]: E0318 08:48:25.427539 3986 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-8b7l7" podUID="fc289a83-9a2e-404b-b148-605639362703"
Mar 18 08:48:25.445736 master-0 kubenswrapper[3986]: I0318 08:48:25.445639 3986 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-xpzrz" podStartSLOduration=3.586686728 podStartE2EDuration="36.445608321s" podCreationTimestamp="2026-03-18 08:47:49 +0000 UTC" firstStartedPulling="2026-03-18 08:47:49.445400266 +0000 UTC m=+100.852570388" lastFinishedPulling="2026-03-18 08:48:22.304321889 +0000 UTC m=+133.711491981" observedRunningTime="2026-03-18 08:48:25.006591654 +0000 UTC m=+136.413761816" watchObservedRunningTime="2026-03-18 08:48:25.445608321 +0000 UTC m=+136.852778403"
Mar 18 08:48:25.446505 master-0 kubenswrapper[3986]: I0318 08:48:25.446463 3986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"]
Mar 18 08:48:25.982381 master-0 kubenswrapper[3986]: I0318 08:48:25.982332 3986 kubelet.go:2453] "SyncLoop
(PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fk7vj" event={"ID":"824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4","Type":"ContainerStarted","Data":"8525272ae57fa1704461815bfe4fbd996a299145b716fba6e370e1c466c00d6c"} Mar 18 08:48:27.427570 master-0 kubenswrapper[3986]: I0318 08:48:27.427469 3986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-8b7l7" Mar 18 08:48:27.428188 master-0 kubenswrapper[3986]: I0318 08:48:27.427491 3986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6x85n" Mar 18 08:48:27.428188 master-0 kubenswrapper[3986]: E0318 08:48:27.427685 3986 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-8b7l7" podUID="fc289a83-9a2e-404b-b148-605639362703" Mar 18 08:48:27.428188 master-0 kubenswrapper[3986]: E0318 08:48:27.427818 3986 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-6x85n" podUID="d858bfd6-e69b-4c93-a6d7-95cc0fc3ca29" Mar 18 08:48:28.999161 master-0 kubenswrapper[3986]: I0318 08:48:28.997964 3986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fk7vj" event={"ID":"824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4","Type":"ContainerStarted","Data":"159d5da7df861d4690d299ee8141f1bfbf00878beec9987a371bdd22a4166912"} Mar 18 08:48:28.999161 master-0 kubenswrapper[3986]: I0318 08:48:28.998394 3986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-fk7vj" Mar 18 08:48:28.999161 master-0 kubenswrapper[3986]: I0318 08:48:28.998426 3986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-fk7vj" Mar 18 08:48:29.033744 master-0 kubenswrapper[3986]: I0318 08:48:29.033673 3986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-fk7vj" Mar 18 08:48:29.246905 master-0 kubenswrapper[3986]: I0318 08:48:29.243837 3986 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-fk7vj"] Mar 18 08:48:29.427537 master-0 kubenswrapper[3986]: I0318 08:48:29.427264 3986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6x85n" Mar 18 08:48:29.427537 master-0 kubenswrapper[3986]: E0318 08:48:29.427496 3986 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6x85n" podUID="d858bfd6-e69b-4c93-a6d7-95cc0fc3ca29" Mar 18 08:48:29.427814 master-0 kubenswrapper[3986]: I0318 08:48:29.427766 3986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-8b7l7" Mar 18 08:48:29.427918 master-0 kubenswrapper[3986]: E0318 08:48:29.427896 3986 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-8b7l7" podUID="fc289a83-9a2e-404b-b148-605639362703" Mar 18 08:48:29.603130 master-0 kubenswrapper[3986]: E0318 08:48:29.603026 3986 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 18 08:48:29.754402 master-0 kubenswrapper[3986]: I0318 08:48:29.754317 3986 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-fk7vj" podStartSLOduration=8.523781549 podStartE2EDuration="28.754299667s" podCreationTimestamp="2026-03-18 08:48:01 +0000 UTC" firstStartedPulling="2026-03-18 08:48:02.110431839 +0000 UTC m=+113.517601921" lastFinishedPulling="2026-03-18 08:48:22.340949957 +0000 UTC m=+133.748120039" observedRunningTime="2026-03-18 08:48:29.56495897 +0000 UTC m=+140.972129122" watchObservedRunningTime="2026-03-18 08:48:29.754299667 +0000 UTC m=+141.161469759" Mar 18 08:48:29.798424 master-0 kubenswrapper[3986]: I0318 08:48:29.798318 3986 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" podStartSLOduration=4.798296979 podStartE2EDuration="4.798296979s" podCreationTimestamp="2026-03-18 08:48:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-03-18 08:48:29.755582705 +0000 UTC m=+141.162752787" watchObservedRunningTime="2026-03-18 08:48:29.798296979 +0000 UTC m=+141.205467061" Mar 18 08:48:30.001940 master-0 kubenswrapper[3986]: I0318 08:48:30.001593 3986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-fk7vj" Mar 18 08:48:30.026134 master-0 kubenswrapper[3986]: I0318 08:48:30.026047 3986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-fk7vj" Mar 18 08:48:30.943219 master-0 kubenswrapper[3986]: I0318 08:48:30.943174 3986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-network-diagnostics/network-check-target-8b7l7"] Mar 18 08:48:30.943416 master-0 kubenswrapper[3986]: I0318 08:48:30.943295 3986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-8b7l7" Mar 18 08:48:30.943416 master-0 kubenswrapper[3986]: E0318 08:48:30.943381 3986 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-8b7l7" podUID="fc289a83-9a2e-404b-b148-605639362703" Mar 18 08:48:30.945812 master-0 kubenswrapper[3986]: I0318 08:48:30.945783 3986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-6x85n"] Mar 18 08:48:30.945931 master-0 kubenswrapper[3986]: I0318 08:48:30.945886 3986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-6x85n" Mar 18 08:48:30.945998 master-0 kubenswrapper[3986]: E0318 08:48:30.945971 3986 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6x85n" podUID="d858bfd6-e69b-4c93-a6d7-95cc0fc3ca29" Mar 18 08:48:31.004143 master-0 kubenswrapper[3986]: I0318 08:48:31.004071 3986 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-fk7vj" podUID="824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4" containerName="northd" containerID="cri-o://33a34ca8c482f924483736fa5791eeae38cd3286d691804af3e19ac52202b81b" gracePeriod=30 Mar 18 08:48:31.004625 master-0 kubenswrapper[3986]: I0318 08:48:31.004151 3986 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-fk7vj" podUID="824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://eaab1ff96382a50237b8b272affe76c2722c73879981cd56f9e6f7b3d85937cc" gracePeriod=30 Mar 18 08:48:31.004625 master-0 kubenswrapper[3986]: I0318 08:48:31.004269 3986 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-fk7vj" podUID="824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4" containerName="kube-rbac-proxy-node" containerID="cri-o://92b5e9f513e106f14135b5c76c6ac1e43974c3eb14ab62f94ace9906fda39d18" gracePeriod=30 Mar 18 08:48:31.004625 master-0 kubenswrapper[3986]: I0318 08:48:31.004286 3986 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-fk7vj" podUID="824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4" containerName="nbdb" 
containerID="cri-o://8cb5653af0b594d3bf3f7fe8a3074e5cc4ad5ac373787ce8176591db02bd6f0e" gracePeriod=30 Mar 18 08:48:31.004625 master-0 kubenswrapper[3986]: I0318 08:48:31.004274 3986 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-fk7vj" podUID="824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4" containerName="sbdb" containerID="cri-o://8525272ae57fa1704461815bfe4fbd996a299145b716fba6e370e1c466c00d6c" gracePeriod=30 Mar 18 08:48:31.004625 master-0 kubenswrapper[3986]: I0318 08:48:31.004052 3986 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-fk7vj" podUID="824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4" containerName="ovn-controller" containerID="cri-o://78d1a785d7e310b8a1bf5695969aab6968f0c1f3e38248f24697d1c019bdceac" gracePeriod=30 Mar 18 08:48:31.004625 master-0 kubenswrapper[3986]: I0318 08:48:31.004334 3986 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-fk7vj" podUID="824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4" containerName="ovn-acl-logging" containerID="cri-o://d8a91c4a44989f4812dd7ac354de9969484381c4e7266113795f9ff2399ce5db" gracePeriod=30 Mar 18 08:48:31.039456 master-0 kubenswrapper[3986]: I0318 08:48:31.039394 3986 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-fk7vj" podUID="824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4" containerName="ovnkube-controller" containerID="cri-o://159d5da7df861d4690d299ee8141f1bfbf00878beec9987a371bdd22a4166912" gracePeriod=30 Mar 18 08:48:31.776122 master-0 kubenswrapper[3986]: I0318 08:48:31.775646 3986 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fk7vj_824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4/ovnkube-controller/0.log" Mar 18 08:48:31.779323 master-0 kubenswrapper[3986]: I0318 08:48:31.779258 3986 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fk7vj_824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4/kube-rbac-proxy-ovn-metrics/0.log" Mar 18 08:48:31.780278 master-0 kubenswrapper[3986]: I0318 08:48:31.780216 3986 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fk7vj_824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4/kube-rbac-proxy-node/0.log" Mar 18 08:48:31.781001 master-0 kubenswrapper[3986]: I0318 08:48:31.780950 3986 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fk7vj_824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4/ovn-acl-logging/0.log" Mar 18 08:48:31.781812 master-0 kubenswrapper[3986]: I0318 08:48:31.781760 3986 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fk7vj_824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4/ovn-controller/0.log" Mar 18 08:48:31.782480 master-0 kubenswrapper[3986]: I0318 08:48:31.782430 3986 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-fk7vj" Mar 18 08:48:31.808291 master-0 kubenswrapper[3986]: I0318 08:48:31.808226 3986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dhf6l\" (UniqueName: \"kubernetes.io/projected/824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4-kube-api-access-dhf6l\") pod \"824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4\" (UID: \"824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4\") " Mar 18 08:48:31.808409 master-0 kubenswrapper[3986]: I0318 08:48:31.808290 3986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4-ovnkube-config\") pod \"824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4\" (UID: \"824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4\") " Mar 18 08:48:31.808409 master-0 kubenswrapper[3986]: I0318 08:48:31.808331 3986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" 
(UniqueName: \"kubernetes.io/host-path/824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4-run-systemd\") pod \"824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4\" (UID: \"824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4\") " Mar 18 08:48:31.808409 master-0 kubenswrapper[3986]: I0318 08:48:31.808364 3986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4-host-cni-bin\") pod \"824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4\" (UID: \"824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4\") " Mar 18 08:48:31.808409 master-0 kubenswrapper[3986]: I0318 08:48:31.808396 3986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4-env-overrides\") pod \"824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4\" (UID: \"824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4\") " Mar 18 08:48:31.808663 master-0 kubenswrapper[3986]: I0318 08:48:31.808431 3986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4-host-kubelet\") pod \"824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4\" (UID: \"824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4\") " Mar 18 08:48:31.808663 master-0 kubenswrapper[3986]: I0318 08:48:31.808468 3986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4-etc-openvswitch\") pod \"824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4\" (UID: \"824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4\") " Mar 18 08:48:31.808663 master-0 kubenswrapper[3986]: I0318 08:48:31.808499 3986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4-run-openvswitch\") pod \"824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4\" (UID: 
\"824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4\") " Mar 18 08:48:31.808663 master-0 kubenswrapper[3986]: I0318 08:48:31.808532 3986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4-log-socket\") pod \"824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4\" (UID: \"824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4\") " Mar 18 08:48:31.808663 master-0 kubenswrapper[3986]: I0318 08:48:31.808564 3986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4-host-run-ovn-kubernetes\") pod \"824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4\" (UID: \"824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4\") " Mar 18 08:48:31.808663 master-0 kubenswrapper[3986]: I0318 08:48:31.808594 3986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4-host-slash\") pod \"824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4\" (UID: \"824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4\") " Mar 18 08:48:31.808663 master-0 kubenswrapper[3986]: I0318 08:48:31.808622 3986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4-host-run-netns\") pod \"824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4\" (UID: \"824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4\") " Mar 18 08:48:31.808663 master-0 kubenswrapper[3986]: I0318 08:48:31.808649 3986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4-node-log\") pod \"824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4\" (UID: \"824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4\") " Mar 18 08:48:31.809252 master-0 kubenswrapper[3986]: I0318 08:48:31.808769 3986 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4-ovnkube-script-lib\") pod \"824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4\" (UID: \"824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4\") " Mar 18 08:48:31.809252 master-0 kubenswrapper[3986]: I0318 08:48:31.808893 3986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4-ovn-node-metrics-cert\") pod \"824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4\" (UID: \"824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4\") " Mar 18 08:48:31.809252 master-0 kubenswrapper[3986]: I0318 08:48:31.808927 3986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4-run-ovn\") pod \"824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4\" (UID: \"824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4\") " Mar 18 08:48:31.809252 master-0 kubenswrapper[3986]: I0318 08:48:31.808961 3986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4-host-cni-netd\") pod \"824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4\" (UID: \"824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4\") " Mar 18 08:48:31.809252 master-0 kubenswrapper[3986]: I0318 08:48:31.808995 3986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4-host-var-lib-cni-networks-ovn-kubernetes\") pod \"824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4\" (UID: \"824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4\") " Mar 18 08:48:31.809252 master-0 kubenswrapper[3986]: I0318 08:48:31.809028 3986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4-var-lib-openvswitch\") pod \"824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4\" (UID: \"824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4\") " Mar 18 08:48:31.809252 master-0 kubenswrapper[3986]: I0318 08:48:31.809056 3986 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4-systemd-units\") pod \"824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4\" (UID: \"824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4\") " Mar 18 08:48:31.809252 master-0 kubenswrapper[3986]: I0318 08:48:31.809075 3986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4-log-socket" (OuterVolumeSpecName: "log-socket") pod "824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4" (UID: "824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 08:48:31.809252 master-0 kubenswrapper[3986]: I0318 08:48:31.809189 3986 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4-log-socket\") on node \"master-0\" DevicePath \"\"" Mar 18 08:48:31.809938 master-0 kubenswrapper[3986]: I0318 08:48:31.809272 3986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4" (UID: "824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4"). InnerVolumeSpecName "systemd-units". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 08:48:31.809938 master-0 kubenswrapper[3986]: I0318 08:48:31.809321 3986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4" (UID: "824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 08:48:31.809938 master-0 kubenswrapper[3986]: I0318 08:48:31.809358 3986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4-host-slash" (OuterVolumeSpecName: "host-slash") pod "824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4" (UID: "824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 08:48:31.809938 master-0 kubenswrapper[3986]: I0318 08:48:31.809392 3986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4" (UID: "824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 08:48:31.809938 master-0 kubenswrapper[3986]: I0318 08:48:31.809425 3986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4-node-log" (OuterVolumeSpecName: "node-log") pod "824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4" (UID: "824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4"). InnerVolumeSpecName "node-log". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 08:48:31.810339 master-0 kubenswrapper[3986]: I0318 08:48:31.810197 3986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4" (UID: "824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 08:48:31.810609 master-0 kubenswrapper[3986]: I0318 08:48:31.810534 3986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4" (UID: "824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 08:48:31.810609 master-0 kubenswrapper[3986]: I0318 08:48:31.810565 3986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4" (UID: "824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 08:48:31.810821 master-0 kubenswrapper[3986]: I0318 08:48:31.810648 3986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4" (UID: "824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4"). InnerVolumeSpecName "host-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 08:48:31.810821 master-0 kubenswrapper[3986]: I0318 08:48:31.810661 3986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4" (UID: "824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 08:48:31.810821 master-0 kubenswrapper[3986]: I0318 08:48:31.810691 3986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4" (UID: "824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 08:48:31.810821 master-0 kubenswrapper[3986]: I0318 08:48:31.810725 3986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4" (UID: "824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 08:48:31.810821 master-0 kubenswrapper[3986]: I0318 08:48:31.810723 3986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4" (UID: "824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 08:48:31.810821 master-0 kubenswrapper[3986]: I0318 08:48:31.810740 3986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4" (UID: "824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 08:48:31.810821 master-0 kubenswrapper[3986]: I0318 08:48:31.810797 3986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4" (UID: "824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 08:48:31.811334 master-0 kubenswrapper[3986]: I0318 08:48:31.811189 3986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4" (UID: "824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 08:48:31.816768 master-0 kubenswrapper[3986]: I0318 08:48:31.816670 3986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4-kube-api-access-dhf6l" (OuterVolumeSpecName: "kube-api-access-dhf6l") pod "824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4" (UID: "824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4"). InnerVolumeSpecName "kube-api-access-dhf6l". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 08:48:31.817112 master-0 kubenswrapper[3986]: I0318 08:48:31.817038 3986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4" (UID: "824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 08:48:31.819476 master-0 kubenswrapper[3986]: I0318 08:48:31.819404 3986 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4" (UID: "824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 08:48:31.869634 master-0 kubenswrapper[3986]: I0318 08:48:31.869575 3986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-cxws9"] Mar 18 08:48:31.869925 master-0 kubenswrapper[3986]: E0318 08:48:31.869721 3986 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4" containerName="northd" Mar 18 08:48:31.869925 master-0 kubenswrapper[3986]: I0318 08:48:31.869743 3986 state_mem.go:107] "Deleted CPUSet assignment" podUID="824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4" containerName="northd" Mar 18 08:48:31.869925 master-0 kubenswrapper[3986]: E0318 08:48:31.869758 3986 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4" containerName="ovnkube-controller" Mar 18 08:48:31.869925 master-0 kubenswrapper[3986]: I0318 08:48:31.869771 3986 state_mem.go:107] "Deleted CPUSet assignment" podUID="824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4" containerName="ovnkube-controller" Mar 18 
08:48:31.869925 master-0 kubenswrapper[3986]: E0318 08:48:31.869785 3986 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4" containerName="ovn-controller" Mar 18 08:48:31.869925 master-0 kubenswrapper[3986]: I0318 08:48:31.869797 3986 state_mem.go:107] "Deleted CPUSet assignment" podUID="824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4" containerName="ovn-controller" Mar 18 08:48:31.869925 master-0 kubenswrapper[3986]: E0318 08:48:31.869811 3986 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4" containerName="ovn-acl-logging" Mar 18 08:48:31.869925 master-0 kubenswrapper[3986]: I0318 08:48:31.869823 3986 state_mem.go:107] "Deleted CPUSet assignment" podUID="824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4" containerName="ovn-acl-logging" Mar 18 08:48:31.869925 master-0 kubenswrapper[3986]: E0318 08:48:31.869837 3986 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4" containerName="kube-rbac-proxy-node" Mar 18 08:48:31.869925 master-0 kubenswrapper[3986]: I0318 08:48:31.869880 3986 state_mem.go:107] "Deleted CPUSet assignment" podUID="824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4" containerName="kube-rbac-proxy-node" Mar 18 08:48:31.869925 master-0 kubenswrapper[3986]: E0318 08:48:31.869895 3986 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4" containerName="nbdb" Mar 18 08:48:31.869925 master-0 kubenswrapper[3986]: I0318 08:48:31.869907 3986 state_mem.go:107] "Deleted CPUSet assignment" podUID="824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4" containerName="nbdb" Mar 18 08:48:31.869925 master-0 kubenswrapper[3986]: E0318 08:48:31.869921 3986 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4" containerName="kubecfg-setup" Mar 18 08:48:31.869925 master-0 kubenswrapper[3986]: I0318 08:48:31.869933 3986 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4" containerName="kubecfg-setup" Mar 18 08:48:31.869925 master-0 kubenswrapper[3986]: E0318 08:48:31.869949 3986 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4" containerName="kube-rbac-proxy-ovn-metrics" Mar 18 08:48:31.869925 master-0 kubenswrapper[3986]: I0318 08:48:31.869962 3986 state_mem.go:107] "Deleted CPUSet assignment" podUID="824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4" containerName="kube-rbac-proxy-ovn-metrics" Mar 18 08:48:31.870819 master-0 kubenswrapper[3986]: E0318 08:48:31.869976 3986 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4" containerName="sbdb" Mar 18 08:48:31.870819 master-0 kubenswrapper[3986]: I0318 08:48:31.869988 3986 state_mem.go:107] "Deleted CPUSet assignment" podUID="824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4" containerName="sbdb" Mar 18 08:48:31.870819 master-0 kubenswrapper[3986]: I0318 08:48:31.870044 3986 memory_manager.go:354] "RemoveStaleState removing state" podUID="824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4" containerName="sbdb" Mar 18 08:48:31.870819 master-0 kubenswrapper[3986]: I0318 08:48:31.870061 3986 memory_manager.go:354] "RemoveStaleState removing state" podUID="824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4" containerName="ovnkube-controller" Mar 18 08:48:31.870819 master-0 kubenswrapper[3986]: I0318 08:48:31.870074 3986 memory_manager.go:354] "RemoveStaleState removing state" podUID="824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4" containerName="nbdb" Mar 18 08:48:31.870819 master-0 kubenswrapper[3986]: I0318 08:48:31.870089 3986 memory_manager.go:354] "RemoveStaleState removing state" podUID="824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4" containerName="ovn-acl-logging" Mar 18 08:48:31.870819 master-0 kubenswrapper[3986]: I0318 08:48:31.870101 3986 memory_manager.go:354] "RemoveStaleState removing state" podUID="824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4" containerName="kube-rbac-proxy-ovn-metrics" 
Mar 18 08:48:31.870819 master-0 kubenswrapper[3986]: I0318 08:48:31.870114 3986 memory_manager.go:354] "RemoveStaleState removing state" podUID="824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4" containerName="ovn-controller" Mar 18 08:48:31.870819 master-0 kubenswrapper[3986]: I0318 08:48:31.870127 3986 memory_manager.go:354] "RemoveStaleState removing state" podUID="824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4" containerName="kube-rbac-proxy-node" Mar 18 08:48:31.870819 master-0 kubenswrapper[3986]: I0318 08:48:31.870139 3986 memory_manager.go:354] "RemoveStaleState removing state" podUID="824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4" containerName="northd" Mar 18 08:48:31.871381 master-0 kubenswrapper[3986]: I0318 08:48:31.871194 3986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-cxws9" Mar 18 08:48:31.912013 master-0 kubenswrapper[3986]: I0318 08:48:31.909527 3986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/2207df9e-f21e-4c30-98d5-248ae99c245e-host-kubelet\") pod \"ovnkube-node-cxws9\" (UID: \"2207df9e-f21e-4c30-98d5-248ae99c245e\") " pod="openshift-ovn-kubernetes/ovnkube-node-cxws9" Mar 18 08:48:31.912013 master-0 kubenswrapper[3986]: I0318 08:48:31.909650 3986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/2207df9e-f21e-4c30-98d5-248ae99c245e-ovn-node-metrics-cert\") pod \"ovnkube-node-cxws9\" (UID: \"2207df9e-f21e-4c30-98d5-248ae99c245e\") " pod="openshift-ovn-kubernetes/ovnkube-node-cxws9" Mar 18 08:48:31.912013 master-0 kubenswrapper[3986]: I0318 08:48:31.909709 3986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2207df9e-f21e-4c30-98d5-248ae99c245e-host-run-ovn-kubernetes\") pod 
\"ovnkube-node-cxws9\" (UID: \"2207df9e-f21e-4c30-98d5-248ae99c245e\") " pod="openshift-ovn-kubernetes/ovnkube-node-cxws9" Mar 18 08:48:31.912013 master-0 kubenswrapper[3986]: I0318 08:48:31.909758 3986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2207df9e-f21e-4c30-98d5-248ae99c245e-etc-openvswitch\") pod \"ovnkube-node-cxws9\" (UID: \"2207df9e-f21e-4c30-98d5-248ae99c245e\") " pod="openshift-ovn-kubernetes/ovnkube-node-cxws9" Mar 18 08:48:31.912013 master-0 kubenswrapper[3986]: I0318 08:48:31.909894 3986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2207df9e-f21e-4c30-98d5-248ae99c245e-var-lib-openvswitch\") pod \"ovnkube-node-cxws9\" (UID: \"2207df9e-f21e-4c30-98d5-248ae99c245e\") " pod="openshift-ovn-kubernetes/ovnkube-node-cxws9" Mar 18 08:48:31.912013 master-0 kubenswrapper[3986]: I0318 08:48:31.909971 3986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/2207df9e-f21e-4c30-98d5-248ae99c245e-log-socket\") pod \"ovnkube-node-cxws9\" (UID: \"2207df9e-f21e-4c30-98d5-248ae99c245e\") " pod="openshift-ovn-kubernetes/ovnkube-node-cxws9" Mar 18 08:48:31.912013 master-0 kubenswrapper[3986]: I0318 08:48:31.910025 3986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/2207df9e-f21e-4c30-98d5-248ae99c245e-host-cni-bin\") pod \"ovnkube-node-cxws9\" (UID: \"2207df9e-f21e-4c30-98d5-248ae99c245e\") " pod="openshift-ovn-kubernetes/ovnkube-node-cxws9" Mar 18 08:48:31.912013 master-0 kubenswrapper[3986]: I0318 08:48:31.910075 3986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: 
\"kubernetes.io/configmap/2207df9e-f21e-4c30-98d5-248ae99c245e-ovnkube-config\") pod \"ovnkube-node-cxws9\" (UID: \"2207df9e-f21e-4c30-98d5-248ae99c245e\") " pod="openshift-ovn-kubernetes/ovnkube-node-cxws9" Mar 18 08:48:31.912013 master-0 kubenswrapper[3986]: I0318 08:48:31.910127 3986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cj9fr\" (UniqueName: \"kubernetes.io/projected/2207df9e-f21e-4c30-98d5-248ae99c245e-kube-api-access-cj9fr\") pod \"ovnkube-node-cxws9\" (UID: \"2207df9e-f21e-4c30-98d5-248ae99c245e\") " pod="openshift-ovn-kubernetes/ovnkube-node-cxws9" Mar 18 08:48:31.912013 master-0 kubenswrapper[3986]: I0318 08:48:31.910181 3986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/2207df9e-f21e-4c30-98d5-248ae99c245e-host-run-netns\") pod \"ovnkube-node-cxws9\" (UID: \"2207df9e-f21e-4c30-98d5-248ae99c245e\") " pod="openshift-ovn-kubernetes/ovnkube-node-cxws9" Mar 18 08:48:31.912013 master-0 kubenswrapper[3986]: I0318 08:48:31.910273 3986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/2207df9e-f21e-4c30-98d5-248ae99c245e-run-ovn\") pod \"ovnkube-node-cxws9\" (UID: \"2207df9e-f21e-4c30-98d5-248ae99c245e\") " pod="openshift-ovn-kubernetes/ovnkube-node-cxws9" Mar 18 08:48:31.912013 master-0 kubenswrapper[3986]: I0318 08:48:31.910322 3986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/2207df9e-f21e-4c30-98d5-248ae99c245e-ovnkube-script-lib\") pod \"ovnkube-node-cxws9\" (UID: \"2207df9e-f21e-4c30-98d5-248ae99c245e\") " pod="openshift-ovn-kubernetes/ovnkube-node-cxws9" Mar 18 08:48:31.912013 master-0 kubenswrapper[3986]: I0318 08:48:31.910394 3986 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/2207df9e-f21e-4c30-98d5-248ae99c245e-host-slash\") pod \"ovnkube-node-cxws9\" (UID: \"2207df9e-f21e-4c30-98d5-248ae99c245e\") " pod="openshift-ovn-kubernetes/ovnkube-node-cxws9" Mar 18 08:48:31.912013 master-0 kubenswrapper[3986]: I0318 08:48:31.910447 3986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/2207df9e-f21e-4c30-98d5-248ae99c245e-systemd-units\") pod \"ovnkube-node-cxws9\" (UID: \"2207df9e-f21e-4c30-98d5-248ae99c245e\") " pod="openshift-ovn-kubernetes/ovnkube-node-cxws9" Mar 18 08:48:31.912013 master-0 kubenswrapper[3986]: I0318 08:48:31.910518 3986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/2207df9e-f21e-4c30-98d5-248ae99c245e-run-systemd\") pod \"ovnkube-node-cxws9\" (UID: \"2207df9e-f21e-4c30-98d5-248ae99c245e\") " pod="openshift-ovn-kubernetes/ovnkube-node-cxws9" Mar 18 08:48:31.912013 master-0 kubenswrapper[3986]: I0318 08:48:31.910565 3986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/2207df9e-f21e-4c30-98d5-248ae99c245e-node-log\") pod \"ovnkube-node-cxws9\" (UID: \"2207df9e-f21e-4c30-98d5-248ae99c245e\") " pod="openshift-ovn-kubernetes/ovnkube-node-cxws9" Mar 18 08:48:31.913372 master-0 kubenswrapper[3986]: I0318 08:48:31.910625 3986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2207df9e-f21e-4c30-98d5-248ae99c245e-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-cxws9\" (UID: \"2207df9e-f21e-4c30-98d5-248ae99c245e\") " pod="openshift-ovn-kubernetes/ovnkube-node-cxws9" Mar 18 
08:48:31.913372 master-0 kubenswrapper[3986]: I0318 08:48:31.910683 3986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2207df9e-f21e-4c30-98d5-248ae99c245e-host-cni-netd\") pod \"ovnkube-node-cxws9\" (UID: \"2207df9e-f21e-4c30-98d5-248ae99c245e\") " pod="openshift-ovn-kubernetes/ovnkube-node-cxws9" Mar 18 08:48:31.913372 master-0 kubenswrapper[3986]: I0318 08:48:31.910729 3986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/2207df9e-f21e-4c30-98d5-248ae99c245e-env-overrides\") pod \"ovnkube-node-cxws9\" (UID: \"2207df9e-f21e-4c30-98d5-248ae99c245e\") " pod="openshift-ovn-kubernetes/ovnkube-node-cxws9" Mar 18 08:48:31.913372 master-0 kubenswrapper[3986]: I0318 08:48:31.910907 3986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2207df9e-f21e-4c30-98d5-248ae99c245e-run-openvswitch\") pod \"ovnkube-node-cxws9\" (UID: \"2207df9e-f21e-4c30-98d5-248ae99c245e\") " pod="openshift-ovn-kubernetes/ovnkube-node-cxws9" Mar 18 08:48:31.913372 master-0 kubenswrapper[3986]: I0318 08:48:31.911013 3986 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4-ovn-node-metrics-cert\") on node \"master-0\" DevicePath \"\"" Mar 18 08:48:31.913372 master-0 kubenswrapper[3986]: I0318 08:48:31.911049 3986 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4-run-ovn\") on node \"master-0\" DevicePath \"\"" Mar 18 08:48:31.913372 master-0 kubenswrapper[3986]: I0318 08:48:31.911070 3986 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: 
\"kubernetes.io/configmap/824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4-ovnkube-script-lib\") on node \"master-0\" DevicePath \"\"" Mar 18 08:48:31.913372 master-0 kubenswrapper[3986]: I0318 08:48:31.911094 3986 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4-host-cni-netd\") on node \"master-0\" DevicePath \"\"" Mar 18 08:48:31.913372 master-0 kubenswrapper[3986]: I0318 08:48:31.911112 3986 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4-host-var-lib-cni-networks-ovn-kubernetes\") on node \"master-0\" DevicePath \"\"" Mar 18 08:48:31.913372 master-0 kubenswrapper[3986]: I0318 08:48:31.911133 3986 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4-var-lib-openvswitch\") on node \"master-0\" DevicePath \"\"" Mar 18 08:48:31.913372 master-0 kubenswrapper[3986]: I0318 08:48:31.911155 3986 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4-systemd-units\") on node \"master-0\" DevicePath \"\"" Mar 18 08:48:31.913372 master-0 kubenswrapper[3986]: I0318 08:48:31.911172 3986 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dhf6l\" (UniqueName: \"kubernetes.io/projected/824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4-kube-api-access-dhf6l\") on node \"master-0\" DevicePath \"\"" Mar 18 08:48:31.913372 master-0 kubenswrapper[3986]: I0318 08:48:31.911188 3986 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4-ovnkube-config\") on node \"master-0\" DevicePath \"\"" Mar 18 08:48:31.913372 master-0 kubenswrapper[3986]: I0318 08:48:31.911205 3986 
reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4-run-systemd\") on node \"master-0\" DevicePath \"\"" Mar 18 08:48:31.913372 master-0 kubenswrapper[3986]: I0318 08:48:31.911222 3986 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4-host-cni-bin\") on node \"master-0\" DevicePath \"\"" Mar 18 08:48:31.913372 master-0 kubenswrapper[3986]: I0318 08:48:31.911240 3986 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4-env-overrides\") on node \"master-0\" DevicePath \"\"" Mar 18 08:48:31.913372 master-0 kubenswrapper[3986]: I0318 08:48:31.911263 3986 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4-host-kubelet\") on node \"master-0\" DevicePath \"\"" Mar 18 08:48:31.913372 master-0 kubenswrapper[3986]: I0318 08:48:31.911289 3986 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4-etc-openvswitch\") on node \"master-0\" DevicePath \"\"" Mar 18 08:48:31.913372 master-0 kubenswrapper[3986]: I0318 08:48:31.911309 3986 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4-run-openvswitch\") on node \"master-0\" DevicePath \"\"" Mar 18 08:48:31.913372 master-0 kubenswrapper[3986]: I0318 08:48:31.911328 3986 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4-host-run-ovn-kubernetes\") on node \"master-0\" DevicePath \"\"" Mar 18 08:48:31.913372 master-0 kubenswrapper[3986]: I0318 08:48:31.911350 
3986 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4-host-slash\") on node \"master-0\" DevicePath \"\"" Mar 18 08:48:31.913372 master-0 kubenswrapper[3986]: I0318 08:48:31.911421 3986 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4-host-run-netns\") on node \"master-0\" DevicePath \"\"" Mar 18 08:48:31.913372 master-0 kubenswrapper[3986]: I0318 08:48:31.911446 3986 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4-node-log\") on node \"master-0\" DevicePath \"\"" Mar 18 08:48:32.009952 master-0 kubenswrapper[3986]: I0318 08:48:32.009814 3986 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fk7vj_824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4/ovnkube-controller/0.log" Mar 18 08:48:32.011711 master-0 kubenswrapper[3986]: I0318 08:48:32.011674 3986 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fk7vj_824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4/kube-rbac-proxy-ovn-metrics/0.log" Mar 18 08:48:32.011771 master-0 kubenswrapper[3986]: I0318 08:48:32.011706 3986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/2207df9e-f21e-4c30-98d5-248ae99c245e-env-overrides\") pod \"ovnkube-node-cxws9\" (UID: \"2207df9e-f21e-4c30-98d5-248ae99c245e\") " pod="openshift-ovn-kubernetes/ovnkube-node-cxws9" Mar 18 08:48:32.011801 master-0 kubenswrapper[3986]: I0318 08:48:32.011774 3986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2207df9e-f21e-4c30-98d5-248ae99c245e-run-openvswitch\") pod \"ovnkube-node-cxws9\" (UID: 
\"2207df9e-f21e-4c30-98d5-248ae99c245e\") " pod="openshift-ovn-kubernetes/ovnkube-node-cxws9" Mar 18 08:48:32.011873 master-0 kubenswrapper[3986]: I0318 08:48:32.011834 3986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/2207df9e-f21e-4c30-98d5-248ae99c245e-ovn-node-metrics-cert\") pod \"ovnkube-node-cxws9\" (UID: \"2207df9e-f21e-4c30-98d5-248ae99c245e\") " pod="openshift-ovn-kubernetes/ovnkube-node-cxws9" Mar 18 08:48:32.011947 master-0 kubenswrapper[3986]: I0318 08:48:32.011924 3986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2207df9e-f21e-4c30-98d5-248ae99c245e-run-openvswitch\") pod \"ovnkube-node-cxws9\" (UID: \"2207df9e-f21e-4c30-98d5-248ae99c245e\") " pod="openshift-ovn-kubernetes/ovnkube-node-cxws9" Mar 18 08:48:32.011947 master-0 kubenswrapper[3986]: I0318 08:48:32.011922 3986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/2207df9e-f21e-4c30-98d5-248ae99c245e-host-kubelet\") pod \"ovnkube-node-cxws9\" (UID: \"2207df9e-f21e-4c30-98d5-248ae99c245e\") " pod="openshift-ovn-kubernetes/ovnkube-node-cxws9" Mar 18 08:48:32.012004 master-0 kubenswrapper[3986]: I0318 08:48:32.011981 3986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2207df9e-f21e-4c30-98d5-248ae99c245e-host-run-ovn-kubernetes\") pod \"ovnkube-node-cxws9\" (UID: \"2207df9e-f21e-4c30-98d5-248ae99c245e\") " pod="openshift-ovn-kubernetes/ovnkube-node-cxws9" Mar 18 08:48:32.012004 master-0 kubenswrapper[3986]: I0318 08:48:32.011983 3986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/2207df9e-f21e-4c30-98d5-248ae99c245e-host-kubelet\") pod \"ovnkube-node-cxws9\" (UID: 
\"2207df9e-f21e-4c30-98d5-248ae99c245e\") " pod="openshift-ovn-kubernetes/ovnkube-node-cxws9" Mar 18 08:48:32.012051 master-0 kubenswrapper[3986]: I0318 08:48:32.012000 3986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2207df9e-f21e-4c30-98d5-248ae99c245e-etc-openvswitch\") pod \"ovnkube-node-cxws9\" (UID: \"2207df9e-f21e-4c30-98d5-248ae99c245e\") " pod="openshift-ovn-kubernetes/ovnkube-node-cxws9" Mar 18 08:48:32.012051 master-0 kubenswrapper[3986]: I0318 08:48:32.012020 3986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2207df9e-f21e-4c30-98d5-248ae99c245e-etc-openvswitch\") pod \"ovnkube-node-cxws9\" (UID: \"2207df9e-f21e-4c30-98d5-248ae99c245e\") " pod="openshift-ovn-kubernetes/ovnkube-node-cxws9" Mar 18 08:48:32.012107 master-0 kubenswrapper[3986]: I0318 08:48:32.012054 3986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2207df9e-f21e-4c30-98d5-248ae99c245e-host-run-ovn-kubernetes\") pod \"ovnkube-node-cxws9\" (UID: \"2207df9e-f21e-4c30-98d5-248ae99c245e\") " pod="openshift-ovn-kubernetes/ovnkube-node-cxws9" Mar 18 08:48:32.012107 master-0 kubenswrapper[3986]: I0318 08:48:32.012055 3986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2207df9e-f21e-4c30-98d5-248ae99c245e-var-lib-openvswitch\") pod \"ovnkube-node-cxws9\" (UID: \"2207df9e-f21e-4c30-98d5-248ae99c245e\") " pod="openshift-ovn-kubernetes/ovnkube-node-cxws9" Mar 18 08:48:32.012160 master-0 kubenswrapper[3986]: I0318 08:48:32.012106 3986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/2207df9e-f21e-4c30-98d5-248ae99c245e-log-socket\") pod \"ovnkube-node-cxws9\" (UID: 
\"2207df9e-f21e-4c30-98d5-248ae99c245e\") " pod="openshift-ovn-kubernetes/ovnkube-node-cxws9" Mar 18 08:48:32.012197 master-0 kubenswrapper[3986]: I0318 08:48:32.012155 3986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/2207df9e-f21e-4c30-98d5-248ae99c245e-host-cni-bin\") pod \"ovnkube-node-cxws9\" (UID: \"2207df9e-f21e-4c30-98d5-248ae99c245e\") " pod="openshift-ovn-kubernetes/ovnkube-node-cxws9" Mar 18 08:48:32.012246 master-0 kubenswrapper[3986]: I0318 08:48:32.012200 3986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/2207df9e-f21e-4c30-98d5-248ae99c245e-ovnkube-config\") pod \"ovnkube-node-cxws9\" (UID: \"2207df9e-f21e-4c30-98d5-248ae99c245e\") " pod="openshift-ovn-kubernetes/ovnkube-node-cxws9" Mar 18 08:48:32.012298 master-0 kubenswrapper[3986]: I0318 08:48:32.012248 3986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cj9fr\" (UniqueName: \"kubernetes.io/projected/2207df9e-f21e-4c30-98d5-248ae99c245e-kube-api-access-cj9fr\") pod \"ovnkube-node-cxws9\" (UID: \"2207df9e-f21e-4c30-98d5-248ae99c245e\") " pod="openshift-ovn-kubernetes/ovnkube-node-cxws9" Mar 18 08:48:32.012298 master-0 kubenswrapper[3986]: I0318 08:48:32.012252 3986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2207df9e-f21e-4c30-98d5-248ae99c245e-var-lib-openvswitch\") pod \"ovnkube-node-cxws9\" (UID: \"2207df9e-f21e-4c30-98d5-248ae99c245e\") " pod="openshift-ovn-kubernetes/ovnkube-node-cxws9" Mar 18 08:48:32.012384 master-0 kubenswrapper[3986]: I0318 08:48:32.012337 3986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/2207df9e-f21e-4c30-98d5-248ae99c245e-host-run-netns\") pod \"ovnkube-node-cxws9\" (UID: 
\"2207df9e-f21e-4c30-98d5-248ae99c245e\") " pod="openshift-ovn-kubernetes/ovnkube-node-cxws9" Mar 18 08:48:32.012436 master-0 kubenswrapper[3986]: I0318 08:48:32.012387 3986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/2207df9e-f21e-4c30-98d5-248ae99c245e-host-run-netns\") pod \"ovnkube-node-cxws9\" (UID: \"2207df9e-f21e-4c30-98d5-248ae99c245e\") " pod="openshift-ovn-kubernetes/ovnkube-node-cxws9" Mar 18 08:48:32.012436 master-0 kubenswrapper[3986]: I0318 08:48:32.012331 3986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/2207df9e-f21e-4c30-98d5-248ae99c245e-host-cni-bin\") pod \"ovnkube-node-cxws9\" (UID: \"2207df9e-f21e-4c30-98d5-248ae99c245e\") " pod="openshift-ovn-kubernetes/ovnkube-node-cxws9" Mar 18 08:48:32.012436 master-0 kubenswrapper[3986]: I0318 08:48:32.012393 3986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/2207df9e-f21e-4c30-98d5-248ae99c245e-run-ovn\") pod \"ovnkube-node-cxws9\" (UID: \"2207df9e-f21e-4c30-98d5-248ae99c245e\") " pod="openshift-ovn-kubernetes/ovnkube-node-cxws9" Mar 18 08:48:32.012543 master-0 kubenswrapper[3986]: I0318 08:48:32.012421 3986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/2207df9e-f21e-4c30-98d5-248ae99c245e-log-socket\") pod \"ovnkube-node-cxws9\" (UID: \"2207df9e-f21e-4c30-98d5-248ae99c245e\") " pod="openshift-ovn-kubernetes/ovnkube-node-cxws9" Mar 18 08:48:32.012543 master-0 kubenswrapper[3986]: I0318 08:48:32.012463 3986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/2207df9e-f21e-4c30-98d5-248ae99c245e-ovnkube-script-lib\") pod \"ovnkube-node-cxws9\" (UID: \"2207df9e-f21e-4c30-98d5-248ae99c245e\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-cxws9" Mar 18 08:48:32.012543 master-0 kubenswrapper[3986]: I0318 08:48:32.012491 3986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/2207df9e-f21e-4c30-98d5-248ae99c245e-run-ovn\") pod \"ovnkube-node-cxws9\" (UID: \"2207df9e-f21e-4c30-98d5-248ae99c245e\") " pod="openshift-ovn-kubernetes/ovnkube-node-cxws9" Mar 18 08:48:32.012543 master-0 kubenswrapper[3986]: I0318 08:48:32.012523 3986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/2207df9e-f21e-4c30-98d5-248ae99c245e-host-slash\") pod \"ovnkube-node-cxws9\" (UID: \"2207df9e-f21e-4c30-98d5-248ae99c245e\") " pod="openshift-ovn-kubernetes/ovnkube-node-cxws9" Mar 18 08:48:32.012951 master-0 kubenswrapper[3986]: I0318 08:48:32.012569 3986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/2207df9e-f21e-4c30-98d5-248ae99c245e-systemd-units\") pod \"ovnkube-node-cxws9\" (UID: \"2207df9e-f21e-4c30-98d5-248ae99c245e\") " pod="openshift-ovn-kubernetes/ovnkube-node-cxws9" Mar 18 08:48:32.012951 master-0 kubenswrapper[3986]: I0318 08:48:32.012648 3986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/2207df9e-f21e-4c30-98d5-248ae99c245e-systemd-units\") pod \"ovnkube-node-cxws9\" (UID: \"2207df9e-f21e-4c30-98d5-248ae99c245e\") " pod="openshift-ovn-kubernetes/ovnkube-node-cxws9" Mar 18 08:48:32.012951 master-0 kubenswrapper[3986]: I0318 08:48:32.012721 3986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/2207df9e-f21e-4c30-98d5-248ae99c245e-run-systemd\") pod \"ovnkube-node-cxws9\" (UID: \"2207df9e-f21e-4c30-98d5-248ae99c245e\") " pod="openshift-ovn-kubernetes/ovnkube-node-cxws9" Mar 18 
08:48:32.012951 master-0 kubenswrapper[3986]: I0318 08:48:32.012772 3986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/2207df9e-f21e-4c30-98d5-248ae99c245e-run-systemd\") pod \"ovnkube-node-cxws9\" (UID: \"2207df9e-f21e-4c30-98d5-248ae99c245e\") " pod="openshift-ovn-kubernetes/ovnkube-node-cxws9" Mar 18 08:48:32.012951 master-0 kubenswrapper[3986]: I0318 08:48:32.012786 3986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/2207df9e-f21e-4c30-98d5-248ae99c245e-host-slash\") pod \"ovnkube-node-cxws9\" (UID: \"2207df9e-f21e-4c30-98d5-248ae99c245e\") " pod="openshift-ovn-kubernetes/ovnkube-node-cxws9" Mar 18 08:48:32.012951 master-0 kubenswrapper[3986]: I0318 08:48:32.012831 3986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/2207df9e-f21e-4c30-98d5-248ae99c245e-node-log\") pod \"ovnkube-node-cxws9\" (UID: \"2207df9e-f21e-4c30-98d5-248ae99c245e\") " pod="openshift-ovn-kubernetes/ovnkube-node-cxws9" Mar 18 08:48:32.012951 master-0 kubenswrapper[3986]: I0318 08:48:32.012797 3986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/2207df9e-f21e-4c30-98d5-248ae99c245e-node-log\") pod \"ovnkube-node-cxws9\" (UID: \"2207df9e-f21e-4c30-98d5-248ae99c245e\") " pod="openshift-ovn-kubernetes/ovnkube-node-cxws9" Mar 18 08:48:32.013194 master-0 kubenswrapper[3986]: I0318 08:48:32.013046 3986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2207df9e-f21e-4c30-98d5-248ae99c245e-host-cni-netd\") pod \"ovnkube-node-cxws9\" (UID: \"2207df9e-f21e-4c30-98d5-248ae99c245e\") " pod="openshift-ovn-kubernetes/ovnkube-node-cxws9" Mar 18 08:48:32.013238 master-0 kubenswrapper[3986]: I0318 08:48:32.013201 3986 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2207df9e-f21e-4c30-98d5-248ae99c245e-host-cni-netd\") pod \"ovnkube-node-cxws9\" (UID: \"2207df9e-f21e-4c30-98d5-248ae99c245e\") " pod="openshift-ovn-kubernetes/ovnkube-node-cxws9" Mar 18 08:48:32.013277 master-0 kubenswrapper[3986]: I0318 08:48:32.013209 3986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2207df9e-f21e-4c30-98d5-248ae99c245e-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-cxws9\" (UID: \"2207df9e-f21e-4c30-98d5-248ae99c245e\") " pod="openshift-ovn-kubernetes/ovnkube-node-cxws9" Mar 18 08:48:32.013405 master-0 kubenswrapper[3986]: I0318 08:48:32.013242 3986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/2207df9e-f21e-4c30-98d5-248ae99c245e-ovnkube-script-lib\") pod \"ovnkube-node-cxws9\" (UID: \"2207df9e-f21e-4c30-98d5-248ae99c245e\") " pod="openshift-ovn-kubernetes/ovnkube-node-cxws9" Mar 18 08:48:32.013405 master-0 kubenswrapper[3986]: I0318 08:48:32.013330 3986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2207df9e-f21e-4c30-98d5-248ae99c245e-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-cxws9\" (UID: \"2207df9e-f21e-4c30-98d5-248ae99c245e\") " pod="openshift-ovn-kubernetes/ovnkube-node-cxws9" Mar 18 08:48:32.013484 master-0 kubenswrapper[3986]: I0318 08:48:32.013355 3986 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fk7vj_824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4/kube-rbac-proxy-node/0.log" Mar 18 08:48:32.013636 master-0 kubenswrapper[3986]: I0318 08:48:32.013588 3986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"env-overrides\" (UniqueName: \"kubernetes.io/configmap/2207df9e-f21e-4c30-98d5-248ae99c245e-env-overrides\") pod \"ovnkube-node-cxws9\" (UID: \"2207df9e-f21e-4c30-98d5-248ae99c245e\") " pod="openshift-ovn-kubernetes/ovnkube-node-cxws9" Mar 18 08:48:32.014089 master-0 kubenswrapper[3986]: I0318 08:48:32.014063 3986 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fk7vj_824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4/ovn-acl-logging/0.log" Mar 18 08:48:32.014514 master-0 kubenswrapper[3986]: I0318 08:48:32.014492 3986 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fk7vj_824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4/ovn-controller/0.log" Mar 18 08:48:32.016491 master-0 kubenswrapper[3986]: I0318 08:48:32.016375 3986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/2207df9e-f21e-4c30-98d5-248ae99c245e-ovnkube-config\") pod \"ovnkube-node-cxws9\" (UID: \"2207df9e-f21e-4c30-98d5-248ae99c245e\") " pod="openshift-ovn-kubernetes/ovnkube-node-cxws9" Mar 18 08:48:32.016751 master-0 kubenswrapper[3986]: I0318 08:48:32.016725 3986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/2207df9e-f21e-4c30-98d5-248ae99c245e-ovn-node-metrics-cert\") pod \"ovnkube-node-cxws9\" (UID: \"2207df9e-f21e-4c30-98d5-248ae99c245e\") " pod="openshift-ovn-kubernetes/ovnkube-node-cxws9" Mar 18 08:48:32.017001 master-0 kubenswrapper[3986]: I0318 08:48:32.016967 3986 generic.go:334] "Generic (PLEG): container finished" podID="824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4" containerID="159d5da7df861d4690d299ee8141f1bfbf00878beec9987a371bdd22a4166912" exitCode=2 Mar 18 08:48:32.017063 master-0 kubenswrapper[3986]: I0318 08:48:32.017007 3986 generic.go:334] "Generic (PLEG): container finished" podID="824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4" 
containerID="8525272ae57fa1704461815bfe4fbd996a299145b716fba6e370e1c466c00d6c" exitCode=0 Mar 18 08:48:32.017063 master-0 kubenswrapper[3986]: I0318 08:48:32.017019 3986 generic.go:334] "Generic (PLEG): container finished" podID="824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4" containerID="8cb5653af0b594d3bf3f7fe8a3074e5cc4ad5ac373787ce8176591db02bd6f0e" exitCode=0 Mar 18 08:48:32.017063 master-0 kubenswrapper[3986]: I0318 08:48:32.017030 3986 generic.go:334] "Generic (PLEG): container finished" podID="824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4" containerID="33a34ca8c482f924483736fa5791eeae38cd3286d691804af3e19ac52202b81b" exitCode=0 Mar 18 08:48:32.017167 master-0 kubenswrapper[3986]: I0318 08:48:32.017111 3986 generic.go:334] "Generic (PLEG): container finished" podID="824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4" containerID="eaab1ff96382a50237b8b272affe76c2722c73879981cd56f9e6f7b3d85937cc" exitCode=143 Mar 18 08:48:32.017167 master-0 kubenswrapper[3986]: I0318 08:48:32.017124 3986 generic.go:334] "Generic (PLEG): container finished" podID="824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4" containerID="92b5e9f513e106f14135b5c76c6ac1e43974c3eb14ab62f94ace9906fda39d18" exitCode=143 Mar 18 08:48:32.017167 master-0 kubenswrapper[3986]: I0318 08:48:32.017133 3986 generic.go:334] "Generic (PLEG): container finished" podID="824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4" containerID="d8a91c4a44989f4812dd7ac354de9969484381c4e7266113795f9ff2399ce5db" exitCode=143 Mar 18 08:48:32.017167 master-0 kubenswrapper[3986]: I0318 08:48:32.017143 3986 generic.go:334] "Generic (PLEG): container finished" podID="824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4" containerID="78d1a785d7e310b8a1bf5695969aab6968f0c1f3e38248f24697d1c019bdceac" exitCode=143 Mar 18 08:48:32.017293 master-0 kubenswrapper[3986]: I0318 08:48:32.017168 3986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fk7vj" 
event={"ID":"824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4","Type":"ContainerDied","Data":"159d5da7df861d4690d299ee8141f1bfbf00878beec9987a371bdd22a4166912"} Mar 18 08:48:32.017293 master-0 kubenswrapper[3986]: I0318 08:48:32.017219 3986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fk7vj" event={"ID":"824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4","Type":"ContainerDied","Data":"8525272ae57fa1704461815bfe4fbd996a299145b716fba6e370e1c466c00d6c"} Mar 18 08:48:32.017293 master-0 kubenswrapper[3986]: I0318 08:48:32.017234 3986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fk7vj" event={"ID":"824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4","Type":"ContainerDied","Data":"8cb5653af0b594d3bf3f7fe8a3074e5cc4ad5ac373787ce8176591db02bd6f0e"} Mar 18 08:48:32.017293 master-0 kubenswrapper[3986]: I0318 08:48:32.017247 3986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fk7vj" event={"ID":"824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4","Type":"ContainerDied","Data":"33a34ca8c482f924483736fa5791eeae38cd3286d691804af3e19ac52202b81b"} Mar 18 08:48:32.017293 master-0 kubenswrapper[3986]: I0318 08:48:32.017292 3986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fk7vj" event={"ID":"824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4","Type":"ContainerDied","Data":"eaab1ff96382a50237b8b272affe76c2722c73879981cd56f9e6f7b3d85937cc"} Mar 18 08:48:32.017453 master-0 kubenswrapper[3986]: I0318 08:48:32.017307 3986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fk7vj" event={"ID":"824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4","Type":"ContainerDied","Data":"92b5e9f513e106f14135b5c76c6ac1e43974c3eb14ab62f94ace9906fda39d18"} Mar 18 08:48:32.017453 master-0 kubenswrapper[3986]: I0318 08:48:32.017422 3986 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-fk7vj" Mar 18 08:48:32.017518 master-0 kubenswrapper[3986]: I0318 08:48:32.017321 3986 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d8a91c4a44989f4812dd7ac354de9969484381c4e7266113795f9ff2399ce5db"} Mar 18 08:48:32.017518 master-0 kubenswrapper[3986]: I0318 08:48:32.017503 3986 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"78d1a785d7e310b8a1bf5695969aab6968f0c1f3e38248f24697d1c019bdceac"} Mar 18 08:48:32.017518 master-0 kubenswrapper[3986]: I0318 08:48:32.017510 3986 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"97663fadd354644f1be4baa688422665bf9e5c17814bf5baea78674093569de8"} Mar 18 08:48:32.017610 master-0 kubenswrapper[3986]: I0318 08:48:32.017520 3986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fk7vj" event={"ID":"824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4","Type":"ContainerDied","Data":"d8a91c4a44989f4812dd7ac354de9969484381c4e7266113795f9ff2399ce5db"} Mar 18 08:48:32.017610 master-0 kubenswrapper[3986]: I0318 08:48:32.017538 3986 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"159d5da7df861d4690d299ee8141f1bfbf00878beec9987a371bdd22a4166912"} Mar 18 08:48:32.017610 master-0 kubenswrapper[3986]: I0318 08:48:32.017546 3986 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8525272ae57fa1704461815bfe4fbd996a299145b716fba6e370e1c466c00d6c"} Mar 18 08:48:32.017610 master-0 kubenswrapper[3986]: I0318 08:48:32.017552 3986 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8cb5653af0b594d3bf3f7fe8a3074e5cc4ad5ac373787ce8176591db02bd6f0e"} Mar 18 08:48:32.017610 
master-0 kubenswrapper[3986]: I0318 08:48:32.017559 3986 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"33a34ca8c482f924483736fa5791eeae38cd3286d691804af3e19ac52202b81b"} Mar 18 08:48:32.017610 master-0 kubenswrapper[3986]: I0318 08:48:32.017565 3986 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"eaab1ff96382a50237b8b272affe76c2722c73879981cd56f9e6f7b3d85937cc"} Mar 18 08:48:32.017610 master-0 kubenswrapper[3986]: I0318 08:48:32.017573 3986 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"92b5e9f513e106f14135b5c76c6ac1e43974c3eb14ab62f94ace9906fda39d18"} Mar 18 08:48:32.017610 master-0 kubenswrapper[3986]: I0318 08:48:32.017580 3986 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d8a91c4a44989f4812dd7ac354de9969484381c4e7266113795f9ff2399ce5db"} Mar 18 08:48:32.017610 master-0 kubenswrapper[3986]: I0318 08:48:32.017587 3986 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"78d1a785d7e310b8a1bf5695969aab6968f0c1f3e38248f24697d1c019bdceac"} Mar 18 08:48:32.017610 master-0 kubenswrapper[3986]: I0318 08:48:32.017594 3986 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"97663fadd354644f1be4baa688422665bf9e5c17814bf5baea78674093569de8"} Mar 18 08:48:32.017610 master-0 kubenswrapper[3986]: I0318 08:48:32.017603 3986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fk7vj" event={"ID":"824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4","Type":"ContainerDied","Data":"78d1a785d7e310b8a1bf5695969aab6968f0c1f3e38248f24697d1c019bdceac"} Mar 18 08:48:32.017610 master-0 kubenswrapper[3986]: I0318 08:48:32.017615 3986 pod_container_deletor.go:114] "Failed to issue 
the request to remove container" containerID={"Type":"cri-o","ID":"159d5da7df861d4690d299ee8141f1bfbf00878beec9987a371bdd22a4166912"} Mar 18 08:48:32.018000 master-0 kubenswrapper[3986]: I0318 08:48:32.017623 3986 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8525272ae57fa1704461815bfe4fbd996a299145b716fba6e370e1c466c00d6c"} Mar 18 08:48:32.018000 master-0 kubenswrapper[3986]: I0318 08:48:32.017631 3986 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8cb5653af0b594d3bf3f7fe8a3074e5cc4ad5ac373787ce8176591db02bd6f0e"} Mar 18 08:48:32.018000 master-0 kubenswrapper[3986]: I0318 08:48:32.017638 3986 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"33a34ca8c482f924483736fa5791eeae38cd3286d691804af3e19ac52202b81b"} Mar 18 08:48:32.018000 master-0 kubenswrapper[3986]: I0318 08:48:32.017648 3986 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"eaab1ff96382a50237b8b272affe76c2722c73879981cd56f9e6f7b3d85937cc"} Mar 18 08:48:32.018000 master-0 kubenswrapper[3986]: I0318 08:48:32.017655 3986 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"92b5e9f513e106f14135b5c76c6ac1e43974c3eb14ab62f94ace9906fda39d18"} Mar 18 08:48:32.018000 master-0 kubenswrapper[3986]: I0318 08:48:32.017662 3986 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d8a91c4a44989f4812dd7ac354de9969484381c4e7266113795f9ff2399ce5db"} Mar 18 08:48:32.018000 master-0 kubenswrapper[3986]: I0318 08:48:32.017668 3986 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"78d1a785d7e310b8a1bf5695969aab6968f0c1f3e38248f24697d1c019bdceac"} Mar 18 08:48:32.018000 master-0 
kubenswrapper[3986]: I0318 08:48:32.017674 3986 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"97663fadd354644f1be4baa688422665bf9e5c17814bf5baea78674093569de8"} Mar 18 08:48:32.018000 master-0 kubenswrapper[3986]: I0318 08:48:32.017684 3986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fk7vj" event={"ID":"824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4","Type":"ContainerDied","Data":"6f7e091cc6956264f5530fa4606adc44124201440fb69d366bae9e4dd97d842f"} Mar 18 08:48:32.018000 master-0 kubenswrapper[3986]: I0318 08:48:32.017694 3986 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"159d5da7df861d4690d299ee8141f1bfbf00878beec9987a371bdd22a4166912"} Mar 18 08:48:32.018000 master-0 kubenswrapper[3986]: I0318 08:48:32.017702 3986 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8525272ae57fa1704461815bfe4fbd996a299145b716fba6e370e1c466c00d6c"} Mar 18 08:48:32.018000 master-0 kubenswrapper[3986]: I0318 08:48:32.017709 3986 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8cb5653af0b594d3bf3f7fe8a3074e5cc4ad5ac373787ce8176591db02bd6f0e"} Mar 18 08:48:32.018000 master-0 kubenswrapper[3986]: I0318 08:48:32.017716 3986 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"33a34ca8c482f924483736fa5791eeae38cd3286d691804af3e19ac52202b81b"} Mar 18 08:48:32.018000 master-0 kubenswrapper[3986]: I0318 08:48:32.017723 3986 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"eaab1ff96382a50237b8b272affe76c2722c73879981cd56f9e6f7b3d85937cc"} Mar 18 08:48:32.018000 master-0 kubenswrapper[3986]: I0318 08:48:32.017731 3986 pod_container_deletor.go:114] "Failed to issue the 
request to remove container" containerID={"Type":"cri-o","ID":"92b5e9f513e106f14135b5c76c6ac1e43974c3eb14ab62f94ace9906fda39d18"} Mar 18 08:48:32.018000 master-0 kubenswrapper[3986]: I0318 08:48:32.017739 3986 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d8a91c4a44989f4812dd7ac354de9969484381c4e7266113795f9ff2399ce5db"} Mar 18 08:48:32.018000 master-0 kubenswrapper[3986]: I0318 08:48:32.017746 3986 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"78d1a785d7e310b8a1bf5695969aab6968f0c1f3e38248f24697d1c019bdceac"} Mar 18 08:48:32.018000 master-0 kubenswrapper[3986]: I0318 08:48:32.017759 3986 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"97663fadd354644f1be4baa688422665bf9e5c17814bf5baea78674093569de8"} Mar 18 08:48:32.018000 master-0 kubenswrapper[3986]: I0318 08:48:32.017776 3986 scope.go:117] "RemoveContainer" containerID="159d5da7df861d4690d299ee8141f1bfbf00878beec9987a371bdd22a4166912" Mar 18 08:48:32.044788 master-0 kubenswrapper[3986]: I0318 08:48:32.044735 3986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cj9fr\" (UniqueName: \"kubernetes.io/projected/2207df9e-f21e-4c30-98d5-248ae99c245e-kube-api-access-cj9fr\") pod \"ovnkube-node-cxws9\" (UID: \"2207df9e-f21e-4c30-98d5-248ae99c245e\") " pod="openshift-ovn-kubernetes/ovnkube-node-cxws9" Mar 18 08:48:32.047696 master-0 kubenswrapper[3986]: I0318 08:48:32.047665 3986 scope.go:117] "RemoveContainer" containerID="8525272ae57fa1704461815bfe4fbd996a299145b716fba6e370e1c466c00d6c" Mar 18 08:48:32.060793 master-0 kubenswrapper[3986]: I0318 08:48:32.060682 3986 scope.go:117] "RemoveContainer" containerID="8cb5653af0b594d3bf3f7fe8a3074e5cc4ad5ac373787ce8176591db02bd6f0e" Mar 18 08:48:32.077028 master-0 kubenswrapper[3986]: I0318 08:48:32.076972 3986 kubelet.go:2437] "SyncLoop 
DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-fk7vj"] Mar 18 08:48:32.080761 master-0 kubenswrapper[3986]: I0318 08:48:32.080320 3986 scope.go:117] "RemoveContainer" containerID="33a34ca8c482f924483736fa5791eeae38cd3286d691804af3e19ac52202b81b" Mar 18 08:48:32.083961 master-0 kubenswrapper[3986]: I0318 08:48:32.083917 3986 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-fk7vj"] Mar 18 08:48:32.090426 master-0 kubenswrapper[3986]: I0318 08:48:32.090400 3986 scope.go:117] "RemoveContainer" containerID="eaab1ff96382a50237b8b272affe76c2722c73879981cd56f9e6f7b3d85937cc" Mar 18 08:48:32.102646 master-0 kubenswrapper[3986]: I0318 08:48:32.102611 3986 scope.go:117] "RemoveContainer" containerID="92b5e9f513e106f14135b5c76c6ac1e43974c3eb14ab62f94ace9906fda39d18" Mar 18 08:48:32.114184 master-0 kubenswrapper[3986]: I0318 08:48:32.114166 3986 scope.go:117] "RemoveContainer" containerID="d8a91c4a44989f4812dd7ac354de9969484381c4e7266113795f9ff2399ce5db" Mar 18 08:48:32.122733 master-0 kubenswrapper[3986]: I0318 08:48:32.122659 3986 scope.go:117] "RemoveContainer" containerID="78d1a785d7e310b8a1bf5695969aab6968f0c1f3e38248f24697d1c019bdceac" Mar 18 08:48:32.134845 master-0 kubenswrapper[3986]: I0318 08:48:32.134759 3986 scope.go:117] "RemoveContainer" containerID="97663fadd354644f1be4baa688422665bf9e5c17814bf5baea78674093569de8" Mar 18 08:48:32.142427 master-0 kubenswrapper[3986]: I0318 08:48:32.142386 3986 scope.go:117] "RemoveContainer" containerID="159d5da7df861d4690d299ee8141f1bfbf00878beec9987a371bdd22a4166912" Mar 18 08:48:32.142977 master-0 kubenswrapper[3986]: E0318 08:48:32.142950 3986 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"159d5da7df861d4690d299ee8141f1bfbf00878beec9987a371bdd22a4166912\": container with ID starting with 159d5da7df861d4690d299ee8141f1bfbf00878beec9987a371bdd22a4166912 not found: ID does not exist" 
containerID="159d5da7df861d4690d299ee8141f1bfbf00878beec9987a371bdd22a4166912" Mar 18 08:48:32.143047 master-0 kubenswrapper[3986]: I0318 08:48:32.142985 3986 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"159d5da7df861d4690d299ee8141f1bfbf00878beec9987a371bdd22a4166912"} err="failed to get container status \"159d5da7df861d4690d299ee8141f1bfbf00878beec9987a371bdd22a4166912\": rpc error: code = NotFound desc = could not find container \"159d5da7df861d4690d299ee8141f1bfbf00878beec9987a371bdd22a4166912\": container with ID starting with 159d5da7df861d4690d299ee8141f1bfbf00878beec9987a371bdd22a4166912 not found: ID does not exist" Mar 18 08:48:32.143047 master-0 kubenswrapper[3986]: I0318 08:48:32.143018 3986 scope.go:117] "RemoveContainer" containerID="8525272ae57fa1704461815bfe4fbd996a299145b716fba6e370e1c466c00d6c" Mar 18 08:48:32.143540 master-0 kubenswrapper[3986]: E0318 08:48:32.143493 3986 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8525272ae57fa1704461815bfe4fbd996a299145b716fba6e370e1c466c00d6c\": container with ID starting with 8525272ae57fa1704461815bfe4fbd996a299145b716fba6e370e1c466c00d6c not found: ID does not exist" containerID="8525272ae57fa1704461815bfe4fbd996a299145b716fba6e370e1c466c00d6c" Mar 18 08:48:32.143629 master-0 kubenswrapper[3986]: I0318 08:48:32.143580 3986 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8525272ae57fa1704461815bfe4fbd996a299145b716fba6e370e1c466c00d6c"} err="failed to get container status \"8525272ae57fa1704461815bfe4fbd996a299145b716fba6e370e1c466c00d6c\": rpc error: code = NotFound desc = could not find container \"8525272ae57fa1704461815bfe4fbd996a299145b716fba6e370e1c466c00d6c\": container with ID starting with 8525272ae57fa1704461815bfe4fbd996a299145b716fba6e370e1c466c00d6c not found: ID does not exist" Mar 18 08:48:32.143678 master-0 
kubenswrapper[3986]: I0318 08:48:32.143625 3986 scope.go:117] "RemoveContainer" containerID="8cb5653af0b594d3bf3f7fe8a3074e5cc4ad5ac373787ce8176591db02bd6f0e" Mar 18 08:48:32.144123 master-0 kubenswrapper[3986]: E0318 08:48:32.144079 3986 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8cb5653af0b594d3bf3f7fe8a3074e5cc4ad5ac373787ce8176591db02bd6f0e\": container with ID starting with 8cb5653af0b594d3bf3f7fe8a3074e5cc4ad5ac373787ce8176591db02bd6f0e not found: ID does not exist" containerID="8cb5653af0b594d3bf3f7fe8a3074e5cc4ad5ac373787ce8176591db02bd6f0e" Mar 18 08:48:32.144123 master-0 kubenswrapper[3986]: I0318 08:48:32.144110 3986 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8cb5653af0b594d3bf3f7fe8a3074e5cc4ad5ac373787ce8176591db02bd6f0e"} err="failed to get container status \"8cb5653af0b594d3bf3f7fe8a3074e5cc4ad5ac373787ce8176591db02bd6f0e\": rpc error: code = NotFound desc = could not find container \"8cb5653af0b594d3bf3f7fe8a3074e5cc4ad5ac373787ce8176591db02bd6f0e\": container with ID starting with 8cb5653af0b594d3bf3f7fe8a3074e5cc4ad5ac373787ce8176591db02bd6f0e not found: ID does not exist" Mar 18 08:48:32.144239 master-0 kubenswrapper[3986]: I0318 08:48:32.144129 3986 scope.go:117] "RemoveContainer" containerID="33a34ca8c482f924483736fa5791eeae38cd3286d691804af3e19ac52202b81b" Mar 18 08:48:32.144614 master-0 kubenswrapper[3986]: E0318 08:48:32.144545 3986 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"33a34ca8c482f924483736fa5791eeae38cd3286d691804af3e19ac52202b81b\": container with ID starting with 33a34ca8c482f924483736fa5791eeae38cd3286d691804af3e19ac52202b81b not found: ID does not exist" containerID="33a34ca8c482f924483736fa5791eeae38cd3286d691804af3e19ac52202b81b" Mar 18 08:48:32.144705 master-0 kubenswrapper[3986]: I0318 08:48:32.144621 3986 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"33a34ca8c482f924483736fa5791eeae38cd3286d691804af3e19ac52202b81b"} err="failed to get container status \"33a34ca8c482f924483736fa5791eeae38cd3286d691804af3e19ac52202b81b\": rpc error: code = NotFound desc = could not find container \"33a34ca8c482f924483736fa5791eeae38cd3286d691804af3e19ac52202b81b\": container with ID starting with 33a34ca8c482f924483736fa5791eeae38cd3286d691804af3e19ac52202b81b not found: ID does not exist" Mar 18 08:48:32.144756 master-0 kubenswrapper[3986]: I0318 08:48:32.144706 3986 scope.go:117] "RemoveContainer" containerID="eaab1ff96382a50237b8b272affe76c2722c73879981cd56f9e6f7b3d85937cc" Mar 18 08:48:32.145152 master-0 kubenswrapper[3986]: E0318 08:48:32.145119 3986 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eaab1ff96382a50237b8b272affe76c2722c73879981cd56f9e6f7b3d85937cc\": container with ID starting with eaab1ff96382a50237b8b272affe76c2722c73879981cd56f9e6f7b3d85937cc not found: ID does not exist" containerID="eaab1ff96382a50237b8b272affe76c2722c73879981cd56f9e6f7b3d85937cc" Mar 18 08:48:32.145228 master-0 kubenswrapper[3986]: I0318 08:48:32.145151 3986 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eaab1ff96382a50237b8b272affe76c2722c73879981cd56f9e6f7b3d85937cc"} err="failed to get container status \"eaab1ff96382a50237b8b272affe76c2722c73879981cd56f9e6f7b3d85937cc\": rpc error: code = NotFound desc = could not find container \"eaab1ff96382a50237b8b272affe76c2722c73879981cd56f9e6f7b3d85937cc\": container with ID starting with eaab1ff96382a50237b8b272affe76c2722c73879981cd56f9e6f7b3d85937cc not found: ID does not exist" Mar 18 08:48:32.145228 master-0 kubenswrapper[3986]: I0318 08:48:32.145171 3986 scope.go:117] "RemoveContainer" containerID="92b5e9f513e106f14135b5c76c6ac1e43974c3eb14ab62f94ace9906fda39d18" Mar 18 
08:48:32.145555 master-0 kubenswrapper[3986]: E0318 08:48:32.145519 3986 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"92b5e9f513e106f14135b5c76c6ac1e43974c3eb14ab62f94ace9906fda39d18\": container with ID starting with 92b5e9f513e106f14135b5c76c6ac1e43974c3eb14ab62f94ace9906fda39d18 not found: ID does not exist" containerID="92b5e9f513e106f14135b5c76c6ac1e43974c3eb14ab62f94ace9906fda39d18" Mar 18 08:48:32.145620 master-0 kubenswrapper[3986]: I0318 08:48:32.145564 3986 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"92b5e9f513e106f14135b5c76c6ac1e43974c3eb14ab62f94ace9906fda39d18"} err="failed to get container status \"92b5e9f513e106f14135b5c76c6ac1e43974c3eb14ab62f94ace9906fda39d18\": rpc error: code = NotFound desc = could not find container \"92b5e9f513e106f14135b5c76c6ac1e43974c3eb14ab62f94ace9906fda39d18\": container with ID starting with 92b5e9f513e106f14135b5c76c6ac1e43974c3eb14ab62f94ace9906fda39d18 not found: ID does not exist" Mar 18 08:48:32.145620 master-0 kubenswrapper[3986]: I0318 08:48:32.145596 3986 scope.go:117] "RemoveContainer" containerID="d8a91c4a44989f4812dd7ac354de9969484381c4e7266113795f9ff2399ce5db" Mar 18 08:48:32.146003 master-0 kubenswrapper[3986]: E0318 08:48:32.145966 3986 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d8a91c4a44989f4812dd7ac354de9969484381c4e7266113795f9ff2399ce5db\": container with ID starting with d8a91c4a44989f4812dd7ac354de9969484381c4e7266113795f9ff2399ce5db not found: ID does not exist" containerID="d8a91c4a44989f4812dd7ac354de9969484381c4e7266113795f9ff2399ce5db" Mar 18 08:48:32.146066 master-0 kubenswrapper[3986]: I0318 08:48:32.146015 3986 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d8a91c4a44989f4812dd7ac354de9969484381c4e7266113795f9ff2399ce5db"} err="failed 
to get container status \"d8a91c4a44989f4812dd7ac354de9969484381c4e7266113795f9ff2399ce5db\": rpc error: code = NotFound desc = could not find container \"d8a91c4a44989f4812dd7ac354de9969484381c4e7266113795f9ff2399ce5db\": container with ID starting with d8a91c4a44989f4812dd7ac354de9969484381c4e7266113795f9ff2399ce5db not found: ID does not exist" Mar 18 08:48:32.146066 master-0 kubenswrapper[3986]: I0318 08:48:32.146050 3986 scope.go:117] "RemoveContainer" containerID="78d1a785d7e310b8a1bf5695969aab6968f0c1f3e38248f24697d1c019bdceac" Mar 18 08:48:32.146521 master-0 kubenswrapper[3986]: E0318 08:48:32.146497 3986 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"78d1a785d7e310b8a1bf5695969aab6968f0c1f3e38248f24697d1c019bdceac\": container with ID starting with 78d1a785d7e310b8a1bf5695969aab6968f0c1f3e38248f24697d1c019bdceac not found: ID does not exist" containerID="78d1a785d7e310b8a1bf5695969aab6968f0c1f3e38248f24697d1c019bdceac" Mar 18 08:48:32.146571 master-0 kubenswrapper[3986]: I0318 08:48:32.146524 3986 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"78d1a785d7e310b8a1bf5695969aab6968f0c1f3e38248f24697d1c019bdceac"} err="failed to get container status \"78d1a785d7e310b8a1bf5695969aab6968f0c1f3e38248f24697d1c019bdceac\": rpc error: code = NotFound desc = could not find container \"78d1a785d7e310b8a1bf5695969aab6968f0c1f3e38248f24697d1c019bdceac\": container with ID starting with 78d1a785d7e310b8a1bf5695969aab6968f0c1f3e38248f24697d1c019bdceac not found: ID does not exist" Mar 18 08:48:32.146571 master-0 kubenswrapper[3986]: I0318 08:48:32.146546 3986 scope.go:117] "RemoveContainer" containerID="97663fadd354644f1be4baa688422665bf9e5c17814bf5baea78674093569de8" Mar 18 08:48:32.146947 master-0 kubenswrapper[3986]: E0318 08:48:32.146896 3986 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not 
find container \"97663fadd354644f1be4baa688422665bf9e5c17814bf5baea78674093569de8\": container with ID starting with 97663fadd354644f1be4baa688422665bf9e5c17814bf5baea78674093569de8 not found: ID does not exist" containerID="97663fadd354644f1be4baa688422665bf9e5c17814bf5baea78674093569de8" Mar 18 08:48:32.147011 master-0 kubenswrapper[3986]: I0318 08:48:32.146947 3986 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"97663fadd354644f1be4baa688422665bf9e5c17814bf5baea78674093569de8"} err="failed to get container status \"97663fadd354644f1be4baa688422665bf9e5c17814bf5baea78674093569de8\": rpc error: code = NotFound desc = could not find container \"97663fadd354644f1be4baa688422665bf9e5c17814bf5baea78674093569de8\": container with ID starting with 97663fadd354644f1be4baa688422665bf9e5c17814bf5baea78674093569de8 not found: ID does not exist" Mar 18 08:48:32.147011 master-0 kubenswrapper[3986]: I0318 08:48:32.146974 3986 scope.go:117] "RemoveContainer" containerID="159d5da7df861d4690d299ee8141f1bfbf00878beec9987a371bdd22a4166912" Mar 18 08:48:32.147301 master-0 kubenswrapper[3986]: I0318 08:48:32.147258 3986 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"159d5da7df861d4690d299ee8141f1bfbf00878beec9987a371bdd22a4166912"} err="failed to get container status \"159d5da7df861d4690d299ee8141f1bfbf00878beec9987a371bdd22a4166912\": rpc error: code = NotFound desc = could not find container \"159d5da7df861d4690d299ee8141f1bfbf00878beec9987a371bdd22a4166912\": container with ID starting with 159d5da7df861d4690d299ee8141f1bfbf00878beec9987a371bdd22a4166912 not found: ID does not exist" Mar 18 08:48:32.147365 master-0 kubenswrapper[3986]: I0318 08:48:32.147281 3986 scope.go:117] "RemoveContainer" containerID="8525272ae57fa1704461815bfe4fbd996a299145b716fba6e370e1c466c00d6c" Mar 18 08:48:32.147740 master-0 kubenswrapper[3986]: I0318 08:48:32.147673 3986 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8525272ae57fa1704461815bfe4fbd996a299145b716fba6e370e1c466c00d6c"} err="failed to get container status \"8525272ae57fa1704461815bfe4fbd996a299145b716fba6e370e1c466c00d6c\": rpc error: code = NotFound desc = could not find container \"8525272ae57fa1704461815bfe4fbd996a299145b716fba6e370e1c466c00d6c\": container with ID starting with 8525272ae57fa1704461815bfe4fbd996a299145b716fba6e370e1c466c00d6c not found: ID does not exist" Mar 18 08:48:32.147802 master-0 kubenswrapper[3986]: I0318 08:48:32.147746 3986 scope.go:117] "RemoveContainer" containerID="8cb5653af0b594d3bf3f7fe8a3074e5cc4ad5ac373787ce8176591db02bd6f0e" Mar 18 08:48:32.148206 master-0 kubenswrapper[3986]: I0318 08:48:32.148173 3986 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8cb5653af0b594d3bf3f7fe8a3074e5cc4ad5ac373787ce8176591db02bd6f0e"} err="failed to get container status \"8cb5653af0b594d3bf3f7fe8a3074e5cc4ad5ac373787ce8176591db02bd6f0e\": rpc error: code = NotFound desc = could not find container \"8cb5653af0b594d3bf3f7fe8a3074e5cc4ad5ac373787ce8176591db02bd6f0e\": container with ID starting with 8cb5653af0b594d3bf3f7fe8a3074e5cc4ad5ac373787ce8176591db02bd6f0e not found: ID does not exist" Mar 18 08:48:32.148206 master-0 kubenswrapper[3986]: I0318 08:48:32.148197 3986 scope.go:117] "RemoveContainer" containerID="33a34ca8c482f924483736fa5791eeae38cd3286d691804af3e19ac52202b81b" Mar 18 08:48:32.148528 master-0 kubenswrapper[3986]: I0318 08:48:32.148481 3986 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"33a34ca8c482f924483736fa5791eeae38cd3286d691804af3e19ac52202b81b"} err="failed to get container status \"33a34ca8c482f924483736fa5791eeae38cd3286d691804af3e19ac52202b81b\": rpc error: code = NotFound desc = could not find container \"33a34ca8c482f924483736fa5791eeae38cd3286d691804af3e19ac52202b81b\": container with ID starting with 
33a34ca8c482f924483736fa5791eeae38cd3286d691804af3e19ac52202b81b not found: ID does not exist" Mar 18 08:48:32.148528 master-0 kubenswrapper[3986]: I0318 08:48:32.148523 3986 scope.go:117] "RemoveContainer" containerID="eaab1ff96382a50237b8b272affe76c2722c73879981cd56f9e6f7b3d85937cc" Mar 18 08:48:32.148906 master-0 kubenswrapper[3986]: I0318 08:48:32.148881 3986 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eaab1ff96382a50237b8b272affe76c2722c73879981cd56f9e6f7b3d85937cc"} err="failed to get container status \"eaab1ff96382a50237b8b272affe76c2722c73879981cd56f9e6f7b3d85937cc\": rpc error: code = NotFound desc = could not find container \"eaab1ff96382a50237b8b272affe76c2722c73879981cd56f9e6f7b3d85937cc\": container with ID starting with eaab1ff96382a50237b8b272affe76c2722c73879981cd56f9e6f7b3d85937cc not found: ID does not exist" Mar 18 08:48:32.148906 master-0 kubenswrapper[3986]: I0318 08:48:32.148904 3986 scope.go:117] "RemoveContainer" containerID="92b5e9f513e106f14135b5c76c6ac1e43974c3eb14ab62f94ace9906fda39d18" Mar 18 08:48:32.149316 master-0 kubenswrapper[3986]: I0318 08:48:32.149273 3986 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"92b5e9f513e106f14135b5c76c6ac1e43974c3eb14ab62f94ace9906fda39d18"} err="failed to get container status \"92b5e9f513e106f14135b5c76c6ac1e43974c3eb14ab62f94ace9906fda39d18\": rpc error: code = NotFound desc = could not find container \"92b5e9f513e106f14135b5c76c6ac1e43974c3eb14ab62f94ace9906fda39d18\": container with ID starting with 92b5e9f513e106f14135b5c76c6ac1e43974c3eb14ab62f94ace9906fda39d18 not found: ID does not exist" Mar 18 08:48:32.149316 master-0 kubenswrapper[3986]: I0318 08:48:32.149305 3986 scope.go:117] "RemoveContainer" containerID="d8a91c4a44989f4812dd7ac354de9969484381c4e7266113795f9ff2399ce5db" Mar 18 08:48:32.149608 master-0 kubenswrapper[3986]: I0318 08:48:32.149577 3986 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d8a91c4a44989f4812dd7ac354de9969484381c4e7266113795f9ff2399ce5db"} err="failed to get container status \"d8a91c4a44989f4812dd7ac354de9969484381c4e7266113795f9ff2399ce5db\": rpc error: code = NotFound desc = could not find container \"d8a91c4a44989f4812dd7ac354de9969484381c4e7266113795f9ff2399ce5db\": container with ID starting with d8a91c4a44989f4812dd7ac354de9969484381c4e7266113795f9ff2399ce5db not found: ID does not exist" Mar 18 08:48:32.149608 master-0 kubenswrapper[3986]: I0318 08:48:32.149599 3986 scope.go:117] "RemoveContainer" containerID="78d1a785d7e310b8a1bf5695969aab6968f0c1f3e38248f24697d1c019bdceac" Mar 18 08:48:32.149953 master-0 kubenswrapper[3986]: I0318 08:48:32.149908 3986 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"78d1a785d7e310b8a1bf5695969aab6968f0c1f3e38248f24697d1c019bdceac"} err="failed to get container status \"78d1a785d7e310b8a1bf5695969aab6968f0c1f3e38248f24697d1c019bdceac\": rpc error: code = NotFound desc = could not find container \"78d1a785d7e310b8a1bf5695969aab6968f0c1f3e38248f24697d1c019bdceac\": container with ID starting with 78d1a785d7e310b8a1bf5695969aab6968f0c1f3e38248f24697d1c019bdceac not found: ID does not exist" Mar 18 08:48:32.149953 master-0 kubenswrapper[3986]: I0318 08:48:32.149945 3986 scope.go:117] "RemoveContainer" containerID="97663fadd354644f1be4baa688422665bf9e5c17814bf5baea78674093569de8" Mar 18 08:48:32.150354 master-0 kubenswrapper[3986]: I0318 08:48:32.150310 3986 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"97663fadd354644f1be4baa688422665bf9e5c17814bf5baea78674093569de8"} err="failed to get container status \"97663fadd354644f1be4baa688422665bf9e5c17814bf5baea78674093569de8\": rpc error: code = NotFound desc = could not find container \"97663fadd354644f1be4baa688422665bf9e5c17814bf5baea78674093569de8\": container with ID starting with 
97663fadd354644f1be4baa688422665bf9e5c17814bf5baea78674093569de8 not found: ID does not exist" Mar 18 08:48:32.150399 master-0 kubenswrapper[3986]: I0318 08:48:32.150351 3986 scope.go:117] "RemoveContainer" containerID="159d5da7df861d4690d299ee8141f1bfbf00878beec9987a371bdd22a4166912" Mar 18 08:48:32.150798 master-0 kubenswrapper[3986]: I0318 08:48:32.150738 3986 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"159d5da7df861d4690d299ee8141f1bfbf00878beec9987a371bdd22a4166912"} err="failed to get container status \"159d5da7df861d4690d299ee8141f1bfbf00878beec9987a371bdd22a4166912\": rpc error: code = NotFound desc = could not find container \"159d5da7df861d4690d299ee8141f1bfbf00878beec9987a371bdd22a4166912\": container with ID starting with 159d5da7df861d4690d299ee8141f1bfbf00878beec9987a371bdd22a4166912 not found: ID does not exist" Mar 18 08:48:32.150872 master-0 kubenswrapper[3986]: I0318 08:48:32.150811 3986 scope.go:117] "RemoveContainer" containerID="8525272ae57fa1704461815bfe4fbd996a299145b716fba6e370e1c466c00d6c" Mar 18 08:48:32.151179 master-0 kubenswrapper[3986]: I0318 08:48:32.151134 3986 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8525272ae57fa1704461815bfe4fbd996a299145b716fba6e370e1c466c00d6c"} err="failed to get container status \"8525272ae57fa1704461815bfe4fbd996a299145b716fba6e370e1c466c00d6c\": rpc error: code = NotFound desc = could not find container \"8525272ae57fa1704461815bfe4fbd996a299145b716fba6e370e1c466c00d6c\": container with ID starting with 8525272ae57fa1704461815bfe4fbd996a299145b716fba6e370e1c466c00d6c not found: ID does not exist" Mar 18 08:48:32.151179 master-0 kubenswrapper[3986]: I0318 08:48:32.151168 3986 scope.go:117] "RemoveContainer" containerID="8cb5653af0b594d3bf3f7fe8a3074e5cc4ad5ac373787ce8176591db02bd6f0e" Mar 18 08:48:32.151590 master-0 kubenswrapper[3986]: I0318 08:48:32.151537 3986 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8cb5653af0b594d3bf3f7fe8a3074e5cc4ad5ac373787ce8176591db02bd6f0e"} err="failed to get container status \"8cb5653af0b594d3bf3f7fe8a3074e5cc4ad5ac373787ce8176591db02bd6f0e\": rpc error: code = NotFound desc = could not find container \"8cb5653af0b594d3bf3f7fe8a3074e5cc4ad5ac373787ce8176591db02bd6f0e\": container with ID starting with 8cb5653af0b594d3bf3f7fe8a3074e5cc4ad5ac373787ce8176591db02bd6f0e not found: ID does not exist" Mar 18 08:48:32.151641 master-0 kubenswrapper[3986]: I0318 08:48:32.151610 3986 scope.go:117] "RemoveContainer" containerID="33a34ca8c482f924483736fa5791eeae38cd3286d691804af3e19ac52202b81b" Mar 18 08:48:32.151991 master-0 kubenswrapper[3986]: I0318 08:48:32.151954 3986 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"33a34ca8c482f924483736fa5791eeae38cd3286d691804af3e19ac52202b81b"} err="failed to get container status \"33a34ca8c482f924483736fa5791eeae38cd3286d691804af3e19ac52202b81b\": rpc error: code = NotFound desc = could not find container \"33a34ca8c482f924483736fa5791eeae38cd3286d691804af3e19ac52202b81b\": container with ID starting with 33a34ca8c482f924483736fa5791eeae38cd3286d691804af3e19ac52202b81b not found: ID does not exist" Mar 18 08:48:32.151991 master-0 kubenswrapper[3986]: I0318 08:48:32.151983 3986 scope.go:117] "RemoveContainer" containerID="eaab1ff96382a50237b8b272affe76c2722c73879981cd56f9e6f7b3d85937cc" Mar 18 08:48:32.152328 master-0 kubenswrapper[3986]: I0318 08:48:32.152288 3986 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eaab1ff96382a50237b8b272affe76c2722c73879981cd56f9e6f7b3d85937cc"} err="failed to get container status \"eaab1ff96382a50237b8b272affe76c2722c73879981cd56f9e6f7b3d85937cc\": rpc error: code = NotFound desc = could not find container \"eaab1ff96382a50237b8b272affe76c2722c73879981cd56f9e6f7b3d85937cc\": container with ID starting with 
eaab1ff96382a50237b8b272affe76c2722c73879981cd56f9e6f7b3d85937cc not found: ID does not exist" Mar 18 08:48:32.152381 master-0 kubenswrapper[3986]: I0318 08:48:32.152327 3986 scope.go:117] "RemoveContainer" containerID="92b5e9f513e106f14135b5c76c6ac1e43974c3eb14ab62f94ace9906fda39d18" Mar 18 08:48:32.152631 master-0 kubenswrapper[3986]: I0318 08:48:32.152585 3986 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"92b5e9f513e106f14135b5c76c6ac1e43974c3eb14ab62f94ace9906fda39d18"} err="failed to get container status \"92b5e9f513e106f14135b5c76c6ac1e43974c3eb14ab62f94ace9906fda39d18\": rpc error: code = NotFound desc = could not find container \"92b5e9f513e106f14135b5c76c6ac1e43974c3eb14ab62f94ace9906fda39d18\": container with ID starting with 92b5e9f513e106f14135b5c76c6ac1e43974c3eb14ab62f94ace9906fda39d18 not found: ID does not exist" Mar 18 08:48:32.152631 master-0 kubenswrapper[3986]: I0318 08:48:32.152625 3986 scope.go:117] "RemoveContainer" containerID="d8a91c4a44989f4812dd7ac354de9969484381c4e7266113795f9ff2399ce5db" Mar 18 08:48:32.153061 master-0 kubenswrapper[3986]: I0318 08:48:32.153011 3986 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d8a91c4a44989f4812dd7ac354de9969484381c4e7266113795f9ff2399ce5db"} err="failed to get container status \"d8a91c4a44989f4812dd7ac354de9969484381c4e7266113795f9ff2399ce5db\": rpc error: code = NotFound desc = could not find container \"d8a91c4a44989f4812dd7ac354de9969484381c4e7266113795f9ff2399ce5db\": container with ID starting with d8a91c4a44989f4812dd7ac354de9969484381c4e7266113795f9ff2399ce5db not found: ID does not exist" Mar 18 08:48:32.153119 master-0 kubenswrapper[3986]: I0318 08:48:32.153084 3986 scope.go:117] "RemoveContainer" containerID="78d1a785d7e310b8a1bf5695969aab6968f0c1f3e38248f24697d1c019bdceac" Mar 18 08:48:32.153502 master-0 kubenswrapper[3986]: I0318 08:48:32.153456 3986 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"78d1a785d7e310b8a1bf5695969aab6968f0c1f3e38248f24697d1c019bdceac"} err="failed to get container status \"78d1a785d7e310b8a1bf5695969aab6968f0c1f3e38248f24697d1c019bdceac\": rpc error: code = NotFound desc = could not find container \"78d1a785d7e310b8a1bf5695969aab6968f0c1f3e38248f24697d1c019bdceac\": container with ID starting with 78d1a785d7e310b8a1bf5695969aab6968f0c1f3e38248f24697d1c019bdceac not found: ID does not exist" Mar 18 08:48:32.153502 master-0 kubenswrapper[3986]: I0318 08:48:32.153496 3986 scope.go:117] "RemoveContainer" containerID="97663fadd354644f1be4baa688422665bf9e5c17814bf5baea78674093569de8" Mar 18 08:48:32.153910 master-0 kubenswrapper[3986]: I0318 08:48:32.153882 3986 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"97663fadd354644f1be4baa688422665bf9e5c17814bf5baea78674093569de8"} err="failed to get container status \"97663fadd354644f1be4baa688422665bf9e5c17814bf5baea78674093569de8\": rpc error: code = NotFound desc = could not find container \"97663fadd354644f1be4baa688422665bf9e5c17814bf5baea78674093569de8\": container with ID starting with 97663fadd354644f1be4baa688422665bf9e5c17814bf5baea78674093569de8 not found: ID does not exist" Mar 18 08:48:32.153910 master-0 kubenswrapper[3986]: I0318 08:48:32.153903 3986 scope.go:117] "RemoveContainer" containerID="159d5da7df861d4690d299ee8141f1bfbf00878beec9987a371bdd22a4166912" Mar 18 08:48:32.154199 master-0 kubenswrapper[3986]: I0318 08:48:32.154170 3986 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"159d5da7df861d4690d299ee8141f1bfbf00878beec9987a371bdd22a4166912"} err="failed to get container status \"159d5da7df861d4690d299ee8141f1bfbf00878beec9987a371bdd22a4166912\": rpc error: code = NotFound desc = could not find container \"159d5da7df861d4690d299ee8141f1bfbf00878beec9987a371bdd22a4166912\": container with ID starting with 
159d5da7df861d4690d299ee8141f1bfbf00878beec9987a371bdd22a4166912 not found: ID does not exist" Mar 18 08:48:32.154199 master-0 kubenswrapper[3986]: I0318 08:48:32.154189 3986 scope.go:117] "RemoveContainer" containerID="8525272ae57fa1704461815bfe4fbd996a299145b716fba6e370e1c466c00d6c" Mar 18 08:48:32.154578 master-0 kubenswrapper[3986]: I0318 08:48:32.154525 3986 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8525272ae57fa1704461815bfe4fbd996a299145b716fba6e370e1c466c00d6c"} err="failed to get container status \"8525272ae57fa1704461815bfe4fbd996a299145b716fba6e370e1c466c00d6c\": rpc error: code = NotFound desc = could not find container \"8525272ae57fa1704461815bfe4fbd996a299145b716fba6e370e1c466c00d6c\": container with ID starting with 8525272ae57fa1704461815bfe4fbd996a299145b716fba6e370e1c466c00d6c not found: ID does not exist" Mar 18 08:48:32.154641 master-0 kubenswrapper[3986]: I0318 08:48:32.154596 3986 scope.go:117] "RemoveContainer" containerID="8cb5653af0b594d3bf3f7fe8a3074e5cc4ad5ac373787ce8176591db02bd6f0e" Mar 18 08:48:32.154983 master-0 kubenswrapper[3986]: I0318 08:48:32.154948 3986 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8cb5653af0b594d3bf3f7fe8a3074e5cc4ad5ac373787ce8176591db02bd6f0e"} err="failed to get container status \"8cb5653af0b594d3bf3f7fe8a3074e5cc4ad5ac373787ce8176591db02bd6f0e\": rpc error: code = NotFound desc = could not find container \"8cb5653af0b594d3bf3f7fe8a3074e5cc4ad5ac373787ce8176591db02bd6f0e\": container with ID starting with 8cb5653af0b594d3bf3f7fe8a3074e5cc4ad5ac373787ce8176591db02bd6f0e not found: ID does not exist" Mar 18 08:48:32.154983 master-0 kubenswrapper[3986]: I0318 08:48:32.154973 3986 scope.go:117] "RemoveContainer" containerID="33a34ca8c482f924483736fa5791eeae38cd3286d691804af3e19ac52202b81b" Mar 18 08:48:32.155266 master-0 kubenswrapper[3986]: I0318 08:48:32.155219 3986 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"33a34ca8c482f924483736fa5791eeae38cd3286d691804af3e19ac52202b81b"} err="failed to get container status \"33a34ca8c482f924483736fa5791eeae38cd3286d691804af3e19ac52202b81b\": rpc error: code = NotFound desc = could not find container \"33a34ca8c482f924483736fa5791eeae38cd3286d691804af3e19ac52202b81b\": container with ID starting with 33a34ca8c482f924483736fa5791eeae38cd3286d691804af3e19ac52202b81b not found: ID does not exist" Mar 18 08:48:32.155266 master-0 kubenswrapper[3986]: I0318 08:48:32.155257 3986 scope.go:117] "RemoveContainer" containerID="eaab1ff96382a50237b8b272affe76c2722c73879981cd56f9e6f7b3d85937cc" Mar 18 08:48:32.155683 master-0 kubenswrapper[3986]: I0318 08:48:32.155637 3986 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eaab1ff96382a50237b8b272affe76c2722c73879981cd56f9e6f7b3d85937cc"} err="failed to get container status \"eaab1ff96382a50237b8b272affe76c2722c73879981cd56f9e6f7b3d85937cc\": rpc error: code = NotFound desc = could not find container \"eaab1ff96382a50237b8b272affe76c2722c73879981cd56f9e6f7b3d85937cc\": container with ID starting with eaab1ff96382a50237b8b272affe76c2722c73879981cd56f9e6f7b3d85937cc not found: ID does not exist" Mar 18 08:48:32.155740 master-0 kubenswrapper[3986]: I0318 08:48:32.155679 3986 scope.go:117] "RemoveContainer" containerID="92b5e9f513e106f14135b5c76c6ac1e43974c3eb14ab62f94ace9906fda39d18" Mar 18 08:48:32.156202 master-0 kubenswrapper[3986]: I0318 08:48:32.156170 3986 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"92b5e9f513e106f14135b5c76c6ac1e43974c3eb14ab62f94ace9906fda39d18"} err="failed to get container status \"92b5e9f513e106f14135b5c76c6ac1e43974c3eb14ab62f94ace9906fda39d18\": rpc error: code = NotFound desc = could not find container \"92b5e9f513e106f14135b5c76c6ac1e43974c3eb14ab62f94ace9906fda39d18\": container with ID starting with 
92b5e9f513e106f14135b5c76c6ac1e43974c3eb14ab62f94ace9906fda39d18 not found: ID does not exist" Mar 18 08:48:32.156202 master-0 kubenswrapper[3986]: I0318 08:48:32.156190 3986 scope.go:117] "RemoveContainer" containerID="d8a91c4a44989f4812dd7ac354de9969484381c4e7266113795f9ff2399ce5db" Mar 18 08:48:32.156522 master-0 kubenswrapper[3986]: I0318 08:48:32.156482 3986 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d8a91c4a44989f4812dd7ac354de9969484381c4e7266113795f9ff2399ce5db"} err="failed to get container status \"d8a91c4a44989f4812dd7ac354de9969484381c4e7266113795f9ff2399ce5db\": rpc error: code = NotFound desc = could not find container \"d8a91c4a44989f4812dd7ac354de9969484381c4e7266113795f9ff2399ce5db\": container with ID starting with d8a91c4a44989f4812dd7ac354de9969484381c4e7266113795f9ff2399ce5db not found: ID does not exist" Mar 18 08:48:32.156522 master-0 kubenswrapper[3986]: I0318 08:48:32.156518 3986 scope.go:117] "RemoveContainer" containerID="78d1a785d7e310b8a1bf5695969aab6968f0c1f3e38248f24697d1c019bdceac" Mar 18 08:48:32.156994 master-0 kubenswrapper[3986]: I0318 08:48:32.156962 3986 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"78d1a785d7e310b8a1bf5695969aab6968f0c1f3e38248f24697d1c019bdceac"} err="failed to get container status \"78d1a785d7e310b8a1bf5695969aab6968f0c1f3e38248f24697d1c019bdceac\": rpc error: code = NotFound desc = could not find container \"78d1a785d7e310b8a1bf5695969aab6968f0c1f3e38248f24697d1c019bdceac\": container with ID starting with 78d1a785d7e310b8a1bf5695969aab6968f0c1f3e38248f24697d1c019bdceac not found: ID does not exist" Mar 18 08:48:32.156994 master-0 kubenswrapper[3986]: I0318 08:48:32.156983 3986 scope.go:117] "RemoveContainer" containerID="97663fadd354644f1be4baa688422665bf9e5c17814bf5baea78674093569de8" Mar 18 08:48:32.157407 master-0 kubenswrapper[3986]: I0318 08:48:32.157324 3986 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"97663fadd354644f1be4baa688422665bf9e5c17814bf5baea78674093569de8"} err="failed to get container status \"97663fadd354644f1be4baa688422665bf9e5c17814bf5baea78674093569de8\": rpc error: code = NotFound desc = could not find container \"97663fadd354644f1be4baa688422665bf9e5c17814bf5baea78674093569de8\": container with ID starting with 97663fadd354644f1be4baa688422665bf9e5c17814bf5baea78674093569de8 not found: ID does not exist" Mar 18 08:48:32.157407 master-0 kubenswrapper[3986]: I0318 08:48:32.157400 3986 scope.go:117] "RemoveContainer" containerID="159d5da7df861d4690d299ee8141f1bfbf00878beec9987a371bdd22a4166912" Mar 18 08:48:32.157810 master-0 kubenswrapper[3986]: I0318 08:48:32.157779 3986 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"159d5da7df861d4690d299ee8141f1bfbf00878beec9987a371bdd22a4166912"} err="failed to get container status \"159d5da7df861d4690d299ee8141f1bfbf00878beec9987a371bdd22a4166912\": rpc error: code = NotFound desc = could not find container \"159d5da7df861d4690d299ee8141f1bfbf00878beec9987a371bdd22a4166912\": container with ID starting with 159d5da7df861d4690d299ee8141f1bfbf00878beec9987a371bdd22a4166912 not found: ID does not exist" Mar 18 08:48:32.157810 master-0 kubenswrapper[3986]: I0318 08:48:32.157801 3986 scope.go:117] "RemoveContainer" containerID="8525272ae57fa1704461815bfe4fbd996a299145b716fba6e370e1c466c00d6c" Mar 18 08:48:32.158335 master-0 kubenswrapper[3986]: I0318 08:48:32.158258 3986 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8525272ae57fa1704461815bfe4fbd996a299145b716fba6e370e1c466c00d6c"} err="failed to get container status \"8525272ae57fa1704461815bfe4fbd996a299145b716fba6e370e1c466c00d6c\": rpc error: code = NotFound desc = could not find container \"8525272ae57fa1704461815bfe4fbd996a299145b716fba6e370e1c466c00d6c\": container with ID starting with 
8525272ae57fa1704461815bfe4fbd996a299145b716fba6e370e1c466c00d6c not found: ID does not exist" Mar 18 08:48:32.158390 master-0 kubenswrapper[3986]: I0318 08:48:32.158331 3986 scope.go:117] "RemoveContainer" containerID="8cb5653af0b594d3bf3f7fe8a3074e5cc4ad5ac373787ce8176591db02bd6f0e" Mar 18 08:48:32.158732 master-0 kubenswrapper[3986]: I0318 08:48:32.158700 3986 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8cb5653af0b594d3bf3f7fe8a3074e5cc4ad5ac373787ce8176591db02bd6f0e"} err="failed to get container status \"8cb5653af0b594d3bf3f7fe8a3074e5cc4ad5ac373787ce8176591db02bd6f0e\": rpc error: code = NotFound desc = could not find container \"8cb5653af0b594d3bf3f7fe8a3074e5cc4ad5ac373787ce8176591db02bd6f0e\": container with ID starting with 8cb5653af0b594d3bf3f7fe8a3074e5cc4ad5ac373787ce8176591db02bd6f0e not found: ID does not exist" Mar 18 08:48:32.158732 master-0 kubenswrapper[3986]: I0318 08:48:32.158723 3986 scope.go:117] "RemoveContainer" containerID="33a34ca8c482f924483736fa5791eeae38cd3286d691804af3e19ac52202b81b" Mar 18 08:48:32.159143 master-0 kubenswrapper[3986]: I0318 08:48:32.159095 3986 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"33a34ca8c482f924483736fa5791eeae38cd3286d691804af3e19ac52202b81b"} err="failed to get container status \"33a34ca8c482f924483736fa5791eeae38cd3286d691804af3e19ac52202b81b\": rpc error: code = NotFound desc = could not find container \"33a34ca8c482f924483736fa5791eeae38cd3286d691804af3e19ac52202b81b\": container with ID starting with 33a34ca8c482f924483736fa5791eeae38cd3286d691804af3e19ac52202b81b not found: ID does not exist" Mar 18 08:48:32.159201 master-0 kubenswrapper[3986]: I0318 08:48:32.159165 3986 scope.go:117] "RemoveContainer" containerID="eaab1ff96382a50237b8b272affe76c2722c73879981cd56f9e6f7b3d85937cc" Mar 18 08:48:32.159591 master-0 kubenswrapper[3986]: I0318 08:48:32.159512 3986 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eaab1ff96382a50237b8b272affe76c2722c73879981cd56f9e6f7b3d85937cc"} err="failed to get container status \"eaab1ff96382a50237b8b272affe76c2722c73879981cd56f9e6f7b3d85937cc\": rpc error: code = NotFound desc = could not find container \"eaab1ff96382a50237b8b272affe76c2722c73879981cd56f9e6f7b3d85937cc\": container with ID starting with eaab1ff96382a50237b8b272affe76c2722c73879981cd56f9e6f7b3d85937cc not found: ID does not exist" Mar 18 08:48:32.159591 master-0 kubenswrapper[3986]: I0318 08:48:32.159582 3986 scope.go:117] "RemoveContainer" containerID="92b5e9f513e106f14135b5c76c6ac1e43974c3eb14ab62f94ace9906fda39d18" Mar 18 08:48:32.159924 master-0 kubenswrapper[3986]: I0318 08:48:32.159895 3986 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"92b5e9f513e106f14135b5c76c6ac1e43974c3eb14ab62f94ace9906fda39d18"} err="failed to get container status \"92b5e9f513e106f14135b5c76c6ac1e43974c3eb14ab62f94ace9906fda39d18\": rpc error: code = NotFound desc = could not find container \"92b5e9f513e106f14135b5c76c6ac1e43974c3eb14ab62f94ace9906fda39d18\": container with ID starting with 92b5e9f513e106f14135b5c76c6ac1e43974c3eb14ab62f94ace9906fda39d18 not found: ID does not exist" Mar 18 08:48:32.221650 master-0 kubenswrapper[3986]: I0318 08:48:32.221605 3986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-cxws9" Mar 18 08:48:32.239307 master-0 kubenswrapper[3986]: W0318 08:48:32.239269 3986 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2207df9e_f21e_4c30_98d5_248ae99c245e.slice/crio-0152b496baa88626f806c2cd8158beac6c11d9696ef03e334ab29bac73c88cbe WatchSource:0}: Error finding container 0152b496baa88626f806c2cd8158beac6c11d9696ef03e334ab29bac73c88cbe: Status 404 returned error can't find the container with id 0152b496baa88626f806c2cd8158beac6c11d9696ef03e334ab29bac73c88cbe Mar 18 08:48:32.427516 master-0 kubenswrapper[3986]: I0318 08:48:32.427411 3986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-8b7l7" Mar 18 08:48:32.427711 master-0 kubenswrapper[3986]: I0318 08:48:32.427450 3986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6x85n" Mar 18 08:48:32.427711 master-0 kubenswrapper[3986]: E0318 08:48:32.427652 3986 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-8b7l7" podUID="fc289a83-9a2e-404b-b148-605639362703" Mar 18 08:48:32.427848 master-0 kubenswrapper[3986]: E0318 08:48:32.427806 3986 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-6x85n" podUID="d858bfd6-e69b-4c93-a6d7-95cc0fc3ca29" Mar 18 08:48:33.025209 master-0 kubenswrapper[3986]: I0318 08:48:33.025082 3986 generic.go:334] "Generic (PLEG): container finished" podID="2207df9e-f21e-4c30-98d5-248ae99c245e" containerID="4ab7ce18ff8c455a08cc88d97fdc9cc8dc555138a8a11da35cc907f8c6e70d0d" exitCode=0 Mar 18 08:48:33.025209 master-0 kubenswrapper[3986]: I0318 08:48:33.025178 3986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cxws9" event={"ID":"2207df9e-f21e-4c30-98d5-248ae99c245e","Type":"ContainerDied","Data":"4ab7ce18ff8c455a08cc88d97fdc9cc8dc555138a8a11da35cc907f8c6e70d0d"} Mar 18 08:48:33.026124 master-0 kubenswrapper[3986]: I0318 08:48:33.025252 3986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cxws9" event={"ID":"2207df9e-f21e-4c30-98d5-248ae99c245e","Type":"ContainerStarted","Data":"0152b496baa88626f806c2cd8158beac6c11d9696ef03e334ab29bac73c88cbe"} Mar 18 08:48:33.435472 master-0 kubenswrapper[3986]: I0318 08:48:33.435407 3986 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4" path="/var/lib/kubelet/pods/824d7ce9-e7bd-41ba-b7b1-1811e0f0dec4/volumes" Mar 18 08:48:34.033424 master-0 kubenswrapper[3986]: I0318 08:48:34.033069 3986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cxws9" event={"ID":"2207df9e-f21e-4c30-98d5-248ae99c245e","Type":"ContainerStarted","Data":"fe0e8df5caf935c0354f77f32e03f399bf0360ab5c5def16aeb7806cf4e2c57f"} Mar 18 08:48:34.033424 master-0 kubenswrapper[3986]: I0318 08:48:34.033423 3986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cxws9" event={"ID":"2207df9e-f21e-4c30-98d5-248ae99c245e","Type":"ContainerStarted","Data":"1788a124afd935c8a235929b8e8aa8ab299b969bfc11fed35bb17436438108e4"} Mar 18 08:48:34.034129 master-0 
kubenswrapper[3986]: I0318 08:48:34.033438 3986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cxws9" event={"ID":"2207df9e-f21e-4c30-98d5-248ae99c245e","Type":"ContainerStarted","Data":"1c4735e84b7e6475561f64d36620af8b565a1cb2e3f20583ba9b3ddc1c8c1052"} Mar 18 08:48:34.034129 master-0 kubenswrapper[3986]: I0318 08:48:34.033450 3986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cxws9" event={"ID":"2207df9e-f21e-4c30-98d5-248ae99c245e","Type":"ContainerStarted","Data":"07a8c689da8bb6cdf48edc729fcd2bb1ee618586ef0884e28fde4046be6eea62"} Mar 18 08:48:34.034129 master-0 kubenswrapper[3986]: I0318 08:48:34.033461 3986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cxws9" event={"ID":"2207df9e-f21e-4c30-98d5-248ae99c245e","Type":"ContainerStarted","Data":"807ad30b005ef1095f8696f4e79b1f3c0b489347cc2294e63123b4fd5c00a931"} Mar 18 08:48:34.034129 master-0 kubenswrapper[3986]: I0318 08:48:34.033472 3986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cxws9" event={"ID":"2207df9e-f21e-4c30-98d5-248ae99c245e","Type":"ContainerStarted","Data":"0ca31822d490e594e12abc25c53e0bf7bffedf150c78ba7a10182d9619db86ee"} Mar 18 08:48:34.426728 master-0 kubenswrapper[3986]: I0318 08:48:34.426669 3986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6x85n" Mar 18 08:48:34.426951 master-0 kubenswrapper[3986]: I0318 08:48:34.426669 3986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-8b7l7" Mar 18 08:48:34.426951 master-0 kubenswrapper[3986]: E0318 08:48:34.426826 3986 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6x85n" podUID="d858bfd6-e69b-4c93-a6d7-95cc0fc3ca29" Mar 18 08:48:34.427019 master-0 kubenswrapper[3986]: E0318 08:48:34.426953 3986 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-8b7l7" podUID="fc289a83-9a2e-404b-b148-605639362703" Mar 18 08:48:34.605097 master-0 kubenswrapper[3986]: E0318 08:48:34.604965 3986 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Mar 18 08:48:35.245393 master-0 kubenswrapper[3986]: I0318 08:48:35.245303 3986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3d0b7f60-c32e-48a6-b9e9-87c8f018367d-serving-cert\") pod \"cluster-version-operator-56d8475767-2xjqg\" (UID: \"3d0b7f60-c32e-48a6-b9e9-87c8f018367d\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-2xjqg" Mar 18 08:48:35.246439 master-0 kubenswrapper[3986]: E0318 08:48:35.245523 3986 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found Mar 18 08:48:35.246439 master-0 kubenswrapper[3986]: E0318 08:48:35.245635 3986 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3d0b7f60-c32e-48a6-b9e9-87c8f018367d-serving-cert podName:3d0b7f60-c32e-48a6-b9e9-87c8f018367d nodeName:}" failed. No retries permitted until 2026-03-18 08:49:39.245612701 +0000 UTC m=+210.652782823 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/3d0b7f60-c32e-48a6-b9e9-87c8f018367d-serving-cert") pod "cluster-version-operator-56d8475767-2xjqg" (UID: "3d0b7f60-c32e-48a6-b9e9-87c8f018367d") : secret "cluster-version-operator-serving-cert" not found Mar 18 08:48:36.426726 master-0 kubenswrapper[3986]: I0318 08:48:36.426567 3986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6x85n" Mar 18 08:48:36.426726 master-0 kubenswrapper[3986]: I0318 08:48:36.426630 3986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-8b7l7" Mar 18 08:48:36.427330 master-0 kubenswrapper[3986]: E0318 08:48:36.426757 3986 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6x85n" podUID="d858bfd6-e69b-4c93-a6d7-95cc0fc3ca29" Mar 18 08:48:36.427330 master-0 kubenswrapper[3986]: E0318 08:48:36.426911 3986 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-8b7l7" podUID="fc289a83-9a2e-404b-b148-605639362703" Mar 18 08:48:37.052229 master-0 kubenswrapper[3986]: I0318 08:48:37.052146 3986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cxws9" event={"ID":"2207df9e-f21e-4c30-98d5-248ae99c245e","Type":"ContainerStarted","Data":"680a37e08f36dfbf781300f9a43175fe1c9286fe16b4eaa16427d6125f60c662"} Mar 18 08:48:37.362926 master-0 kubenswrapper[3986]: I0318 08:48:37.362619 3986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l7lrl\" (UniqueName: \"kubernetes.io/projected/fc289a83-9a2e-404b-b148-605639362703-kube-api-access-l7lrl\") pod \"network-check-target-8b7l7\" (UID: \"fc289a83-9a2e-404b-b148-605639362703\") " pod="openshift-network-diagnostics/network-check-target-8b7l7" Mar 18 08:48:37.363217 master-0 kubenswrapper[3986]: E0318 08:48:37.362967 3986 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object 
"openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 18 08:48:37.363217 master-0 kubenswrapper[3986]: E0318 08:48:37.363001 3986 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 18 08:48:37.363217 master-0 kubenswrapper[3986]: E0318 08:48:37.363021 3986 projected.go:194] Error preparing data for projected volume kube-api-access-l7lrl for pod openshift-network-diagnostics/network-check-target-8b7l7: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 18 08:48:37.363217 master-0 kubenswrapper[3986]: E0318 08:48:37.363117 3986 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/fc289a83-9a2e-404b-b148-605639362703-kube-api-access-l7lrl podName:fc289a83-9a2e-404b-b148-605639362703 nodeName:}" failed. No retries permitted until 2026-03-18 08:49:09.363089487 +0000 UTC m=+180.770259609 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-l7lrl" (UniqueName: "kubernetes.io/projected/fc289a83-9a2e-404b-b148-605639362703-kube-api-access-l7lrl") pod "network-check-target-8b7l7" (UID: "fc289a83-9a2e-404b-b148-605639362703") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 18 08:48:38.427748 master-0 kubenswrapper[3986]: I0318 08:48:38.427709 3986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-6x85n" Mar 18 08:48:38.428102 master-0 kubenswrapper[3986]: E0318 08:48:38.427923 3986 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6x85n" podUID="d858bfd6-e69b-4c93-a6d7-95cc0fc3ca29" Mar 18 08:48:38.428246 master-0 kubenswrapper[3986]: I0318 08:48:38.428185 3986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-8b7l7" Mar 18 08:48:38.428291 master-0 kubenswrapper[3986]: E0318 08:48:38.428265 3986 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-8b7l7" podUID="fc289a83-9a2e-404b-b148-605639362703" Mar 18 08:48:39.063076 master-0 kubenswrapper[3986]: I0318 08:48:39.062105 3986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cxws9" event={"ID":"2207df9e-f21e-4c30-98d5-248ae99c245e","Type":"ContainerStarted","Data":"1f84edfbe28897c00a1b673fa68978c1d7fc95836dbe24ff309a4f9d0f05efd7"} Mar 18 08:48:39.063076 master-0 kubenswrapper[3986]: I0318 08:48:39.062464 3986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-cxws9" Mar 18 08:48:39.063076 master-0 kubenswrapper[3986]: I0318 08:48:39.062523 3986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-cxws9" Mar 18 08:48:39.085170 master-0 kubenswrapper[3986]: I0318 08:48:39.085082 3986 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-cxws9" podStartSLOduration=8.085062048 podStartE2EDuration="8.085062048s" podCreationTimestamp="2026-03-18 08:48:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 08:48:39.084648575 +0000 UTC m=+150.491818677" watchObservedRunningTime="2026-03-18 08:48:39.085062048 +0000 UTC m=+150.492232130" Mar 18 08:48:39.089350 master-0 kubenswrapper[3986]: I0318 08:48:39.089314 3986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-cxws9" Mar 18 08:48:39.605645 master-0 kubenswrapper[3986]: E0318 08:48:39.605519 3986 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Mar 18 08:48:40.066359 master-0 kubenswrapper[3986]: I0318 08:48:40.066235 3986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-cxws9" Mar 18 08:48:40.147015 master-0 kubenswrapper[3986]: I0318 08:48:40.146942 3986 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-cxws9" Mar 18 08:48:40.427443 master-0 kubenswrapper[3986]: I0318 08:48:40.426791 3986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-8b7l7" Mar 18 08:48:40.427670 master-0 kubenswrapper[3986]: I0318 08:48:40.427181 3986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6x85n" Mar 18 08:48:40.427670 master-0 kubenswrapper[3986]: E0318 08:48:40.427536 3986 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-8b7l7" podUID="fc289a83-9a2e-404b-b148-605639362703" Mar 18 08:48:40.427797 master-0 kubenswrapper[3986]: E0318 08:48:40.427714 3986 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6x85n" podUID="d858bfd6-e69b-4c93-a6d7-95cc0fc3ca29" Mar 18 08:48:42.427400 master-0 kubenswrapper[3986]: I0318 08:48:42.427283 3986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-8b7l7" Mar 18 08:48:42.428676 master-0 kubenswrapper[3986]: E0318 08:48:42.427464 3986 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-8b7l7" podUID="fc289a83-9a2e-404b-b148-605639362703" Mar 18 08:48:42.428676 master-0 kubenswrapper[3986]: I0318 08:48:42.427250 3986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6x85n" Mar 18 08:48:42.428676 master-0 kubenswrapper[3986]: E0318 08:48:42.427649 3986 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6x85n" podUID="d858bfd6-e69b-4c93-a6d7-95cc0fc3ca29" Mar 18 08:48:44.427455 master-0 kubenswrapper[3986]: I0318 08:48:44.427287 3986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6x85n" Mar 18 08:48:44.428399 master-0 kubenswrapper[3986]: I0318 08:48:44.427436 3986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-8b7l7" Mar 18 08:48:44.428399 master-0 kubenswrapper[3986]: E0318 08:48:44.427529 3986 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6x85n" podUID="d858bfd6-e69b-4c93-a6d7-95cc0fc3ca29" Mar 18 08:48:44.428399 master-0 kubenswrapper[3986]: E0318 08:48:44.427626 3986 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-8b7l7" podUID="fc289a83-9a2e-404b-b148-605639362703" Mar 18 08:48:44.449758 master-0 kubenswrapper[3986]: W0318 08:48:44.449636 3986 warnings.go:70] would violate PodSecurity "restricted:latest": host namespaces (hostNetwork=true), hostPort (container "etcd" uses hostPorts 2379, 2380), privileged (containers "etcdctl", "etcd" must not set securityContext.privileged=true), allowPrivilegeEscalation != false (containers "etcdctl", "etcd" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (containers "etcdctl", "etcd" must set securityContext.capabilities.drop=["ALL"]), restricted volume types (volumes "certs", "data-dir" use restricted volume type "hostPath"), runAsNonRoot != true (pod or containers "etcdctl", "etcd" must set securityContext.runAsNonRoot=true), seccompProfile (pod or containers "etcdctl", "etcd" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost") Mar 18 08:48:44.451517 master-0 kubenswrapper[3986]: I0318 08:48:44.451462 
3986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-master-0-master-0"] Mar 18 08:48:46.161545 master-0 kubenswrapper[3986]: I0318 08:48:46.161471 3986 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeReady" Mar 18 08:48:46.206487 master-0 kubenswrapper[3986]: I0318 08:48:46.204552 3986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-9p4bb"] Mar 18 08:48:46.206487 master-0 kubenswrapper[3986]: I0318 08:48:46.206142 3986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-9p4bb" Mar 18 08:48:46.209878 master-0 kubenswrapper[3986]: I0318 08:48:46.208535 3986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-s7szg"] Mar 18 08:48:46.209878 master-0 kubenswrapper[3986]: I0318 08:48:46.209355 3986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-s7szg" Mar 18 08:48:46.210050 master-0 kubenswrapper[3986]: I0318 08:48:46.209953 3986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-5549dc66cb-vxsth"] Mar 18 08:48:46.213892 master-0 kubenswrapper[3986]: I0318 08:48:46.210347 3986 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Mar 18 08:48:46.213892 master-0 kubenswrapper[3986]: I0318 08:48:46.210458 3986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-vxsth" Mar 18 08:48:46.213892 master-0 kubenswrapper[3986]: I0318 08:48:46.212071 3986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Mar 18 08:48:46.213892 master-0 kubenswrapper[3986]: I0318 08:48:46.212423 3986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Mar 18 08:48:46.215131 master-0 kubenswrapper[3986]: I0318 08:48:46.215098 3986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-66b84d69b-7h94d"] Mar 18 08:48:46.215375 master-0 kubenswrapper[3986]: I0318 08:48:46.215352 3986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-fxn82"] Mar 18 08:48:46.215833 master-0 kubenswrapper[3986]: I0318 08:48:46.215792 3986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Mar 18 08:48:46.216270 master-0 kubenswrapper[3986]: I0318 08:48:46.216177 3986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-storage-operator/csi-snapshot-controller-operator-5f5d689c6b-j8kgj"] Mar 18 08:48:46.216498 master-0 kubenswrapper[3986]: I0318 08:48:46.216475 3986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Mar 18 08:48:46.217167 master-0 kubenswrapper[3986]: I0318 08:48:46.216508 3986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-66b84d69b-7h94d" Mar 18 08:48:46.217167 master-0 kubenswrapper[3986]: I0318 08:48:46.216668 3986 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Mar 18 08:48:46.217167 master-0 kubenswrapper[3986]: I0318 08:48:46.216946 3986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Mar 18 08:48:46.231002 master-0 kubenswrapper[3986]: I0318 08:48:46.217497 3986 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Mar 18 08:48:46.231002 master-0 kubenswrapper[3986]: I0318 08:48:46.217692 3986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-5f5d689c6b-j8kgj" Mar 18 08:48:46.231002 master-0 kubenswrapper[3986]: I0318 08:48:46.218491 3986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Mar 18 08:48:46.231002 master-0 kubenswrapper[3986]: I0318 08:48:46.218606 3986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-fxn82" Mar 18 08:48:46.231002 master-0 kubenswrapper[3986]: I0318 08:48:46.218777 3986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-w4t7x"] Mar 18 08:48:46.231002 master-0 kubenswrapper[3986]: I0318 08:48:46.219558 3986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-w4t7x" Mar 18 08:48:46.231002 master-0 kubenswrapper[3986]: I0318 08:48:46.220800 3986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-q8ff6"] Mar 18 08:48:46.231002 master-0 kubenswrapper[3986]: I0318 08:48:46.221541 3986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-q8ff6" Mar 18 08:48:46.231002 master-0 kubenswrapper[3986]: I0318 08:48:46.222379 3986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-zqxv2"] Mar 18 08:48:46.231002 master-0 kubenswrapper[3986]: I0318 08:48:46.223238 3986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-zqxv2" Mar 18 08:48:46.231002 master-0 kubenswrapper[3986]: I0318 08:48:46.225730 3986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Mar 18 08:48:46.231002 master-0 kubenswrapper[3986]: I0318 08:48:46.226637 3986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-swdsh"] Mar 18 08:48:46.231002 master-0 kubenswrapper[3986]: I0318 08:48:46.227172 3986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-9c5679d8f-b9pn7"] Mar 18 08:48:46.231002 master-0 kubenswrapper[3986]: I0318 08:48:46.227823 3986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-9c5679d8f-b9pn7" Mar 18 08:48:46.231002 master-0 kubenswrapper[3986]: I0318 08:48:46.227945 3986 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Mar 18 08:48:46.231002 master-0 kubenswrapper[3986]: I0318 08:48:46.228087 3986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-storage-operator"/"openshift-service-ca.crt" Mar 18 08:48:46.231002 master-0 kubenswrapper[3986]: I0318 08:48:46.230968 3986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-8544cbcf9c-f4jvq"] Mar 18 08:48:46.232705 master-0 kubenswrapper[3986]: I0318 08:48:46.231131 3986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-swdsh" Mar 18 08:48:46.232705 master-0 kubenswrapper[3986]: I0318 08:48:46.231315 3986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-r758j"] Mar 18 08:48:46.232705 master-0 kubenswrapper[3986]: I0318 08:48:46.231564 3986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-tj9b9"] Mar 18 08:48:46.232705 master-0 kubenswrapper[3986]: I0318 08:48:46.231839 3986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-tj9b9" Mar 18 08:48:46.232705 master-0 kubenswrapper[3986]: I0318 08:48:46.232159 3986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-f4jvq" Mar 18 08:48:46.232705 master-0 kubenswrapper[3986]: I0318 08:48:46.232467 3986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-r758j" Mar 18 08:48:46.235549 master-0 kubenswrapper[3986]: I0318 08:48:46.235513 3986 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Mar 18 08:48:46.235712 master-0 kubenswrapper[3986]: I0318 08:48:46.235685 3986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Mar 18 08:48:46.238501 master-0 kubenswrapper[3986]: I0318 08:48:46.235828 3986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Mar 18 08:48:46.238501 master-0 kubenswrapper[3986]: I0318 08:48:46.236050 3986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Mar 18 08:48:46.238501 master-0 kubenswrapper[3986]: I0318 08:48:46.236177 3986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-olm-operator"/"openshift-service-ca.crt" Mar 18 08:48:46.238501 master-0 kubenswrapper[3986]: I0318 08:48:46.236200 3986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-storage-operator"/"kube-root-ca.crt" Mar 18 08:48:46.238501 master-0 kubenswrapper[3986]: I0318 08:48:46.236510 3986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-olm-operator"/"kube-root-ca.crt" Mar 18 08:48:46.238501 master-0 kubenswrapper[3986]: I0318 08:48:46.236577 3986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-95bf4f4d-7kfrh"] Mar 18 08:48:46.238501 master-0 kubenswrapper[3986]: I0318 08:48:46.236685 3986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Mar 18 
08:48:46.238501 master-0 kubenswrapper[3986]: I0318 08:48:46.236830 3986 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Mar 18 08:48:46.238501 master-0 kubenswrapper[3986]: I0318 08:48:46.237032 3986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-7kfrh" Mar 18 08:48:46.238501 master-0 kubenswrapper[3986]: I0318 08:48:46.237479 3986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Mar 18 08:48:46.238501 master-0 kubenswrapper[3986]: I0318 08:48:46.237530 3986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Mar 18 08:48:46.238501 master-0 kubenswrapper[3986]: I0318 08:48:46.237615 3986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Mar 18 08:48:46.238501 master-0 kubenswrapper[3986]: I0318 08:48:46.237664 3986 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Mar 18 08:48:46.238501 master-0 kubenswrapper[3986]: I0318 08:48:46.237741 3986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Mar 18 08:48:46.238501 master-0 kubenswrapper[3986]: I0318 08:48:46.238181 3986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Mar 18 08:48:46.304542 master-0 kubenswrapper[3986]: I0318 08:48:46.304363 3986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fcf89a76-7a94-46d3-853e-68e986563764-config\") pod 
\"openshift-apiserver-operator-d65958b8-w4t7x\" (UID: \"fcf89a76-7a94-46d3-853e-68e986563764\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-w4t7x" Mar 18 08:48:46.304542 master-0 kubenswrapper[3986]: I0318 08:48:46.304412 3986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s8prf\" (UniqueName: \"kubernetes.io/projected/fcf89a76-7a94-46d3-853e-68e986563764-kube-api-access-s8prf\") pod \"openshift-apiserver-operator-d65958b8-w4t7x\" (UID: \"fcf89a76-7a94-46d3-853e-68e986563764\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-w4t7x" Mar 18 08:48:46.304542 master-0 kubenswrapper[3986]: I0318 08:48:46.304437 3986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/260c8aa5-a288-4ee8-b671-f97e90a2f39c-serving-cert\") pod \"kube-controller-manager-operator-ff989d6cc-fxn82\" (UID: \"260c8aa5-a288-4ee8-b671-f97e90a2f39c\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-fxn82" Mar 18 08:48:46.304542 master-0 kubenswrapper[3986]: I0318 08:48:46.304489 3986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ec11012b-536a-422f-afc4-d2d0fd4b67fb-serving-cert\") pod \"kube-storage-version-migrator-operator-6bb5bfb6fd-s7szg\" (UID: \"ec11012b-536a-422f-afc4-d2d0fd4b67fb\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-s7szg" Mar 18 08:48:46.304542 master-0 kubenswrapper[3986]: I0318 08:48:46.304513 3986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bd8ee9ae-4765-46bc-84d7-b5857fc3fb4a-trusted-ca\") pod \"cluster-node-tuning-operator-598fbc5f8f-tj9b9\" (UID: 
\"bd8ee9ae-4765-46bc-84d7-b5857fc3fb4a\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-tj9b9" Mar 18 08:48:46.304542 master-0 kubenswrapper[3986]: I0318 08:48:46.304533 3986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/59d50dd5-6793-4f96-a769-31e086ecc7e4-package-server-manager-serving-cert\") pod \"package-server-manager-7b95f86987-q8ff6\" (UID: \"59d50dd5-6793-4f96-a769-31e086ecc7e4\") " pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-q8ff6" Mar 18 08:48:46.304542 master-0 kubenswrapper[3986]: I0318 08:48:46.304557 3986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/573d3a02-e395-4816-963a-cd614ef53f75-serving-cert\") pod \"openshift-config-operator-95bf4f4d-7kfrh\" (UID: \"573d3a02-e395-4816-963a-cd614ef53f75\") " pod="openshift-config-operator/openshift-config-operator-95bf4f4d-7kfrh" Mar 18 08:48:46.304984 master-0 kubenswrapper[3986]: I0318 08:48:46.304580 3986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/939efa41-8f40-4f91-bee4-0425aead9760-etcd-service-ca\") pod \"etcd-operator-8544cbcf9c-f4jvq\" (UID: \"939efa41-8f40-4f91-bee4-0425aead9760\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-f4jvq" Mar 18 08:48:46.304984 master-0 kubenswrapper[3986]: I0318 08:48:46.304600 3986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fcf89a76-7a94-46d3-853e-68e986563764-serving-cert\") pod \"openshift-apiserver-operator-d65958b8-w4t7x\" (UID: \"fcf89a76-7a94-46d3-853e-68e986563764\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-w4t7x" 
Mar 18 08:48:46.304984 master-0 kubenswrapper[3986]: I0318 08:48:46.304620 3986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8w58l\" (UniqueName: \"kubernetes.io/projected/939efa41-8f40-4f91-bee4-0425aead9760-kube-api-access-8w58l\") pod \"etcd-operator-8544cbcf9c-f4jvq\" (UID: \"939efa41-8f40-4f91-bee4-0425aead9760\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-f4jvq" Mar 18 08:48:46.304984 master-0 kubenswrapper[3986]: I0318 08:48:46.304643 3986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/bd8ee9ae-4765-46bc-84d7-b5857fc3fb4a-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-598fbc5f8f-tj9b9\" (UID: \"bd8ee9ae-4765-46bc-84d7-b5857fc3fb4a\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-tj9b9" Mar 18 08:48:46.304984 master-0 kubenswrapper[3986]: I0318 08:48:46.304675 3986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bfzdk\" (UniqueName: \"kubernetes.io/projected/e025d334-20e7-491f-8027-194251398747-kube-api-access-bfzdk\") pod \"dns-operator-9c5679d8f-b9pn7\" (UID: \"e025d334-20e7-491f-8027-194251398747\") " pod="openshift-dns-operator/dns-operator-9c5679d8f-b9pn7" Mar 18 08:48:46.304984 master-0 kubenswrapper[3986]: I0318 08:48:46.304694 3986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/94e9e4fc-bfaa-4ba5-ad6d-fe76b91932e9-metrics-tls\") pod \"ingress-operator-66b84d69b-7h94d\" (UID: \"94e9e4fc-bfaa-4ba5-ad6d-fe76b91932e9\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-7h94d" Mar 18 08:48:46.304984 master-0 kubenswrapper[3986]: I0318 08:48:46.304712 3986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/260c8aa5-a288-4ee8-b671-f97e90a2f39c-kube-api-access\") pod \"kube-controller-manager-operator-ff989d6cc-fxn82\" (UID: \"260c8aa5-a288-4ee8-b671-f97e90a2f39c\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-fxn82" Mar 18 08:48:46.304984 master-0 kubenswrapper[3986]: I0318 08:48:46.304733 3986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dfjmx\" (UniqueName: \"kubernetes.io/projected/772bc250-2e57-4ce0-883c-d44281fcb0be-kube-api-access-dfjmx\") pod \"openshift-controller-manager-operator-8c94f4649-r758j\" (UID: \"772bc250-2e57-4ce0-883c-d44281fcb0be\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-r758j" Mar 18 08:48:46.304984 master-0 kubenswrapper[3986]: I0318 08:48:46.304763 3986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8lsw9\" (UniqueName: \"kubernetes.io/projected/bd8ee9ae-4765-46bc-84d7-b5857fc3fb4a-kube-api-access-8lsw9\") pod \"cluster-node-tuning-operator-598fbc5f8f-tj9b9\" (UID: \"bd8ee9ae-4765-46bc-84d7-b5857fc3fb4a\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-tj9b9" Mar 18 08:48:46.304984 master-0 kubenswrapper[3986]: I0318 08:48:46.304784 3986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n959l\" (UniqueName: \"kubernetes.io/projected/573d3a02-e395-4816-963a-cd614ef53f75-kube-api-access-n959l\") pod \"openshift-config-operator-95bf4f4d-7kfrh\" (UID: \"573d3a02-e395-4816-963a-cd614ef53f75\") " pod="openshift-config-operator/openshift-config-operator-95bf4f4d-7kfrh" Mar 18 08:48:46.304984 master-0 kubenswrapper[3986]: I0318 08:48:46.304807 3986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"operand-assets\" (UniqueName: \"kubernetes.io/empty-dir/e2ade7e6-cecd-4e98-8f85-ea8219303d75-operand-assets\") pod \"cluster-olm-operator-67dcd4998-zqxv2\" (UID: \"e2ade7e6-cecd-4e98-8f85-ea8219303d75\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-zqxv2" Mar 18 08:48:46.304984 master-0 kubenswrapper[3986]: I0318 08:48:46.304829 3986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/260c8aa5-a288-4ee8-b671-f97e90a2f39c-config\") pod \"kube-controller-manager-operator-ff989d6cc-fxn82\" (UID: \"260c8aa5-a288-4ee8-b671-f97e90a2f39c\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-fxn82" Mar 18 08:48:46.304984 master-0 kubenswrapper[3986]: I0318 08:48:46.304865 3986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-svdhs\" (UniqueName: \"kubernetes.io/projected/ec11012b-536a-422f-afc4-d2d0fd4b67fb-kube-api-access-svdhs\") pod \"kube-storage-version-migrator-operator-6bb5bfb6fd-s7szg\" (UID: \"ec11012b-536a-422f-afc4-d2d0fd4b67fb\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-s7szg" Mar 18 08:48:46.304984 master-0 kubenswrapper[3986]: I0318 08:48:46.304894 3986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/939efa41-8f40-4f91-bee4-0425aead9760-etcd-ca\") pod \"etcd-operator-8544cbcf9c-f4jvq\" (UID: \"939efa41-8f40-4f91-bee4-0425aead9760\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-f4jvq" Mar 18 08:48:46.305501 master-0 kubenswrapper[3986]: I0318 08:48:46.304916 3986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7962fb40-1170-4c00-b1bf-92966aeae807-bound-sa-token\") 
pod \"cluster-image-registry-operator-5549dc66cb-vxsth\" (UID: \"7962fb40-1170-4c00-b1bf-92966aeae807\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-vxsth" Mar 18 08:48:46.305501 master-0 kubenswrapper[3986]: I0318 08:48:46.304954 3986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mlp7w\" (UniqueName: \"kubernetes.io/projected/59d50dd5-6793-4f96-a769-31e086ecc7e4-kube-api-access-mlp7w\") pod \"package-server-manager-7b95f86987-q8ff6\" (UID: \"59d50dd5-6793-4f96-a769-31e086ecc7e4\") " pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-q8ff6" Mar 18 08:48:46.305501 master-0 kubenswrapper[3986]: I0318 08:48:46.304970 3986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/772bc250-2e57-4ce0-883c-d44281fcb0be-serving-cert\") pod \"openshift-controller-manager-operator-8c94f4649-r758j\" (UID: \"772bc250-2e57-4ce0-883c-d44281fcb0be\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-r758j" Mar 18 08:48:46.305501 master-0 kubenswrapper[3986]: I0318 08:48:46.304989 3986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/e025d334-20e7-491f-8027-194251398747-metrics-tls\") pod \"dns-operator-9c5679d8f-b9pn7\" (UID: \"e025d334-20e7-491f-8027-194251398747\") " pod="openshift-dns-operator/dns-operator-9c5679d8f-b9pn7" Mar 18 08:48:46.305501 master-0 kubenswrapper[3986]: I0318 08:48:46.305015 3986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/573d3a02-e395-4816-963a-cd614ef53f75-available-featuregates\") pod \"openshift-config-operator-95bf4f4d-7kfrh\" (UID: \"573d3a02-e395-4816-963a-cd614ef53f75\") " 
pod="openshift-config-operator/openshift-config-operator-95bf4f4d-7kfrh" Mar 18 08:48:46.305501 master-0 kubenswrapper[3986]: I0318 08:48:46.305047 3986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/7962fb40-1170-4c00-b1bf-92966aeae807-image-registry-operator-tls\") pod \"cluster-image-registry-operator-5549dc66cb-vxsth\" (UID: \"7962fb40-1170-4c00-b1bf-92966aeae807\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-vxsth" Mar 18 08:48:46.305501 master-0 kubenswrapper[3986]: I0318 08:48:46.305071 3986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/939efa41-8f40-4f91-bee4-0425aead9760-etcd-client\") pod \"etcd-operator-8544cbcf9c-f4jvq\" (UID: \"939efa41-8f40-4f91-bee4-0425aead9760\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-f4jvq" Mar 18 08:48:46.305501 master-0 kubenswrapper[3986]: I0318 08:48:46.305098 3986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b065df33-7911-456e-b3a2-1f8c8d53e053-srv-cert\") pod \"catalog-operator-68f85b4d6c-swdsh\" (UID: \"b065df33-7911-456e-b3a2-1f8c8d53e053\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-swdsh" Mar 18 08:48:46.305501 master-0 kubenswrapper[3986]: I0318 08:48:46.305126 3986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8a6ab2be-d018-4fd5-bfbb-6b88aec28663-serving-cert\") pod \"openshift-kube-scheduler-operator-dddff6458-9p4bb\" (UID: \"8a6ab2be-d018-4fd5-bfbb-6b88aec28663\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-9p4bb" Mar 18 08:48:46.305501 master-0 kubenswrapper[3986]: I0318 08:48:46.305150 
3986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec11012b-536a-422f-afc4-d2d0fd4b67fb-config\") pod \"kube-storage-version-migrator-operator-6bb5bfb6fd-s7szg\" (UID: \"ec11012b-536a-422f-afc4-d2d0fd4b67fb\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-s7szg" Mar 18 08:48:46.305501 master-0 kubenswrapper[3986]: I0318 08:48:46.305180 3986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/bd8ee9ae-4765-46bc-84d7-b5857fc3fb4a-apiservice-cert\") pod \"cluster-node-tuning-operator-598fbc5f8f-tj9b9\" (UID: \"bd8ee9ae-4765-46bc-84d7-b5857fc3fb4a\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-tj9b9" Mar 18 08:48:46.305501 master-0 kubenswrapper[3986]: I0318 08:48:46.305205 3986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/939efa41-8f40-4f91-bee4-0425aead9760-serving-cert\") pod \"etcd-operator-8544cbcf9c-f4jvq\" (UID: \"939efa41-8f40-4f91-bee4-0425aead9760\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-f4jvq" Mar 18 08:48:46.305501 master-0 kubenswrapper[3986]: I0318 08:48:46.305229 3986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-olm-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/e2ade7e6-cecd-4e98-8f85-ea8219303d75-cluster-olm-operator-serving-cert\") pod \"cluster-olm-operator-67dcd4998-zqxv2\" (UID: \"e2ade7e6-cecd-4e98-8f85-ea8219303d75\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-zqxv2" Mar 18 08:48:46.305501 master-0 kubenswrapper[3986]: I0318 08:48:46.305254 3986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/94e9e4fc-bfaa-4ba5-ad6d-fe76b91932e9-bound-sa-token\") pod \"ingress-operator-66b84d69b-7h94d\" (UID: \"94e9e4fc-bfaa-4ba5-ad6d-fe76b91932e9\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-7h94d" Mar 18 08:48:46.306112 master-0 kubenswrapper[3986]: I0318 08:48:46.305279 3986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wxxcn\" (UniqueName: \"kubernetes.io/projected/6fb1f871-9c24-48a1-a15a-a636b5bb687d-kube-api-access-wxxcn\") pod \"csi-snapshot-controller-operator-5f5d689c6b-j8kgj\" (UID: \"6fb1f871-9c24-48a1-a15a-a636b5bb687d\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-5f5d689c6b-j8kgj" Mar 18 08:48:46.306112 master-0 kubenswrapper[3986]: I0318 08:48:46.305305 3986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8a6ab2be-d018-4fd5-bfbb-6b88aec28663-config\") pod \"openshift-kube-scheduler-operator-dddff6458-9p4bb\" (UID: \"8a6ab2be-d018-4fd5-bfbb-6b88aec28663\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-9p4bb" Mar 18 08:48:46.306112 master-0 kubenswrapper[3986]: I0318 08:48:46.305331 3986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pz26d\" (UniqueName: \"kubernetes.io/projected/b065df33-7911-456e-b3a2-1f8c8d53e053-kube-api-access-pz26d\") pod \"catalog-operator-68f85b4d6c-swdsh\" (UID: \"b065df33-7911-456e-b3a2-1f8c8d53e053\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-swdsh" Mar 18 08:48:46.306112 master-0 kubenswrapper[3986]: I0318 08:48:46.305348 3986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/8a6ab2be-d018-4fd5-bfbb-6b88aec28663-kube-api-access\") pod \"openshift-kube-scheduler-operator-dddff6458-9p4bb\" (UID: \"8a6ab2be-d018-4fd5-bfbb-6b88aec28663\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-9p4bb" Mar 18 08:48:46.306112 master-0 kubenswrapper[3986]: I0318 08:48:46.305365 3986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/772bc250-2e57-4ce0-883c-d44281fcb0be-config\") pod \"openshift-controller-manager-operator-8c94f4649-r758j\" (UID: \"772bc250-2e57-4ce0-883c-d44281fcb0be\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-r758j" Mar 18 08:48:46.306112 master-0 kubenswrapper[3986]: I0318 08:48:46.305387 3986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-47p9x\" (UniqueName: \"kubernetes.io/projected/7962fb40-1170-4c00-b1bf-92966aeae807-kube-api-access-47p9x\") pod \"cluster-image-registry-operator-5549dc66cb-vxsth\" (UID: \"7962fb40-1170-4c00-b1bf-92966aeae807\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-vxsth" Mar 18 08:48:46.306112 master-0 kubenswrapper[3986]: I0318 08:48:46.305404 3986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/94e9e4fc-bfaa-4ba5-ad6d-fe76b91932e9-trusted-ca\") pod \"ingress-operator-66b84d69b-7h94d\" (UID: \"94e9e4fc-bfaa-4ba5-ad6d-fe76b91932e9\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-7h94d" Mar 18 08:48:46.306112 master-0 kubenswrapper[3986]: I0318 08:48:46.305419 3986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/939efa41-8f40-4f91-bee4-0425aead9760-config\") pod \"etcd-operator-8544cbcf9c-f4jvq\" 
(UID: \"939efa41-8f40-4f91-bee4-0425aead9760\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-f4jvq" Mar 18 08:48:46.306112 master-0 kubenswrapper[3986]: I0318 08:48:46.305438 3986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vfjgn\" (UniqueName: \"kubernetes.io/projected/e2ade7e6-cecd-4e98-8f85-ea8219303d75-kube-api-access-vfjgn\") pod \"cluster-olm-operator-67dcd4998-zqxv2\" (UID: \"e2ade7e6-cecd-4e98-8f85-ea8219303d75\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-zqxv2" Mar 18 08:48:46.306112 master-0 kubenswrapper[3986]: I0318 08:48:46.305459 3986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7962fb40-1170-4c00-b1bf-92966aeae807-trusted-ca\") pod \"cluster-image-registry-operator-5549dc66cb-vxsth\" (UID: \"7962fb40-1170-4c00-b1bf-92966aeae807\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-vxsth" Mar 18 08:48:46.306112 master-0 kubenswrapper[3986]: I0318 08:48:46.305479 3986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tk9jq\" (UniqueName: \"kubernetes.io/projected/94e9e4fc-bfaa-4ba5-ad6d-fe76b91932e9-kube-api-access-tk9jq\") pod \"ingress-operator-66b84d69b-7h94d\" (UID: \"94e9e4fc-bfaa-4ba5-ad6d-fe76b91932e9\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-7h94d" Mar 18 08:48:46.316675 master-0 kubenswrapper[3986]: I0318 08:48:46.314586 3986 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Mar 18 08:48:46.316675 master-0 kubenswrapper[3986]: I0318 08:48:46.314763 3986 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Mar 18 08:48:46.316675 master-0 kubenswrapper[3986]: I0318 08:48:46.314892 3986 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Mar 18 08:48:46.316675 master-0 kubenswrapper[3986]: I0318 08:48:46.314584 3986 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Mar 18 08:48:46.317638 master-0 kubenswrapper[3986]: I0318 08:48:46.317609 3986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-89ccd998f-bcwsv"] Mar 18 08:48:46.317974 master-0 kubenswrapper[3986]: I0318 08:48:46.317940 3986 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Mar 18 08:48:46.318274 master-0 kubenswrapper[3986]: I0318 08:48:46.318245 3986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-b865698dc-g2lc8"] Mar 18 08:48:46.318332 master-0 kubenswrapper[3986]: I0318 08:48:46.318286 3986 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Mar 18 08:48:46.318585 master-0 kubenswrapper[3986]: I0318 08:48:46.318564 3986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Mar 18 08:48:46.318786 master-0 kubenswrapper[3986]: I0318 08:48:46.318764 3986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-b865698dc-g2lc8" Mar 18 08:48:46.321041 master-0 kubenswrapper[3986]: I0318 08:48:46.318883 3986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Mar 18 08:48:46.321041 master-0 kubenswrapper[3986]: I0318 08:48:46.319010 3986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Mar 18 08:48:46.321041 master-0 kubenswrapper[3986]: I0318 08:48:46.319059 3986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Mar 18 08:48:46.321041 master-0 kubenswrapper[3986]: I0318 08:48:46.319113 3986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-89ccd998f-bcwsv" Mar 18 08:48:46.321041 master-0 kubenswrapper[3986]: I0318 08:48:46.318884 3986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-5c9796789-sl5kr"] Mar 18 08:48:46.321041 master-0 kubenswrapper[3986]: I0318 08:48:46.319438 3986 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-node-tuning-operator"/"node-tuning-operator-tls" Mar 18 08:48:46.321041 master-0 kubenswrapper[3986]: I0318 08:48:46.319501 3986 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-node-tuning-operator"/"performance-addon-operator-webhook-cert" Mar 18 08:48:46.321041 master-0 kubenswrapper[3986]: I0318 08:48:46.319552 3986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Mar 18 08:48:46.321041 master-0 kubenswrapper[3986]: I0318 08:48:46.319555 3986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-sl5kr" Mar 18 08:48:46.321041 master-0 kubenswrapper[3986]: I0318 08:48:46.319634 3986 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-olm-operator"/"cluster-olm-operator-serving-cert" Mar 18 08:48:46.321041 master-0 kubenswrapper[3986]: I0318 08:48:46.319637 3986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"openshift-service-ca.crt" Mar 18 08:48:46.321041 master-0 kubenswrapper[3986]: I0318 08:48:46.319674 3986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Mar 18 08:48:46.321041 master-0 kubenswrapper[3986]: I0318 08:48:46.319721 3986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Mar 18 08:48:46.321041 master-0 kubenswrapper[3986]: I0318 08:48:46.319780 3986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Mar 18 08:48:46.321041 master-0 kubenswrapper[3986]: I0318 08:48:46.319869 3986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Mar 18 08:48:46.321041 master-0 kubenswrapper[3986]: I0318 08:48:46.319991 3986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"kube-root-ca.crt" Mar 18 08:48:46.321041 master-0 kubenswrapper[3986]: I0318 08:48:46.320000 3986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Mar 18 08:48:46.321041 master-0 kubenswrapper[3986]: I0318 08:48:46.320060 3986 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Mar 18 08:48:46.321041 master-0 kubenswrapper[3986]: I0318 08:48:46.320126 
3986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Mar 18 08:48:46.330539 master-0 kubenswrapper[3986]: I0318 08:48:46.330507 3986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Mar 18 08:48:46.330880 master-0 kubenswrapper[3986]: I0318 08:48:46.330847 3986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Mar 18 08:48:46.330995 master-0 kubenswrapper[3986]: I0318 08:48:46.330974 3986 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Mar 18 08:48:46.331084 master-0 kubenswrapper[3986]: I0318 08:48:46.331068 3986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Mar 18 08:48:46.331265 master-0 kubenswrapper[3986]: I0318 08:48:46.331249 3986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Mar 18 08:48:46.331558 master-0 kubenswrapper[3986]: I0318 08:48:46.331540 3986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Mar 18 08:48:46.331924 master-0 kubenswrapper[3986]: I0318 08:48:46.331908 3986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Mar 18 08:48:46.332137 master-0 kubenswrapper[3986]: I0318 08:48:46.332113 3986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Mar 18 08:48:46.333148 master-0 kubenswrapper[3986]: I0318 08:48:46.333125 3986 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Mar 18 08:48:46.333718 master-0 kubenswrapper[3986]: I0318 08:48:46.333670 
3986 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-master-0-master-0" podStartSLOduration=2.333654926 podStartE2EDuration="2.333654926s" podCreationTimestamp="2026-03-18 08:48:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 08:48:46.329962436 +0000 UTC m=+157.737132528" watchObservedRunningTime="2026-03-18 08:48:46.333654926 +0000 UTC m=+157.740825028" Mar 18 08:48:46.336198 master-0 kubenswrapper[3986]: I0318 08:48:46.335394 3986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-5dbbb8b86f-2cf64"] Mar 18 08:48:46.336198 master-0 kubenswrapper[3986]: I0318 08:48:46.335741 3986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-5885bfd7f4-5g8tz"] Mar 18 08:48:46.336198 master-0 kubenswrapper[3986]: I0318 08:48:46.335969 3986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-jshg7"] Mar 18 08:48:46.336198 master-0 kubenswrapper[3986]: I0318 08:48:46.335994 3986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-5g8tz" Mar 18 08:48:46.336430 master-0 kubenswrapper[3986]: I0318 08:48:46.336237 3986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-jshg7" Mar 18 08:48:46.336430 master-0 kubenswrapper[3986]: I0318 08:48:46.336252 3986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-5dbbb8b86f-2cf64" Mar 18 08:48:46.339598 master-0 kubenswrapper[3986]: I0318 08:48:46.336830 3986 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Mar 18 08:48:46.339598 master-0 kubenswrapper[3986]: I0318 08:48:46.338617 3986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/cluster-monitoring-operator-58845fbb57-nc7hf"] Mar 18 08:48:46.339598 master-0 kubenswrapper[3986]: I0318 08:48:46.338879 3986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-s7szg"] Mar 18 08:48:46.339598 master-0 kubenswrapper[3986]: I0318 08:48:46.338892 3986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-9p4bb"] Mar 18 08:48:46.339598 master-0 kubenswrapper[3986]: I0318 08:48:46.338934 3986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-nc7hf" Mar 18 08:48:46.340121 master-0 kubenswrapper[3986]: I0318 08:48:46.340101 3986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"trusted-ca" Mar 18 08:48:46.341828 master-0 kubenswrapper[3986]: I0318 08:48:46.340460 3986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Mar 18 08:48:46.341828 master-0 kubenswrapper[3986]: I0318 08:48:46.340592 3986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-fxn82"] Mar 18 08:48:46.341828 master-0 kubenswrapper[3986]: I0318 08:48:46.340679 3986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-w4t7x"] Mar 18 08:48:46.341828 master-0 kubenswrapper[3986]: I0318 08:48:46.340808 3986 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Mar 18 08:48:46.341828 master-0 kubenswrapper[3986]: I0318 08:48:46.340940 3986 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Mar 18 08:48:46.341828 master-0 kubenswrapper[3986]: I0318 08:48:46.341307 3986 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Mar 18 08:48:46.341828 master-0 kubenswrapper[3986]: I0318 08:48:46.341489 3986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Mar 18 08:48:46.341828 master-0 kubenswrapper[3986]: I0318 08:48:46.341646 3986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Mar 18 08:48:46.341828 master-0 
kubenswrapper[3986]: I0318 08:48:46.341719 3986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-storage-operator/csi-snapshot-controller-operator-5f5d689c6b-j8kgj"] Mar 18 08:48:46.342211 master-0 kubenswrapper[3986]: I0318 08:48:46.342161 3986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-66b84d69b-7h94d"] Mar 18 08:48:46.342700 master-0 kubenswrapper[3986]: I0318 08:48:46.342683 3986 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"cluster-monitoring-operator-tls" Mar 18 08:48:46.342866 master-0 kubenswrapper[3986]: I0318 08:48:46.342822 3986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Mar 18 08:48:46.343107 master-0 kubenswrapper[3986]: I0318 08:48:46.343090 3986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Mar 18 08:48:46.343359 master-0 kubenswrapper[3986]: I0318 08:48:46.343332 3986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"openshift-service-ca.crt" Mar 18 08:48:46.343472 master-0 kubenswrapper[3986]: I0318 08:48:46.343459 3986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Mar 18 08:48:46.343658 master-0 kubenswrapper[3986]: I0318 08:48:46.342751 3986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-q8ff6"] Mar 18 08:48:46.343733 master-0 kubenswrapper[3986]: I0318 08:48:46.343722 3986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-5549dc66cb-vxsth"] Mar 18 08:48:46.344078 master-0 kubenswrapper[3986]: I0318 08:48:46.344054 3986 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-operator-lifecycle-manager/olm-operator-5c9796789-sl5kr"] Mar 18 08:48:46.344222 master-0 kubenswrapper[3986]: I0318 08:48:46.344178 3986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemetry-config" Mar 18 08:48:46.344414 master-0 kubenswrapper[3986]: I0318 08:48:46.344388 3986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-root-ca.crt" Mar 18 08:48:46.344517 master-0 kubenswrapper[3986]: I0318 08:48:46.344199 3986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Mar 18 08:48:46.347120 master-0 kubenswrapper[3986]: I0318 08:48:46.347094 3986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-95bf4f4d-7kfrh"] Mar 18 08:48:46.347407 master-0 kubenswrapper[3986]: I0318 08:48:46.347382 3986 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-operator/iptables-alerter-9mkgd"] Mar 18 08:48:46.349153 master-0 kubenswrapper[3986]: I0318 08:48:46.349127 3986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-9mkgd" Mar 18 08:48:46.351118 master-0 kubenswrapper[3986]: I0318 08:48:46.351092 3986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-8544cbcf9c-f4jvq"] Mar 18 08:48:46.354992 master-0 kubenswrapper[3986]: I0318 08:48:46.354958 3986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-jshg7"] Mar 18 08:48:46.358903 master-0 kubenswrapper[3986]: I0318 08:48:46.358872 3986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Mar 18 08:48:46.359831 master-0 kubenswrapper[3986]: I0318 08:48:46.359797 3986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-5dbbb8b86f-2cf64"] Mar 18 08:48:46.360752 master-0 kubenswrapper[3986]: I0318 08:48:46.360726 3986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Mar 18 08:48:46.362941 master-0 kubenswrapper[3986]: I0318 08:48:46.362912 3986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-zqxv2"] Mar 18 08:48:46.364846 master-0 kubenswrapper[3986]: I0318 08:48:46.364800 3986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-swdsh"] Mar 18 08:48:46.368108 master-0 kubenswrapper[3986]: I0318 08:48:46.368072 3986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-tj9b9"] Mar 18 08:48:46.372349 master-0 kubenswrapper[3986]: I0318 08:48:46.372307 3986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-89ccd998f-bcwsv"] Mar 18 08:48:46.375324 master-0 kubenswrapper[3986]: I0318 08:48:46.375290 3986 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-b865698dc-g2lc8"] Mar 18 08:48:46.376790 master-0 kubenswrapper[3986]: I0318 08:48:46.376752 3986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-5885bfd7f4-5g8tz"] Mar 18 08:48:46.377763 master-0 kubenswrapper[3986]: I0318 08:48:46.377749 3986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/cluster-monitoring-operator-58845fbb57-nc7hf"] Mar 18 08:48:46.378666 master-0 kubenswrapper[3986]: I0318 08:48:46.378643 3986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-r758j"] Mar 18 08:48:46.379439 master-0 kubenswrapper[3986]: I0318 08:48:46.379423 3986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-9c5679d8f-b9pn7"] Mar 18 08:48:46.405754 master-0 kubenswrapper[3986]: I0318 08:48:46.405715 3986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/939efa41-8f40-4f91-bee4-0425aead9760-config\") pod \"etcd-operator-8544cbcf9c-f4jvq\" (UID: \"939efa41-8f40-4f91-bee4-0425aead9760\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-f4jvq" Mar 18 08:48:46.405754 master-0 kubenswrapper[3986]: I0318 08:48:46.405751 3986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vfjgn\" (UniqueName: \"kubernetes.io/projected/e2ade7e6-cecd-4e98-8f85-ea8219303d75-kube-api-access-vfjgn\") pod \"cluster-olm-operator-67dcd4998-zqxv2\" (UID: \"e2ade7e6-cecd-4e98-8f85-ea8219303d75\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-zqxv2" Mar 18 08:48:46.405942 master-0 kubenswrapper[3986]: I0318 08:48:46.405904 3986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-dxvk7\" (UniqueName: \"kubernetes.io/projected/b0280499-8277-46f0-bd8c-058a47a99e19-kube-api-access-dxvk7\") pod \"service-ca-operator-b865698dc-g2lc8\" (UID: \"b0280499-8277-46f0-bd8c-058a47a99e19\") " pod="openshift-service-ca-operator/service-ca-operator-b865698dc-g2lc8" Mar 18 08:48:46.405986 master-0 kubenswrapper[3986]: I0318 08:48:46.405943 3986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lw27k\" (UniqueName: \"kubernetes.io/projected/c110b293-2c6b-496b-b015-23aada98cb4b-kube-api-access-lw27k\") pod \"authentication-operator-5885bfd7f4-5g8tz\" (UID: \"c110b293-2c6b-496b-b015-23aada98cb4b\") " pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-5g8tz" Mar 18 08:48:46.406018 master-0 kubenswrapper[3986]: I0318 08:48:46.405994 3986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/3d9fe248-ba87-47e3-911a-1b2b112b5683-srv-cert\") pod \"olm-operator-5c9796789-sl5kr\" (UID: \"3d9fe248-ba87-47e3-911a-1b2b112b5683\") " pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-sl5kr" Mar 18 08:48:46.406050 master-0 kubenswrapper[3986]: I0318 08:48:46.406020 3986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c110b293-2c6b-496b-b015-23aada98cb4b-service-ca-bundle\") pod \"authentication-operator-5885bfd7f4-5g8tz\" (UID: \"c110b293-2c6b-496b-b015-23aada98cb4b\") " pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-5g8tz" Mar 18 08:48:46.406094 master-0 kubenswrapper[3986]: I0318 08:48:46.406050 3986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7962fb40-1170-4c00-b1bf-92966aeae807-trusted-ca\") pod \"cluster-image-registry-operator-5549dc66cb-vxsth\" 
(UID: \"7962fb40-1170-4c00-b1bf-92966aeae807\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-vxsth" Mar 18 08:48:46.406094 master-0 kubenswrapper[3986]: I0318 08:48:46.406081 3986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tk9jq\" (UniqueName: \"kubernetes.io/projected/94e9e4fc-bfaa-4ba5-ad6d-fe76b91932e9-kube-api-access-tk9jq\") pod \"ingress-operator-66b84d69b-7h94d\" (UID: \"94e9e4fc-bfaa-4ba5-ad6d-fe76b91932e9\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-7h94d" Mar 18 08:48:46.406297 master-0 kubenswrapper[3986]: I0318 08:48:46.406269 3986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c110b293-2c6b-496b-b015-23aada98cb4b-config\") pod \"authentication-operator-5885bfd7f4-5g8tz\" (UID: \"c110b293-2c6b-496b-b015-23aada98cb4b\") " pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-5g8tz" Mar 18 08:48:46.406406 master-0 kubenswrapper[3986]: I0318 08:48:46.406341 3986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c110b293-2c6b-496b-b015-23aada98cb4b-serving-cert\") pod \"authentication-operator-5885bfd7f4-5g8tz\" (UID: \"c110b293-2c6b-496b-b015-23aada98cb4b\") " pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-5g8tz" Mar 18 08:48:46.406458 master-0 kubenswrapper[3986]: I0318 08:48:46.406406 3986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5982111d-f4c6-4335-9b40-3142758fc2bc-serving-cert\") pod \"kube-apiserver-operator-8b68b9d9b-jshg7\" (UID: \"5982111d-f4c6-4335-9b40-3142758fc2bc\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-jshg7" Mar 18 08:48:46.406458 master-0 kubenswrapper[3986]: 
I0318 08:48:46.406441 3986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/34f2ec5e-cb68-415b-a9f2-5b7f10fa9bbe-marketplace-operator-metrics\") pod \"marketplace-operator-89ccd998f-bcwsv\" (UID: \"34f2ec5e-cb68-415b-a9f2-5b7f10fa9bbe\") " pod="openshift-marketplace/marketplace-operator-89ccd998f-bcwsv" Mar 18 08:48:46.406544 master-0 kubenswrapper[3986]: I0318 08:48:46.406480 3986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fcf89a76-7a94-46d3-853e-68e986563764-config\") pod \"openshift-apiserver-operator-d65958b8-w4t7x\" (UID: \"fcf89a76-7a94-46d3-853e-68e986563764\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-w4t7x" Mar 18 08:48:46.406544 master-0 kubenswrapper[3986]: I0318 08:48:46.406502 3986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s8prf\" (UniqueName: \"kubernetes.io/projected/fcf89a76-7a94-46d3-853e-68e986563764-kube-api-access-s8prf\") pod \"openshift-apiserver-operator-d65958b8-w4t7x\" (UID: \"fcf89a76-7a94-46d3-853e-68e986563764\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-w4t7x" Mar 18 08:48:46.406544 master-0 kubenswrapper[3986]: I0318 08:48:46.406522 3986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/260c8aa5-a288-4ee8-b671-f97e90a2f39c-serving-cert\") pod \"kube-controller-manager-operator-ff989d6cc-fxn82\" (UID: \"260c8aa5-a288-4ee8-b671-f97e90a2f39c\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-fxn82" Mar 18 08:48:46.406663 master-0 kubenswrapper[3986]: I0318 08:48:46.406568 3986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/ec11012b-536a-422f-afc4-d2d0fd4b67fb-serving-cert\") pod \"kube-storage-version-migrator-operator-6bb5bfb6fd-s7szg\" (UID: \"ec11012b-536a-422f-afc4-d2d0fd4b67fb\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-s7szg" Mar 18 08:48:46.406663 master-0 kubenswrapper[3986]: I0318 08:48:46.406590 3986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bd8ee9ae-4765-46bc-84d7-b5857fc3fb4a-trusted-ca\") pod \"cluster-node-tuning-operator-598fbc5f8f-tj9b9\" (UID: \"bd8ee9ae-4765-46bc-84d7-b5857fc3fb4a\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-tj9b9" Mar 18 08:48:46.406663 master-0 kubenswrapper[3986]: I0318 08:48:46.406607 3986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b0280499-8277-46f0-bd8c-058a47a99e19-serving-cert\") pod \"service-ca-operator-b865698dc-g2lc8\" (UID: \"b0280499-8277-46f0-bd8c-058a47a99e19\") " pod="openshift-service-ca-operator/service-ca-operator-b865698dc-g2lc8" Mar 18 08:48:46.406663 master-0 kubenswrapper[3986]: I0318 08:48:46.406625 3986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9hxtz\" (UniqueName: \"kubernetes.io/projected/159a26f5-3cfc-4db2-88e9-bff5d8a613fc-kube-api-access-9hxtz\") pod \"multus-admission-controller-5dbbb8b86f-2cf64\" (UID: \"159a26f5-3cfc-4db2-88e9-bff5d8a613fc\") " pod="openshift-multus/multus-admission-controller-5dbbb8b86f-2cf64" Mar 18 08:48:46.407132 master-0 kubenswrapper[3986]: I0318 08:48:46.407104 3986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/59d50dd5-6793-4f96-a769-31e086ecc7e4-package-server-manager-serving-cert\") pod 
\"package-server-manager-7b95f86987-q8ff6\" (UID: \"59d50dd5-6793-4f96-a769-31e086ecc7e4\") " pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-q8ff6" Mar 18 08:48:46.407273 master-0 kubenswrapper[3986]: I0318 08:48:46.407251 3986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/e7b72267-fc08-41ed-a92b-9fca7372aba6-telemetry-config\") pod \"cluster-monitoring-operator-58845fbb57-nc7hf\" (UID: \"e7b72267-fc08-41ed-a92b-9fca7372aba6\") " pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-nc7hf" Mar 18 08:48:46.407431 master-0 kubenswrapper[3986]: I0318 08:48:46.407401 3986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/939efa41-8f40-4f91-bee4-0425aead9760-etcd-service-ca\") pod \"etcd-operator-8544cbcf9c-f4jvq\" (UID: \"939efa41-8f40-4f91-bee4-0425aead9760\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-f4jvq" Mar 18 08:48:46.407547 master-0 kubenswrapper[3986]: I0318 08:48:46.407528 3986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/573d3a02-e395-4816-963a-cd614ef53f75-serving-cert\") pod \"openshift-config-operator-95bf4f4d-7kfrh\" (UID: \"573d3a02-e395-4816-963a-cd614ef53f75\") " pod="openshift-config-operator/openshift-config-operator-95bf4f4d-7kfrh" Mar 18 08:48:46.407632 master-0 kubenswrapper[3986]: I0318 08:48:46.407398 3986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7962fb40-1170-4c00-b1bf-92966aeae807-trusted-ca\") pod \"cluster-image-registry-operator-5549dc66cb-vxsth\" (UID: \"7962fb40-1170-4c00-b1bf-92966aeae807\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-vxsth" Mar 18 08:48:46.407691 master-0 kubenswrapper[3986]: I0318 
08:48:46.407671 3986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fcf89a76-7a94-46d3-853e-68e986563764-config\") pod \"openshift-apiserver-operator-d65958b8-w4t7x\" (UID: \"fcf89a76-7a94-46d3-853e-68e986563764\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-w4t7x" Mar 18 08:48:46.407788 master-0 kubenswrapper[3986]: I0318 08:48:46.407769 3986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fcf89a76-7a94-46d3-853e-68e986563764-serving-cert\") pod \"openshift-apiserver-operator-d65958b8-w4t7x\" (UID: \"fcf89a76-7a94-46d3-853e-68e986563764\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-w4t7x" Mar 18 08:48:46.407892 master-0 kubenswrapper[3986]: E0318 08:48:46.407202 3986 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found Mar 18 08:48:46.408036 master-0 kubenswrapper[3986]: E0318 08:48:46.408009 3986 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/59d50dd5-6793-4f96-a769-31e086ecc7e4-package-server-manager-serving-cert podName:59d50dd5-6793-4f96-a769-31e086ecc7e4 nodeName:}" failed. No retries permitted until 2026-03-18 08:48:46.907987382 +0000 UTC m=+158.315157554 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/59d50dd5-6793-4f96-a769-31e086ecc7e4-package-server-manager-serving-cert") pod "package-server-manager-7b95f86987-q8ff6" (UID: "59d50dd5-6793-4f96-a769-31e086ecc7e4") : secret "package-server-manager-serving-cert" not found Mar 18 08:48:46.408692 master-0 kubenswrapper[3986]: I0318 08:48:46.408561 3986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8w58l\" (UniqueName: \"kubernetes.io/projected/939efa41-8f40-4f91-bee4-0425aead9760-kube-api-access-8w58l\") pod \"etcd-operator-8544cbcf9c-f4jvq\" (UID: \"939efa41-8f40-4f91-bee4-0425aead9760\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-f4jvq" Mar 18 08:48:46.408692 master-0 kubenswrapper[3986]: I0318 08:48:46.408618 3986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/bd8ee9ae-4765-46bc-84d7-b5857fc3fb4a-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-598fbc5f8f-tj9b9\" (UID: \"bd8ee9ae-4765-46bc-84d7-b5857fc3fb4a\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-tj9b9" Mar 18 08:48:46.408810 master-0 kubenswrapper[3986]: E0318 08:48:46.408729 3986 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found Mar 18 08:48:46.408810 master-0 kubenswrapper[3986]: E0318 08:48:46.408777 3986 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bd8ee9ae-4765-46bc-84d7-b5857fc3fb4a-node-tuning-operator-tls podName:bd8ee9ae-4765-46bc-84d7-b5857fc3fb4a nodeName:}" failed. No retries permitted until 2026-03-18 08:48:46.908762905 +0000 UTC m=+158.315932987 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/bd8ee9ae-4765-46bc-84d7-b5857fc3fb4a-node-tuning-operator-tls") pod "cluster-node-tuning-operator-598fbc5f8f-tj9b9" (UID: "bd8ee9ae-4765-46bc-84d7-b5857fc3fb4a") : secret "node-tuning-operator-tls" not found Mar 18 08:48:46.408917 master-0 kubenswrapper[3986]: I0318 08:48:46.408888 3986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5982111d-f4c6-4335-9b40-3142758fc2bc-kube-api-access\") pod \"kube-apiserver-operator-8b68b9d9b-jshg7\" (UID: \"5982111d-f4c6-4335-9b40-3142758fc2bc\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-jshg7" Mar 18 08:48:46.408959 master-0 kubenswrapper[3986]: I0318 08:48:46.408925 3986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bfzdk\" (UniqueName: \"kubernetes.io/projected/e025d334-20e7-491f-8027-194251398747-kube-api-access-bfzdk\") pod \"dns-operator-9c5679d8f-b9pn7\" (UID: \"e025d334-20e7-491f-8027-194251398747\") " pod="openshift-dns-operator/dns-operator-9c5679d8f-b9pn7" Mar 18 08:48:46.408959 master-0 kubenswrapper[3986]: I0318 08:48:46.408951 3986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/94e9e4fc-bfaa-4ba5-ad6d-fe76b91932e9-metrics-tls\") pod \"ingress-operator-66b84d69b-7h94d\" (UID: \"94e9e4fc-bfaa-4ba5-ad6d-fe76b91932e9\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-7h94d" Mar 18 08:48:46.409034 master-0 kubenswrapper[3986]: I0318 08:48:46.408972 3986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/260c8aa5-a288-4ee8-b671-f97e90a2f39c-kube-api-access\") pod \"kube-controller-manager-operator-ff989d6cc-fxn82\" (UID: 
\"260c8aa5-a288-4ee8-b671-f97e90a2f39c\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-fxn82" Mar 18 08:48:46.409071 master-0 kubenswrapper[3986]: I0318 08:48:46.409042 3986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dfjmx\" (UniqueName: \"kubernetes.io/projected/772bc250-2e57-4ce0-883c-d44281fcb0be-kube-api-access-dfjmx\") pod \"openshift-controller-manager-operator-8c94f4649-r758j\" (UID: \"772bc250-2e57-4ce0-883c-d44281fcb0be\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-r758j" Mar 18 08:48:46.409185 master-0 kubenswrapper[3986]: I0318 08:48:46.409103 3986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8lsw9\" (UniqueName: \"kubernetes.io/projected/bd8ee9ae-4765-46bc-84d7-b5857fc3fb4a-kube-api-access-8lsw9\") pod \"cluster-node-tuning-operator-598fbc5f8f-tj9b9\" (UID: \"bd8ee9ae-4765-46bc-84d7-b5857fc3fb4a\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-tj9b9" Mar 18 08:48:46.409185 master-0 kubenswrapper[3986]: E0318 08:48:46.409051 3986 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found Mar 18 08:48:46.409185 master-0 kubenswrapper[3986]: E0318 08:48:46.409184 3986 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/94e9e4fc-bfaa-4ba5-ad6d-fe76b91932e9-metrics-tls podName:94e9e4fc-bfaa-4ba5-ad6d-fe76b91932e9 nodeName:}" failed. No retries permitted until 2026-03-18 08:48:46.909165637 +0000 UTC m=+158.316335719 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/94e9e4fc-bfaa-4ba5-ad6d-fe76b91932e9-metrics-tls") pod "ingress-operator-66b84d69b-7h94d" (UID: "94e9e4fc-bfaa-4ba5-ad6d-fe76b91932e9") : secret "metrics-tls" not found Mar 18 08:48:46.409371 master-0 kubenswrapper[3986]: I0318 08:48:46.409316 3986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-svdhs\" (UniqueName: \"kubernetes.io/projected/ec11012b-536a-422f-afc4-d2d0fd4b67fb-kube-api-access-svdhs\") pod \"kube-storage-version-migrator-operator-6bb5bfb6fd-s7szg\" (UID: \"ec11012b-536a-422f-afc4-d2d0fd4b67fb\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-s7szg" Mar 18 08:48:46.409371 master-0 kubenswrapper[3986]: I0318 08:48:46.409357 3986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n959l\" (UniqueName: \"kubernetes.io/projected/573d3a02-e395-4816-963a-cd614ef53f75-kube-api-access-n959l\") pod \"openshift-config-operator-95bf4f4d-7kfrh\" (UID: \"573d3a02-e395-4816-963a-cd614ef53f75\") " pod="openshift-config-operator/openshift-config-operator-95bf4f4d-7kfrh" Mar 18 08:48:46.409455 master-0 kubenswrapper[3986]: I0318 08:48:46.409386 3986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operand-assets\" (UniqueName: \"kubernetes.io/empty-dir/e2ade7e6-cecd-4e98-8f85-ea8219303d75-operand-assets\") pod \"cluster-olm-operator-67dcd4998-zqxv2\" (UID: \"e2ade7e6-cecd-4e98-8f85-ea8219303d75\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-zqxv2" Mar 18 08:48:46.409455 master-0 kubenswrapper[3986]: I0318 08:48:46.409412 3986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/260c8aa5-a288-4ee8-b671-f97e90a2f39c-config\") pod \"kube-controller-manager-operator-ff989d6cc-fxn82\" (UID: 
\"260c8aa5-a288-4ee8-b671-f97e90a2f39c\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-fxn82" Mar 18 08:48:46.409455 master-0 kubenswrapper[3986]: I0318 08:48:46.409436 3986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/939efa41-8f40-4f91-bee4-0425aead9760-etcd-ca\") pod \"etcd-operator-8544cbcf9c-f4jvq\" (UID: \"939efa41-8f40-4f91-bee4-0425aead9760\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-f4jvq" Mar 18 08:48:46.409559 master-0 kubenswrapper[3986]: I0318 08:48:46.409470 3986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7962fb40-1170-4c00-b1bf-92966aeae807-bound-sa-token\") pod \"cluster-image-registry-operator-5549dc66cb-vxsth\" (UID: \"7962fb40-1170-4c00-b1bf-92966aeae807\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-vxsth" Mar 18 08:48:46.409559 master-0 kubenswrapper[3986]: I0318 08:48:46.409499 3986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c110b293-2c6b-496b-b015-23aada98cb4b-trusted-ca-bundle\") pod \"authentication-operator-5885bfd7f4-5g8tz\" (UID: \"c110b293-2c6b-496b-b015-23aada98cb4b\") " pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-5g8tz" Mar 18 08:48:46.409559 master-0 kubenswrapper[3986]: I0318 08:48:46.409523 3986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5982111d-f4c6-4335-9b40-3142758fc2bc-config\") pod \"kube-apiserver-operator-8b68b9d9b-jshg7\" (UID: \"5982111d-f4c6-4335-9b40-3142758fc2bc\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-jshg7" Mar 18 08:48:46.409559 master-0 kubenswrapper[3986]: I0318 
08:48:46.409554 3986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/772bc250-2e57-4ce0-883c-d44281fcb0be-serving-cert\") pod \"openshift-controller-manager-operator-8c94f4649-r758j\" (UID: \"772bc250-2e57-4ce0-883c-d44281fcb0be\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-r758j" Mar 18 08:48:46.409694 master-0 kubenswrapper[3986]: I0318 08:48:46.409583 3986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mlp7w\" (UniqueName: \"kubernetes.io/projected/59d50dd5-6793-4f96-a769-31e086ecc7e4-kube-api-access-mlp7w\") pod \"package-server-manager-7b95f86987-q8ff6\" (UID: \"59d50dd5-6793-4f96-a769-31e086ecc7e4\") " pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-q8ff6" Mar 18 08:48:46.409694 master-0 kubenswrapper[3986]: I0318 08:48:46.409611 3986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/e025d334-20e7-491f-8027-194251398747-metrics-tls\") pod \"dns-operator-9c5679d8f-b9pn7\" (UID: \"e025d334-20e7-491f-8027-194251398747\") " pod="openshift-dns-operator/dns-operator-9c5679d8f-b9pn7" Mar 18 08:48:46.410665 master-0 kubenswrapper[3986]: I0318 08:48:46.410626 3986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operand-assets\" (UniqueName: \"kubernetes.io/empty-dir/e2ade7e6-cecd-4e98-8f85-ea8219303d75-operand-assets\") pod \"cluster-olm-operator-67dcd4998-zqxv2\" (UID: \"e2ade7e6-cecd-4e98-8f85-ea8219303d75\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-zqxv2" Mar 18 08:48:46.411415 master-0 kubenswrapper[3986]: I0318 08:48:46.411372 3986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bd8ee9ae-4765-46bc-84d7-b5857fc3fb4a-trusted-ca\") pod 
\"cluster-node-tuning-operator-598fbc5f8f-tj9b9\" (UID: \"bd8ee9ae-4765-46bc-84d7-b5857fc3fb4a\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-tj9b9" Mar 18 08:48:46.411723 master-0 kubenswrapper[3986]: E0318 08:48:46.411608 3986 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found Mar 18 08:48:46.411723 master-0 kubenswrapper[3986]: E0318 08:48:46.411664 3986 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e025d334-20e7-491f-8027-194251398747-metrics-tls podName:e025d334-20e7-491f-8027-194251398747 nodeName:}" failed. No retries permitted until 2026-03-18 08:48:46.911647262 +0000 UTC m=+158.318817344 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/e025d334-20e7-491f-8027-194251398747-metrics-tls") pod "dns-operator-9c5679d8f-b9pn7" (UID: "e025d334-20e7-491f-8027-194251398747") : secret "metrics-tls" not found Mar 18 08:48:46.411723 master-0 kubenswrapper[3986]: I0318 08:48:46.411717 3986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/573d3a02-e395-4816-963a-cd614ef53f75-available-featuregates\") pod \"openshift-config-operator-95bf4f4d-7kfrh\" (UID: \"573d3a02-e395-4816-963a-cd614ef53f75\") " pod="openshift-config-operator/openshift-config-operator-95bf4f4d-7kfrh" Mar 18 08:48:46.411875 master-0 kubenswrapper[3986]: I0318 08:48:46.411775 3986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/7962fb40-1170-4c00-b1bf-92966aeae807-image-registry-operator-tls\") pod \"cluster-image-registry-operator-5549dc66cb-vxsth\" (UID: \"7962fb40-1170-4c00-b1bf-92966aeae807\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-vxsth" Mar 18 08:48:46.411875 master-0 
kubenswrapper[3986]: I0318 08:48:46.411809 3986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/34f2ec5e-cb68-415b-a9f2-5b7f10fa9bbe-marketplace-trusted-ca\") pod \"marketplace-operator-89ccd998f-bcwsv\" (UID: \"34f2ec5e-cb68-415b-a9f2-5b7f10fa9bbe\") " pod="openshift-marketplace/marketplace-operator-89ccd998f-bcwsv" Mar 18 08:48:46.411875 master-0 kubenswrapper[3986]: I0318 08:48:46.411848 3986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/939efa41-8f40-4f91-bee4-0425aead9760-etcd-client\") pod \"etcd-operator-8544cbcf9c-f4jvq\" (UID: \"939efa41-8f40-4f91-bee4-0425aead9760\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-f4jvq" Mar 18 08:48:46.411996 master-0 kubenswrapper[3986]: I0318 08:48:46.411889 3986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b065df33-7911-456e-b3a2-1f8c8d53e053-srv-cert\") pod \"catalog-operator-68f85b4d6c-swdsh\" (UID: \"b065df33-7911-456e-b3a2-1f8c8d53e053\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-swdsh" Mar 18 08:48:46.411996 master-0 kubenswrapper[3986]: I0318 08:48:46.411920 3986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2msp8\" (UniqueName: \"kubernetes.io/projected/34f2ec5e-cb68-415b-a9f2-5b7f10fa9bbe-kube-api-access-2msp8\") pod \"marketplace-operator-89ccd998f-bcwsv\" (UID: \"34f2ec5e-cb68-415b-a9f2-5b7f10fa9bbe\") " pod="openshift-marketplace/marketplace-operator-89ccd998f-bcwsv" Mar 18 08:48:46.411996 master-0 kubenswrapper[3986]: I0318 08:48:46.411952 3986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/bd8ee9ae-4765-46bc-84d7-b5857fc3fb4a-apiservice-cert\") pod 
\"cluster-node-tuning-operator-598fbc5f8f-tj9b9\" (UID: \"bd8ee9ae-4765-46bc-84d7-b5857fc3fb4a\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-tj9b9" Mar 18 08:48:46.411996 master-0 kubenswrapper[3986]: I0318 08:48:46.411975 3986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b0280499-8277-46f0-bd8c-058a47a99e19-config\") pod \"service-ca-operator-b865698dc-g2lc8\" (UID: \"b0280499-8277-46f0-bd8c-058a47a99e19\") " pod="openshift-service-ca-operator/service-ca-operator-b865698dc-g2lc8" Mar 18 08:48:46.412142 master-0 kubenswrapper[3986]: I0318 08:48:46.412000 3986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4hn9w\" (UniqueName: \"kubernetes.io/projected/3d9fe248-ba87-47e3-911a-1b2b112b5683-kube-api-access-4hn9w\") pod \"olm-operator-5c9796789-sl5kr\" (UID: \"3d9fe248-ba87-47e3-911a-1b2b112b5683\") " pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-sl5kr" Mar 18 08:48:46.412142 master-0 kubenswrapper[3986]: I0318 08:48:46.412027 3986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8a6ab2be-d018-4fd5-bfbb-6b88aec28663-serving-cert\") pod \"openshift-kube-scheduler-operator-dddff6458-9p4bb\" (UID: \"8a6ab2be-d018-4fd5-bfbb-6b88aec28663\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-9p4bb" Mar 18 08:48:46.412142 master-0 kubenswrapper[3986]: I0318 08:48:46.412052 3986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec11012b-536a-422f-afc4-d2d0fd4b67fb-config\") pod \"kube-storage-version-migrator-operator-6bb5bfb6fd-s7szg\" (UID: \"ec11012b-536a-422f-afc4-d2d0fd4b67fb\") " 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-s7szg" Mar 18 08:48:46.412142 master-0 kubenswrapper[3986]: I0318 08:48:46.412083 3986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ec11012b-536a-422f-afc4-d2d0fd4b67fb-serving-cert\") pod \"kube-storage-version-migrator-operator-6bb5bfb6fd-s7szg\" (UID: \"ec11012b-536a-422f-afc4-d2d0fd4b67fb\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-s7szg" Mar 18 08:48:46.412810 master-0 kubenswrapper[3986]: I0318 08:48:46.412770 3986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wxxcn\" (UniqueName: \"kubernetes.io/projected/6fb1f871-9c24-48a1-a15a-a636b5bb687d-kube-api-access-wxxcn\") pod \"csi-snapshot-controller-operator-5f5d689c6b-j8kgj\" (UID: \"6fb1f871-9c24-48a1-a15a-a636b5bb687d\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-5f5d689c6b-j8kgj" Mar 18 08:48:46.412897 master-0 kubenswrapper[3986]: I0318 08:48:46.412839 3986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/939efa41-8f40-4f91-bee4-0425aead9760-serving-cert\") pod \"etcd-operator-8544cbcf9c-f4jvq\" (UID: \"939efa41-8f40-4f91-bee4-0425aead9760\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-f4jvq" Mar 18 08:48:46.414380 master-0 kubenswrapper[3986]: I0318 08:48:46.413912 3986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-olm-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/e2ade7e6-cecd-4e98-8f85-ea8219303d75-cluster-olm-operator-serving-cert\") pod \"cluster-olm-operator-67dcd4998-zqxv2\" (UID: \"e2ade7e6-cecd-4e98-8f85-ea8219303d75\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-zqxv2" Mar 18 08:48:46.414380 master-0 
kubenswrapper[3986]: I0318 08:48:46.413976 3986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/94e9e4fc-bfaa-4ba5-ad6d-fe76b91932e9-bound-sa-token\") pod \"ingress-operator-66b84d69b-7h94d\" (UID: \"94e9e4fc-bfaa-4ba5-ad6d-fe76b91932e9\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-7h94d" Mar 18 08:48:46.414380 master-0 kubenswrapper[3986]: I0318 08:48:46.414031 3986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dwrdc\" (UniqueName: \"kubernetes.io/projected/e7b72267-fc08-41ed-a92b-9fca7372aba6-kube-api-access-dwrdc\") pod \"cluster-monitoring-operator-58845fbb57-nc7hf\" (UID: \"e7b72267-fc08-41ed-a92b-9fca7372aba6\") " pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-nc7hf" Mar 18 08:48:46.414380 master-0 kubenswrapper[3986]: I0318 08:48:46.414089 3986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8a6ab2be-d018-4fd5-bfbb-6b88aec28663-config\") pod \"openshift-kube-scheduler-operator-dddff6458-9p4bb\" (UID: \"8a6ab2be-d018-4fd5-bfbb-6b88aec28663\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-9p4bb" Mar 18 08:48:46.414380 master-0 kubenswrapper[3986]: I0318 08:48:46.414115 3986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pz26d\" (UniqueName: \"kubernetes.io/projected/b065df33-7911-456e-b3a2-1f8c8d53e053-kube-api-access-pz26d\") pod \"catalog-operator-68f85b4d6c-swdsh\" (UID: \"b065df33-7911-456e-b3a2-1f8c8d53e053\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-swdsh" Mar 18 08:48:46.414380 master-0 kubenswrapper[3986]: I0318 08:48:46.414146 3986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/8a6ab2be-d018-4fd5-bfbb-6b88aec28663-kube-api-access\") pod \"openshift-kube-scheduler-operator-dddff6458-9p4bb\" (UID: \"8a6ab2be-d018-4fd5-bfbb-6b88aec28663\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-9p4bb"
Mar 18 08:48:46.414380 master-0 kubenswrapper[3986]: I0318 08:48:46.414169 3986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/772bc250-2e57-4ce0-883c-d44281fcb0be-config\") pod \"openshift-controller-manager-operator-8c94f4649-r758j\" (UID: \"772bc250-2e57-4ce0-883c-d44281fcb0be\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-r758j"
Mar 18 08:48:46.414380 master-0 kubenswrapper[3986]: I0318 08:48:46.414197 3986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/e7b72267-fc08-41ed-a92b-9fca7372aba6-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-58845fbb57-nc7hf\" (UID: \"e7b72267-fc08-41ed-a92b-9fca7372aba6\") " pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-nc7hf"
Mar 18 08:48:46.414380 master-0 kubenswrapper[3986]: I0318 08:48:46.414226 3986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/94e9e4fc-bfaa-4ba5-ad6d-fe76b91932e9-trusted-ca\") pod \"ingress-operator-66b84d69b-7h94d\" (UID: \"94e9e4fc-bfaa-4ba5-ad6d-fe76b91932e9\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-7h94d"
Mar 18 08:48:46.414380 master-0 kubenswrapper[3986]: I0318 08:48:46.414249 3986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/159a26f5-3cfc-4db2-88e9-bff5d8a613fc-webhook-certs\") pod \"multus-admission-controller-5dbbb8b86f-2cf64\" (UID: \"159a26f5-3cfc-4db2-88e9-bff5d8a613fc\") " pod="openshift-multus/multus-admission-controller-5dbbb8b86f-2cf64"
Mar 18 08:48:46.414380 master-0 kubenswrapper[3986]: I0318 08:48:46.414276 3986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-47p9x\" (UniqueName: \"kubernetes.io/projected/7962fb40-1170-4c00-b1bf-92966aeae807-kube-api-access-47p9x\") pod \"cluster-image-registry-operator-5549dc66cb-vxsth\" (UID: \"7962fb40-1170-4c00-b1bf-92966aeae807\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-vxsth"
Mar 18 08:48:46.414963 master-0 kubenswrapper[3986]: I0318 08:48:46.414940 3986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec11012b-536a-422f-afc4-d2d0fd4b67fb-config\") pod \"kube-storage-version-migrator-operator-6bb5bfb6fd-s7szg\" (UID: \"ec11012b-536a-422f-afc4-d2d0fd4b67fb\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-s7szg"
Mar 18 08:48:46.415058 master-0 kubenswrapper[3986]: E0318 08:48:46.413170 3986 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: secret "catalog-operator-serving-cert" not found
Mar 18 08:48:46.415110 master-0 kubenswrapper[3986]: E0318 08:48:46.415093 3986 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b065df33-7911-456e-b3a2-1f8c8d53e053-srv-cert podName:b065df33-7911-456e-b3a2-1f8c8d53e053 nodeName:}" failed. No retries permitted until 2026-03-18 08:48:46.915077404 +0000 UTC m=+158.322247486 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/b065df33-7911-456e-b3a2-1f8c8d53e053-srv-cert") pod "catalog-operator-68f85b4d6c-swdsh" (UID: "b065df33-7911-456e-b3a2-1f8c8d53e053") : secret "catalog-operator-serving-cert" not found
Mar 18 08:48:46.415110 master-0 kubenswrapper[3986]: I0318 08:48:46.412971 3986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/573d3a02-e395-4816-963a-cd614ef53f75-serving-cert\") pod \"openshift-config-operator-95bf4f4d-7kfrh\" (UID: \"573d3a02-e395-4816-963a-cd614ef53f75\") " pod="openshift-config-operator/openshift-config-operator-95bf4f4d-7kfrh"
Mar 18 08:48:46.415216 master-0 kubenswrapper[3986]: I0318 08:48:46.414948 3986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/260c8aa5-a288-4ee8-b671-f97e90a2f39c-config\") pod \"kube-controller-manager-operator-ff989d6cc-fxn82\" (UID: \"260c8aa5-a288-4ee8-b671-f97e90a2f39c\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-fxn82"
Mar 18 08:48:46.415506 master-0 kubenswrapper[3986]: I0318 08:48:46.415462 3986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/772bc250-2e57-4ce0-883c-d44281fcb0be-serving-cert\") pod \"openshift-controller-manager-operator-8c94f4649-r758j\" (UID: \"772bc250-2e57-4ce0-883c-d44281fcb0be\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-r758j"
Mar 18 08:48:46.415708 master-0 kubenswrapper[3986]: I0318 08:48:46.415646 3986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8a6ab2be-d018-4fd5-bfbb-6b88aec28663-config\") pod \"openshift-kube-scheduler-operator-dddff6458-9p4bb\" (UID: \"8a6ab2be-d018-4fd5-bfbb-6b88aec28663\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-9p4bb"
Mar 18 08:48:46.415910 master-0 kubenswrapper[3986]: E0318 08:48:46.415881 3986 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found
Mar 18 08:48:46.417114 master-0 kubenswrapper[3986]: E0318 08:48:46.415934 3986 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7962fb40-1170-4c00-b1bf-92966aeae807-image-registry-operator-tls podName:7962fb40-1170-4c00-b1bf-92966aeae807 nodeName:}" failed. No retries permitted until 2026-03-18 08:48:46.915918429 +0000 UTC m=+158.323088631 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/7962fb40-1170-4c00-b1bf-92966aeae807-image-registry-operator-tls") pod "cluster-image-registry-operator-5549dc66cb-vxsth" (UID: "7962fb40-1170-4c00-b1bf-92966aeae807") : secret "image-registry-operator-tls" not found
Mar 18 08:48:46.417114 master-0 kubenswrapper[3986]: I0318 08:48:46.416249 3986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/772bc250-2e57-4ce0-883c-d44281fcb0be-config\") pod \"openshift-controller-manager-operator-8c94f4649-r758j\" (UID: \"772bc250-2e57-4ce0-883c-d44281fcb0be\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-r758j"
Mar 18 08:48:46.417114 master-0 kubenswrapper[3986]: I0318 08:48:46.416278 3986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fcf89a76-7a94-46d3-853e-68e986563764-serving-cert\") pod \"openshift-apiserver-operator-d65958b8-w4t7x\" (UID: \"fcf89a76-7a94-46d3-853e-68e986563764\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-w4t7x"
Mar 18 08:48:46.417114 master-0 kubenswrapper[3986]: E0318 08:48:46.416432 3986 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found
Mar 18 08:48:46.417114 master-0 kubenswrapper[3986]: E0318 08:48:46.416473 3986 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bd8ee9ae-4765-46bc-84d7-b5857fc3fb4a-apiservice-cert podName:bd8ee9ae-4765-46bc-84d7-b5857fc3fb4a nodeName:}" failed. No retries permitted until 2026-03-18 08:48:46.916458995 +0000 UTC m=+158.323629177 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/bd8ee9ae-4765-46bc-84d7-b5857fc3fb4a-apiservice-cert") pod "cluster-node-tuning-operator-598fbc5f8f-tj9b9" (UID: "bd8ee9ae-4765-46bc-84d7-b5857fc3fb4a") : secret "performance-addon-operator-webhook-cert" not found
Mar 18 08:48:46.417114 master-0 kubenswrapper[3986]: I0318 08:48:46.416823 3986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/573d3a02-e395-4816-963a-cd614ef53f75-available-featuregates\") pod \"openshift-config-operator-95bf4f4d-7kfrh\" (UID: \"573d3a02-e395-4816-963a-cd614ef53f75\") " pod="openshift-config-operator/openshift-config-operator-95bf4f4d-7kfrh"
Mar 18 08:48:46.417114 master-0 kubenswrapper[3986]: I0318 08:48:46.417015 3986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/94e9e4fc-bfaa-4ba5-ad6d-fe76b91932e9-trusted-ca\") pod \"ingress-operator-66b84d69b-7h94d\" (UID: \"94e9e4fc-bfaa-4ba5-ad6d-fe76b91932e9\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-7h94d"
Mar 18 08:48:46.417431 master-0 kubenswrapper[3986]: I0318 08:48:46.417204 3986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/939efa41-8f40-4f91-bee4-0425aead9760-etcd-service-ca\") pod \"etcd-operator-8544cbcf9c-f4jvq\" (UID: \"939efa41-8f40-4f91-bee4-0425aead9760\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-f4jvq"
Mar 18 08:48:46.417431 master-0 kubenswrapper[3986]: I0318 08:48:46.417350 3986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/939efa41-8f40-4f91-bee4-0425aead9760-etcd-ca\") pod \"etcd-operator-8544cbcf9c-f4jvq\" (UID: \"939efa41-8f40-4f91-bee4-0425aead9760\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-f4jvq"
Mar 18 08:48:46.417431 master-0 kubenswrapper[3986]: I0318 08:48:46.417412 3986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/939efa41-8f40-4f91-bee4-0425aead9760-config\") pod \"etcd-operator-8544cbcf9c-f4jvq\" (UID: \"939efa41-8f40-4f91-bee4-0425aead9760\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-f4jvq"
Mar 18 08:48:46.418981 master-0 kubenswrapper[3986]: I0318 08:48:46.418280 3986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-olm-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/e2ade7e6-cecd-4e98-8f85-ea8219303d75-cluster-olm-operator-serving-cert\") pod \"cluster-olm-operator-67dcd4998-zqxv2\" (UID: \"e2ade7e6-cecd-4e98-8f85-ea8219303d75\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-zqxv2"
Mar 18 08:48:46.418981 master-0 kubenswrapper[3986]: I0318 08:48:46.418913 3986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/939efa41-8f40-4f91-bee4-0425aead9760-serving-cert\") pod \"etcd-operator-8544cbcf9c-f4jvq\" (UID: \"939efa41-8f40-4f91-bee4-0425aead9760\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-f4jvq"
Mar 18 08:48:46.419502 master-0 kubenswrapper[3986]: I0318 08:48:46.419474 3986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8a6ab2be-d018-4fd5-bfbb-6b88aec28663-serving-cert\") pod \"openshift-kube-scheduler-operator-dddff6458-9p4bb\" (UID: \"8a6ab2be-d018-4fd5-bfbb-6b88aec28663\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-9p4bb"
Mar 18 08:48:46.420132 master-0 kubenswrapper[3986]: I0318 08:48:46.420107 3986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/939efa41-8f40-4f91-bee4-0425aead9760-etcd-client\") pod \"etcd-operator-8544cbcf9c-f4jvq\" (UID: \"939efa41-8f40-4f91-bee4-0425aead9760\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-f4jvq"
Mar 18 08:48:46.435875 master-0 kubenswrapper[3986]: I0318 08:48:46.421651 3986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/260c8aa5-a288-4ee8-b671-f97e90a2f39c-serving-cert\") pod \"kube-controller-manager-operator-ff989d6cc-fxn82\" (UID: \"260c8aa5-a288-4ee8-b671-f97e90a2f39c\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-fxn82"
Mar 18 08:48:46.435875 master-0 kubenswrapper[3986]: I0318 08:48:46.425242 3986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tk9jq\" (UniqueName: \"kubernetes.io/projected/94e9e4fc-bfaa-4ba5-ad6d-fe76b91932e9-kube-api-access-tk9jq\") pod \"ingress-operator-66b84d69b-7h94d\" (UID: \"94e9e4fc-bfaa-4ba5-ad6d-fe76b91932e9\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-7h94d"
Mar 18 08:48:46.435875 master-0 kubenswrapper[3986]: I0318 08:48:46.426551 3986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-8b7l7"
Mar 18 08:48:46.435875 master-0 kubenswrapper[3986]: I0318 08:48:46.426904 3986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6x85n"
Mar 18 08:48:46.435875 master-0 kubenswrapper[3986]: I0318 08:48:46.426938 3986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vfjgn\" (UniqueName: \"kubernetes.io/projected/e2ade7e6-cecd-4e98-8f85-ea8219303d75-kube-api-access-vfjgn\") pod \"cluster-olm-operator-67dcd4998-zqxv2\" (UID: \"e2ade7e6-cecd-4e98-8f85-ea8219303d75\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-zqxv2"
Mar 18 08:48:46.445763 master-0 kubenswrapper[3986]: I0318 08:48:46.444900 3986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8lsw9\" (UniqueName: \"kubernetes.io/projected/bd8ee9ae-4765-46bc-84d7-b5857fc3fb4a-kube-api-access-8lsw9\") pod \"cluster-node-tuning-operator-598fbc5f8f-tj9b9\" (UID: \"bd8ee9ae-4765-46bc-84d7-b5857fc3fb4a\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-tj9b9"
Mar 18 08:48:46.445763 master-0 kubenswrapper[3986]: I0318 08:48:46.444983 3986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wxxcn\" (UniqueName: \"kubernetes.io/projected/6fb1f871-9c24-48a1-a15a-a636b5bb687d-kube-api-access-wxxcn\") pod \"csi-snapshot-controller-operator-5f5d689c6b-j8kgj\" (UID: \"6fb1f871-9c24-48a1-a15a-a636b5bb687d\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-5f5d689c6b-j8kgj"
Mar 18 08:48:46.445763 master-0 kubenswrapper[3986]: I0318 08:48:46.445160 3986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bfzdk\" (UniqueName: \"kubernetes.io/projected/e025d334-20e7-491f-8027-194251398747-kube-api-access-bfzdk\") pod \"dns-operator-9c5679d8f-b9pn7\" (UID: \"e025d334-20e7-491f-8027-194251398747\") " pod="openshift-dns-operator/dns-operator-9c5679d8f-b9pn7"
Mar 18 08:48:46.445763 master-0 kubenswrapper[3986]: I0318 08:48:46.445489 3986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mlp7w\" (UniqueName: \"kubernetes.io/projected/59d50dd5-6793-4f96-a769-31e086ecc7e4-kube-api-access-mlp7w\") pod \"package-server-manager-7b95f86987-q8ff6\" (UID: \"59d50dd5-6793-4f96-a769-31e086ecc7e4\") " pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-q8ff6"
Mar 18 08:48:46.447387 master-0 kubenswrapper[3986]: I0318 08:48:46.446408 3986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s8prf\" (UniqueName: \"kubernetes.io/projected/fcf89a76-7a94-46d3-853e-68e986563764-kube-api-access-s8prf\") pod \"openshift-apiserver-operator-d65958b8-w4t7x\" (UID: \"fcf89a76-7a94-46d3-853e-68e986563764\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-w4t7x"
Mar 18 08:48:46.447387 master-0 kubenswrapper[3986]: I0318 08:48:46.446774 3986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pz26d\" (UniqueName: \"kubernetes.io/projected/b065df33-7911-456e-b3a2-1f8c8d53e053-kube-api-access-pz26d\") pod \"catalog-operator-68f85b4d6c-swdsh\" (UID: \"b065df33-7911-456e-b3a2-1f8c8d53e053\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-swdsh"
Mar 18 08:48:46.447387 master-0 kubenswrapper[3986]: I0318 08:48:46.446807 3986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt"
Mar 18 08:48:46.447387 master-0 kubenswrapper[3986]: I0318 08:48:46.446940 3986 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt"
Mar 18 08:48:46.447531 master-0 kubenswrapper[3986]: I0318 08:48:46.447462 3986 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret"
Mar 18 08:48:46.447957 master-0 kubenswrapper[3986]: I0318 08:48:46.447829 3986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8w58l\" (UniqueName: \"kubernetes.io/projected/939efa41-8f40-4f91-bee4-0425aead9760-kube-api-access-8w58l\") pod \"etcd-operator-8544cbcf9c-f4jvq\" (UID: \"939efa41-8f40-4f91-bee4-0425aead9760\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-f4jvq"
Mar 18 08:48:46.447957 master-0 kubenswrapper[3986]: I0318 08:48:46.447930 3986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7962fb40-1170-4c00-b1bf-92966aeae807-bound-sa-token\") pod \"cluster-image-registry-operator-5549dc66cb-vxsth\" (UID: \"7962fb40-1170-4c00-b1bf-92966aeae807\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-vxsth"
Mar 18 08:48:46.449806 master-0 kubenswrapper[3986]: I0318 08:48:46.449684 3986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dfjmx\" (UniqueName: \"kubernetes.io/projected/772bc250-2e57-4ce0-883c-d44281fcb0be-kube-api-access-dfjmx\") pod \"openshift-controller-manager-operator-8c94f4649-r758j\" (UID: \"772bc250-2e57-4ce0-883c-d44281fcb0be\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-r758j"
Mar 18 08:48:46.449806 master-0 kubenswrapper[3986]: I0318 08:48:46.449772 3986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-svdhs\" (UniqueName: \"kubernetes.io/projected/ec11012b-536a-422f-afc4-d2d0fd4b67fb-kube-api-access-svdhs\") pod \"kube-storage-version-migrator-operator-6bb5bfb6fd-s7szg\" (UID: \"ec11012b-536a-422f-afc4-d2d0fd4b67fb\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-s7szg"
Mar 18 08:48:46.450245 master-0 kubenswrapper[3986]: I0318 08:48:46.450199 3986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-47p9x\" (UniqueName: \"kubernetes.io/projected/7962fb40-1170-4c00-b1bf-92966aeae807-kube-api-access-47p9x\") pod \"cluster-image-registry-operator-5549dc66cb-vxsth\" (UID: \"7962fb40-1170-4c00-b1bf-92966aeae807\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-vxsth"
Mar 18 08:48:46.450245 master-0 kubenswrapper[3986]: I0318 08:48:46.450232 3986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8a6ab2be-d018-4fd5-bfbb-6b88aec28663-kube-api-access\") pod \"openshift-kube-scheduler-operator-dddff6458-9p4bb\" (UID: \"8a6ab2be-d018-4fd5-bfbb-6b88aec28663\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-9p4bb"
Mar 18 08:48:46.451282 master-0 kubenswrapper[3986]: I0318 08:48:46.451266 3986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/260c8aa5-a288-4ee8-b671-f97e90a2f39c-kube-api-access\") pod \"kube-controller-manager-operator-ff989d6cc-fxn82\" (UID: \"260c8aa5-a288-4ee8-b671-f97e90a2f39c\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-fxn82"
Mar 18 08:48:46.451395 master-0 kubenswrapper[3986]: I0318 08:48:46.451342 3986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/94e9e4fc-bfaa-4ba5-ad6d-fe76b91932e9-bound-sa-token\") pod \"ingress-operator-66b84d69b-7h94d\" (UID: \"94e9e4fc-bfaa-4ba5-ad6d-fe76b91932e9\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-7h94d"
Mar 18 08:48:46.452605 master-0 kubenswrapper[3986]: I0318 08:48:46.452564 3986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n959l\" (UniqueName: \"kubernetes.io/projected/573d3a02-e395-4816-963a-cd614ef53f75-kube-api-access-n959l\") pod \"openshift-config-operator-95bf4f4d-7kfrh\" (UID: \"573d3a02-e395-4816-963a-cd614ef53f75\") " pod="openshift-config-operator/openshift-config-operator-95bf4f4d-7kfrh"
Mar 18 08:48:46.481650 master-0 kubenswrapper[3986]: I0318 08:48:46.481618 3986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-zqxv2"
Mar 18 08:48:46.515011 master-0 kubenswrapper[3986]: I0318 08:48:46.514778 3986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b0280499-8277-46f0-bd8c-058a47a99e19-config\") pod \"service-ca-operator-b865698dc-g2lc8\" (UID: \"b0280499-8277-46f0-bd8c-058a47a99e19\") " pod="openshift-service-ca-operator/service-ca-operator-b865698dc-g2lc8"
Mar 18 08:48:46.515011 master-0 kubenswrapper[3986]: I0318 08:48:46.514809 3986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4hn9w\" (UniqueName: \"kubernetes.io/projected/3d9fe248-ba87-47e3-911a-1b2b112b5683-kube-api-access-4hn9w\") pod \"olm-operator-5c9796789-sl5kr\" (UID: \"3d9fe248-ba87-47e3-911a-1b2b112b5683\") " pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-sl5kr"
Mar 18 08:48:46.515011 master-0 kubenswrapper[3986]: I0318 08:48:46.514830 3986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dwrdc\" (UniqueName: \"kubernetes.io/projected/e7b72267-fc08-41ed-a92b-9fca7372aba6-kube-api-access-dwrdc\") pod \"cluster-monitoring-operator-58845fbb57-nc7hf\" (UID: \"e7b72267-fc08-41ed-a92b-9fca7372aba6\") " pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-nc7hf"
Mar 18 08:48:46.515011 master-0 kubenswrapper[3986]: I0318 08:48:46.514867 3986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/e7b72267-fc08-41ed-a92b-9fca7372aba6-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-58845fbb57-nc7hf\" (UID: \"e7b72267-fc08-41ed-a92b-9fca7372aba6\") " pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-nc7hf"
Mar 18 08:48:46.515011 master-0 kubenswrapper[3986]: I0318 08:48:46.514905 3986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/159a26f5-3cfc-4db2-88e9-bff5d8a613fc-webhook-certs\") pod \"multus-admission-controller-5dbbb8b86f-2cf64\" (UID: \"159a26f5-3cfc-4db2-88e9-bff5d8a613fc\") " pod="openshift-multus/multus-admission-controller-5dbbb8b86f-2cf64"
Mar 18 08:48:46.515011 master-0 kubenswrapper[3986]: I0318 08:48:46.514937 3986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/866c259c-7661-4a80-873b-6fd625218665-host-slash\") pod \"iptables-alerter-9mkgd\" (UID: \"866c259c-7661-4a80-873b-6fd625218665\") " pod="openshift-network-operator/iptables-alerter-9mkgd"
Mar 18 08:48:46.515011 master-0 kubenswrapper[3986]: E0318 08:48:46.515037 3986 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found
Mar 18 08:48:46.516308 master-0 kubenswrapper[3986]: E0318 08:48:46.515051 3986 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found
Mar 18 08:48:46.516308 master-0 kubenswrapper[3986]: E0318 08:48:46.515078 3986 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/159a26f5-3cfc-4db2-88e9-bff5d8a613fc-webhook-certs podName:159a26f5-3cfc-4db2-88e9-bff5d8a613fc nodeName:}" failed. No retries permitted until 2026-03-18 08:48:47.015064356 +0000 UTC m=+158.422234438 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/159a26f5-3cfc-4db2-88e9-bff5d8a613fc-webhook-certs") pod "multus-admission-controller-5dbbb8b86f-2cf64" (UID: "159a26f5-3cfc-4db2-88e9-bff5d8a613fc") : secret "multus-admission-controller-secret" not found
Mar 18 08:48:46.516308 master-0 kubenswrapper[3986]: I0318 08:48:46.515535 3986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lw27k\" (UniqueName: \"kubernetes.io/projected/c110b293-2c6b-496b-b015-23aada98cb4b-kube-api-access-lw27k\") pod \"authentication-operator-5885bfd7f4-5g8tz\" (UID: \"c110b293-2c6b-496b-b015-23aada98cb4b\") " pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-5g8tz"
Mar 18 08:48:46.516308 master-0 kubenswrapper[3986]: I0318 08:48:46.515606 3986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/3d9fe248-ba87-47e3-911a-1b2b112b5683-srv-cert\") pod \"olm-operator-5c9796789-sl5kr\" (UID: \"3d9fe248-ba87-47e3-911a-1b2b112b5683\") " pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-sl5kr"
Mar 18 08:48:46.516308 master-0 kubenswrapper[3986]: I0318 08:48:46.515680 3986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dxvk7\" (UniqueName: \"kubernetes.io/projected/b0280499-8277-46f0-bd8c-058a47a99e19-kube-api-access-dxvk7\") pod \"service-ca-operator-b865698dc-g2lc8\" (UID: \"b0280499-8277-46f0-bd8c-058a47a99e19\") " pod="openshift-service-ca-operator/service-ca-operator-b865698dc-g2lc8"
Mar 18 08:48:46.516308 master-0 kubenswrapper[3986]: I0318 08:48:46.515787 3986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c110b293-2c6b-496b-b015-23aada98cb4b-service-ca-bundle\") pod \"authentication-operator-5885bfd7f4-5g8tz\" (UID: \"c110b293-2c6b-496b-b015-23aada98cb4b\") " pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-5g8tz"
Mar 18 08:48:46.516308 master-0 kubenswrapper[3986]: I0318 08:48:46.515871 3986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/34f2ec5e-cb68-415b-a9f2-5b7f10fa9bbe-marketplace-operator-metrics\") pod \"marketplace-operator-89ccd998f-bcwsv\" (UID: \"34f2ec5e-cb68-415b-a9f2-5b7f10fa9bbe\") " pod="openshift-marketplace/marketplace-operator-89ccd998f-bcwsv"
Mar 18 08:48:46.516308 master-0 kubenswrapper[3986]: I0318 08:48:46.515918 3986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c110b293-2c6b-496b-b015-23aada98cb4b-config\") pod \"authentication-operator-5885bfd7f4-5g8tz\" (UID: \"c110b293-2c6b-496b-b015-23aada98cb4b\") " pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-5g8tz"
Mar 18 08:48:46.516308 master-0 kubenswrapper[3986]: I0318 08:48:46.515942 3986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c110b293-2c6b-496b-b015-23aada98cb4b-serving-cert\") pod \"authentication-operator-5885bfd7f4-5g8tz\" (UID: \"c110b293-2c6b-496b-b015-23aada98cb4b\") " pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-5g8tz"
Mar 18 08:48:46.516308 master-0 kubenswrapper[3986]: I0318 08:48:46.515964 3986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5982111d-f4c6-4335-9b40-3142758fc2bc-serving-cert\") pod \"kube-apiserver-operator-8b68b9d9b-jshg7\" (UID: \"5982111d-f4c6-4335-9b40-3142758fc2bc\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-jshg7"
Mar 18 08:48:46.516308 master-0 kubenswrapper[3986]: I0318 08:48:46.516060 3986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/866c259c-7661-4a80-873b-6fd625218665-iptables-alerter-script\") pod \"iptables-alerter-9mkgd\" (UID: \"866c259c-7661-4a80-873b-6fd625218665\") " pod="openshift-network-operator/iptables-alerter-9mkgd"
Mar 18 08:48:46.516308 master-0 kubenswrapper[3986]: I0318 08:48:46.516155 3986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b0280499-8277-46f0-bd8c-058a47a99e19-serving-cert\") pod \"service-ca-operator-b865698dc-g2lc8\" (UID: \"b0280499-8277-46f0-bd8c-058a47a99e19\") " pod="openshift-service-ca-operator/service-ca-operator-b865698dc-g2lc8"
Mar 18 08:48:46.516308 master-0 kubenswrapper[3986]: I0318 08:48:46.516183 3986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9hxtz\" (UniqueName: \"kubernetes.io/projected/159a26f5-3cfc-4db2-88e9-bff5d8a613fc-kube-api-access-9hxtz\") pod \"multus-admission-controller-5dbbb8b86f-2cf64\" (UID: \"159a26f5-3cfc-4db2-88e9-bff5d8a613fc\") " pod="openshift-multus/multus-admission-controller-5dbbb8b86f-2cf64"
Mar 18 08:48:46.516308 master-0 kubenswrapper[3986]: I0318 08:48:46.516223 3986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/e7b72267-fc08-41ed-a92b-9fca7372aba6-telemetry-config\") pod \"cluster-monitoring-operator-58845fbb57-nc7hf\" (UID: \"e7b72267-fc08-41ed-a92b-9fca7372aba6\") " pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-nc7hf"
Mar 18 08:48:46.516308 master-0 kubenswrapper[3986]: I0318 08:48:46.516256 3986 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ftdvp\" (UniqueName: \"kubernetes.io/projected/866c259c-7661-4a80-873b-6fd625218665-kube-api-access-ftdvp\") pod \"iptables-alerter-9mkgd\" (UID: \"866c259c-7661-4a80-873b-6fd625218665\") " pod="openshift-network-operator/iptables-alerter-9mkgd"
Mar 18 08:48:46.517174 master-0 kubenswrapper[3986]: I0318 08:48:46.516299 3986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5982111d-f4c6-4335-9b40-3142758fc2bc-kube-api-access\") pod \"kube-apiserver-operator-8b68b9d9b-jshg7\" (UID: \"5982111d-f4c6-4335-9b40-3142758fc2bc\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-jshg7"
Mar 18 08:48:46.517174 master-0 kubenswrapper[3986]: I0318 08:48:46.516372 3986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c110b293-2c6b-496b-b015-23aada98cb4b-trusted-ca-bundle\") pod \"authentication-operator-5885bfd7f4-5g8tz\" (UID: \"c110b293-2c6b-496b-b015-23aada98cb4b\") " pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-5g8tz"
Mar 18 08:48:46.517174 master-0 kubenswrapper[3986]: I0318 08:48:46.516397 3986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5982111d-f4c6-4335-9b40-3142758fc2bc-config\") pod \"kube-apiserver-operator-8b68b9d9b-jshg7\" (UID: \"5982111d-f4c6-4335-9b40-3142758fc2bc\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-jshg7"
Mar 18 08:48:46.517174 master-0 kubenswrapper[3986]: I0318 08:48:46.516430 3986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/34f2ec5e-cb68-415b-a9f2-5b7f10fa9bbe-marketplace-trusted-ca\") pod \"marketplace-operator-89ccd998f-bcwsv\" (UID: \"34f2ec5e-cb68-415b-a9f2-5b7f10fa9bbe\") " pod="openshift-marketplace/marketplace-operator-89ccd998f-bcwsv"
Mar 18 08:48:46.517174 master-0 kubenswrapper[3986]: E0318 08:48:46.516784 3986 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found
Mar 18 08:48:46.517174 master-0 kubenswrapper[3986]: E0318 08:48:46.516846 3986 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/34f2ec5e-cb68-415b-a9f2-5b7f10fa9bbe-marketplace-operator-metrics podName:34f2ec5e-cb68-415b-a9f2-5b7f10fa9bbe nodeName:}" failed. No retries permitted until 2026-03-18 08:48:47.016819368 +0000 UTC m=+158.423989530 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/34f2ec5e-cb68-415b-a9f2-5b7f10fa9bbe-marketplace-operator-metrics") pod "marketplace-operator-89ccd998f-bcwsv" (UID: "34f2ec5e-cb68-415b-a9f2-5b7f10fa9bbe") : secret "marketplace-operator-metrics" not found
Mar 18 08:48:46.517174 master-0 kubenswrapper[3986]: I0318 08:48:46.516917 3986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2msp8\" (UniqueName: \"kubernetes.io/projected/34f2ec5e-cb68-415b-a9f2-5b7f10fa9bbe-kube-api-access-2msp8\") pod \"marketplace-operator-89ccd998f-bcwsv\" (UID: \"34f2ec5e-cb68-415b-a9f2-5b7f10fa9bbe\") " pod="openshift-marketplace/marketplace-operator-89ccd998f-bcwsv"
Mar 18 08:48:46.517174 master-0 kubenswrapper[3986]: E0318 08:48:46.516977 3986 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e7b72267-fc08-41ed-a92b-9fca7372aba6-cluster-monitoring-operator-tls podName:e7b72267-fc08-41ed-a92b-9fca7372aba6 nodeName:}" failed. No retries permitted until 2026-03-18 08:48:47.016955152 +0000 UTC m=+158.424125314 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/e7b72267-fc08-41ed-a92b-9fca7372aba6-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-58845fbb57-nc7hf" (UID: "e7b72267-fc08-41ed-a92b-9fca7372aba6") : secret "cluster-monitoring-operator-tls" not found
Mar 18 08:48:46.518062 master-0 kubenswrapper[3986]: E0318 08:48:46.518034 3986 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found
Mar 18 08:48:46.518345 master-0 kubenswrapper[3986]: E0318 08:48:46.518070 3986 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3d9fe248-ba87-47e3-911a-1b2b112b5683-srv-cert podName:3d9fe248-ba87-47e3-911a-1b2b112b5683 nodeName:}" failed. No retries permitted until 2026-03-18 08:48:47.018060395 +0000 UTC m=+158.425230477 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/3d9fe248-ba87-47e3-911a-1b2b112b5683-srv-cert") pod "olm-operator-5c9796789-sl5kr" (UID: "3d9fe248-ba87-47e3-911a-1b2b112b5683") : secret "olm-operator-serving-cert" not found
Mar 18 08:48:46.518906 master-0 kubenswrapper[3986]: I0318 08:48:46.518868 3986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c110b293-2c6b-496b-b015-23aada98cb4b-service-ca-bundle\") pod \"authentication-operator-5885bfd7f4-5g8tz\" (UID: \"c110b293-2c6b-496b-b015-23aada98cb4b\") " pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-5g8tz"
Mar 18 08:48:46.520938 master-0 kubenswrapper[3986]: I0318 08:48:46.519567 3986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5982111d-f4c6-4335-9b40-3142758fc2bc-config\") pod \"kube-apiserver-operator-8b68b9d9b-jshg7\" (UID: \"5982111d-f4c6-4335-9b40-3142758fc2bc\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-jshg7"
Mar 18 08:48:46.520938 master-0 kubenswrapper[3986]: I0318 08:48:46.519657 3986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c110b293-2c6b-496b-b015-23aada98cb4b-config\") pod \"authentication-operator-5885bfd7f4-5g8tz\" (UID: \"c110b293-2c6b-496b-b015-23aada98cb4b\") " pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-5g8tz"
Mar 18 08:48:46.520938 master-0 kubenswrapper[3986]: I0318 08:48:46.519658 3986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c110b293-2c6b-496b-b015-23aada98cb4b-trusted-ca-bundle\") pod \"authentication-operator-5885bfd7f4-5g8tz\" (UID: \"c110b293-2c6b-496b-b015-23aada98cb4b\") " pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-5g8tz"
Mar 18 08:48:46.520938 master-0 kubenswrapper[3986]: I0318 08:48:46.520267 3986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/34f2ec5e-cb68-415b-a9f2-5b7f10fa9bbe-marketplace-trusted-ca\") pod \"marketplace-operator-89ccd998f-bcwsv\" (UID: \"34f2ec5e-cb68-415b-a9f2-5b7f10fa9bbe\") " pod="openshift-marketplace/marketplace-operator-89ccd998f-bcwsv"
Mar 18 08:48:46.520938 master-0 kubenswrapper[3986]: I0318 08:48:46.520819 3986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b0280499-8277-46f0-bd8c-058a47a99e19-config\") pod \"service-ca-operator-b865698dc-g2lc8\" (UID: \"b0280499-8277-46f0-bd8c-058a47a99e19\") " pod="openshift-service-ca-operator/service-ca-operator-b865698dc-g2lc8"
Mar 18 08:48:46.521483 master-0 kubenswrapper[3986]: I0318 08:48:46.521437 3986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/e7b72267-fc08-41ed-a92b-9fca7372aba6-telemetry-config\") pod \"cluster-monitoring-operator-58845fbb57-nc7hf\" (UID: \"e7b72267-fc08-41ed-a92b-9fca7372aba6\") " pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-nc7hf"
Mar 18 08:48:46.522363 master-0 kubenswrapper[3986]: I0318 08:48:46.522313 3986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5982111d-f4c6-4335-9b40-3142758fc2bc-serving-cert\") pod \"kube-apiserver-operator-8b68b9d9b-jshg7\" (UID: \"5982111d-f4c6-4335-9b40-3142758fc2bc\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-jshg7"
Mar 18 08:48:46.522640 master-0 kubenswrapper[3986]: I0318 08:48:46.522613 3986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b0280499-8277-46f0-bd8c-058a47a99e19-serving-cert\") pod \"service-ca-operator-b865698dc-g2lc8\" (UID: \"b0280499-8277-46f0-bd8c-058a47a99e19\") " pod="openshift-service-ca-operator/service-ca-operator-b865698dc-g2lc8"
Mar 18 08:48:46.523298 master-0 kubenswrapper[3986]: I0318 08:48:46.523267 3986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c110b293-2c6b-496b-b015-23aada98cb4b-serving-cert\") pod \"authentication-operator-5885bfd7f4-5g8tz\" (UID: \"c110b293-2c6b-496b-b015-23aada98cb4b\") " pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-5g8tz"
Mar 18 08:48:46.525210 master-0 kubenswrapper[3986]: I0318 08:48:46.525194 3986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-f4jvq"
Mar 18 08:48:46.534580 master-0 kubenswrapper[3986]: I0318 08:48:46.533906 3986 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-r758j" Mar 18 08:48:46.543330 master-0 kubenswrapper[3986]: I0318 08:48:46.542889 3986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-7kfrh" Mar 18 08:48:46.545655 master-0 kubenswrapper[3986]: I0318 08:48:46.544786 3986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4hn9w\" (UniqueName: \"kubernetes.io/projected/3d9fe248-ba87-47e3-911a-1b2b112b5683-kube-api-access-4hn9w\") pod \"olm-operator-5c9796789-sl5kr\" (UID: \"3d9fe248-ba87-47e3-911a-1b2b112b5683\") " pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-sl5kr" Mar 18 08:48:46.554930 master-0 kubenswrapper[3986]: I0318 08:48:46.554884 3986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-w4t7x" Mar 18 08:48:46.595919 master-0 kubenswrapper[3986]: I0318 08:48:46.589035 3986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5982111d-f4c6-4335-9b40-3142758fc2bc-kube-api-access\") pod \"kube-apiserver-operator-8b68b9d9b-jshg7\" (UID: \"5982111d-f4c6-4335-9b40-3142758fc2bc\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-jshg7" Mar 18 08:48:46.610289 master-0 kubenswrapper[3986]: I0318 08:48:46.610082 3986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2msp8\" (UniqueName: \"kubernetes.io/projected/34f2ec5e-cb68-415b-a9f2-5b7f10fa9bbe-kube-api-access-2msp8\") pod \"marketplace-operator-89ccd998f-bcwsv\" (UID: \"34f2ec5e-cb68-415b-a9f2-5b7f10fa9bbe\") " pod="openshift-marketplace/marketplace-operator-89ccd998f-bcwsv" Mar 18 08:48:46.617052 master-0 kubenswrapper[3986]: I0318 08:48:46.611045 3986 util.go:30] "No sandbox for pod 
can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-jshg7" Mar 18 08:48:46.618942 master-0 kubenswrapper[3986]: I0318 08:48:46.618700 3986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/866c259c-7661-4a80-873b-6fd625218665-host-slash\") pod \"iptables-alerter-9mkgd\" (UID: \"866c259c-7661-4a80-873b-6fd625218665\") " pod="openshift-network-operator/iptables-alerter-9mkgd" Mar 18 08:48:46.618942 master-0 kubenswrapper[3986]: I0318 08:48:46.618809 3986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/866c259c-7661-4a80-873b-6fd625218665-iptables-alerter-script\") pod \"iptables-alerter-9mkgd\" (UID: \"866c259c-7661-4a80-873b-6fd625218665\") " pod="openshift-network-operator/iptables-alerter-9mkgd" Mar 18 08:48:46.618942 master-0 kubenswrapper[3986]: I0318 08:48:46.618915 3986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ftdvp\" (UniqueName: \"kubernetes.io/projected/866c259c-7661-4a80-873b-6fd625218665-kube-api-access-ftdvp\") pod \"iptables-alerter-9mkgd\" (UID: \"866c259c-7661-4a80-873b-6fd625218665\") " pod="openshift-network-operator/iptables-alerter-9mkgd" Mar 18 08:48:46.620402 master-0 kubenswrapper[3986]: I0318 08:48:46.620228 3986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/866c259c-7661-4a80-873b-6fd625218665-host-slash\") pod \"iptables-alerter-9mkgd\" (UID: \"866c259c-7661-4a80-873b-6fd625218665\") " pod="openshift-network-operator/iptables-alerter-9mkgd" Mar 18 08:48:46.626691 master-0 kubenswrapper[3986]: I0318 08:48:46.623292 3986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: 
\"kubernetes.io/configmap/866c259c-7661-4a80-873b-6fd625218665-iptables-alerter-script\") pod \"iptables-alerter-9mkgd\" (UID: \"866c259c-7661-4a80-873b-6fd625218665\") " pod="openshift-network-operator/iptables-alerter-9mkgd" Mar 18 08:48:46.640326 master-0 kubenswrapper[3986]: I0318 08:48:46.632649 3986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dwrdc\" (UniqueName: \"kubernetes.io/projected/e7b72267-fc08-41ed-a92b-9fca7372aba6-kube-api-access-dwrdc\") pod \"cluster-monitoring-operator-58845fbb57-nc7hf\" (UID: \"e7b72267-fc08-41ed-a92b-9fca7372aba6\") " pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-nc7hf" Mar 18 08:48:46.640326 master-0 kubenswrapper[3986]: I0318 08:48:46.633336 3986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-9p4bb" Mar 18 08:48:46.644014 master-0 kubenswrapper[3986]: I0318 08:48:46.643864 3986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lw27k\" (UniqueName: \"kubernetes.io/projected/c110b293-2c6b-496b-b015-23aada98cb4b-kube-api-access-lw27k\") pod \"authentication-operator-5885bfd7f4-5g8tz\" (UID: \"c110b293-2c6b-496b-b015-23aada98cb4b\") " pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-5g8tz" Mar 18 08:48:46.656308 master-0 kubenswrapper[3986]: I0318 08:48:46.656266 3986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-s7szg" Mar 18 08:48:46.668696 master-0 kubenswrapper[3986]: I0318 08:48:46.668597 3986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dxvk7\" (UniqueName: \"kubernetes.io/projected/b0280499-8277-46f0-bd8c-058a47a99e19-kube-api-access-dxvk7\") pod \"service-ca-operator-b865698dc-g2lc8\" (UID: \"b0280499-8277-46f0-bd8c-058a47a99e19\") " pod="openshift-service-ca-operator/service-ca-operator-b865698dc-g2lc8" Mar 18 08:48:46.718234 master-0 kubenswrapper[3986]: I0318 08:48:46.692211 3986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-zqxv2"] Mar 18 08:48:46.718234 master-0 kubenswrapper[3986]: I0318 08:48:46.696085 3986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9hxtz\" (UniqueName: \"kubernetes.io/projected/159a26f5-3cfc-4db2-88e9-bff5d8a613fc-kube-api-access-9hxtz\") pod \"multus-admission-controller-5dbbb8b86f-2cf64\" (UID: \"159a26f5-3cfc-4db2-88e9-bff5d8a613fc\") " pod="openshift-multus/multus-admission-controller-5dbbb8b86f-2cf64" Mar 18 08:48:46.718234 master-0 kubenswrapper[3986]: I0318 08:48:46.697945 3986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-5f5d689c6b-j8kgj" Mar 18 08:48:46.718234 master-0 kubenswrapper[3986]: I0318 08:48:46.707798 3986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-fxn82" Mar 18 08:48:46.738025 master-0 kubenswrapper[3986]: I0318 08:48:46.737534 3986 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ftdvp\" (UniqueName: \"kubernetes.io/projected/866c259c-7661-4a80-873b-6fd625218665-kube-api-access-ftdvp\") pod \"iptables-alerter-9mkgd\" (UID: \"866c259c-7661-4a80-873b-6fd625218665\") " pod="openshift-network-operator/iptables-alerter-9mkgd" Mar 18 08:48:46.788235 master-0 kubenswrapper[3986]: I0318 08:48:46.784290 3986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-r758j"] Mar 18 08:48:46.789080 master-0 kubenswrapper[3986]: I0318 08:48:46.788803 3986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-8544cbcf9c-f4jvq"] Mar 18 08:48:46.797246 master-0 kubenswrapper[3986]: W0318 08:48:46.795454 3986 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod772bc250_2e57_4ce0_883c_d44281fcb0be.slice/crio-301f04aeb1003f5e8d27049d79ee0b80e5fce89b95da440a253b676b3418f0d1 WatchSource:0}: Error finding container 301f04aeb1003f5e8d27049d79ee0b80e5fce89b95da440a253b676b3418f0d1: Status 404 returned error can't find the container with id 301f04aeb1003f5e8d27049d79ee0b80e5fce89b95da440a253b676b3418f0d1 Mar 18 08:48:46.801835 master-0 kubenswrapper[3986]: W0318 08:48:46.801760 3986 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod939efa41_8f40_4f91_bee4_0425aead9760.slice/crio-dea41e38002f15edc5a2abae54e8fefc1a70d4002c8cd87d39c7bc11a4255185 WatchSource:0}: Error finding container dea41e38002f15edc5a2abae54e8fefc1a70d4002c8cd87d39c7bc11a4255185: Status 404 returned error can't find the container with id 
dea41e38002f15edc5a2abae54e8fefc1a70d4002c8cd87d39c7bc11a4255185 Mar 18 08:48:46.843623 master-0 kubenswrapper[3986]: I0318 08:48:46.843581 3986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-w4t7x"] Mar 18 08:48:46.843943 master-0 kubenswrapper[3986]: I0318 08:48:46.843898 3986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-95bf4f4d-7kfrh"] Mar 18 08:48:46.869360 master-0 kubenswrapper[3986]: I0318 08:48:46.867779 3986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-b865698dc-g2lc8" Mar 18 08:48:46.894166 master-0 kubenswrapper[3986]: I0318 08:48:46.894129 3986 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-5g8tz" Mar 18 08:48:46.933334 master-0 kubenswrapper[3986]: I0318 08:48:46.929274 3986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/59d50dd5-6793-4f96-a769-31e086ecc7e4-package-server-manager-serving-cert\") pod \"package-server-manager-7b95f86987-q8ff6\" (UID: \"59d50dd5-6793-4f96-a769-31e086ecc7e4\") " pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-q8ff6" Mar 18 08:48:46.933334 master-0 kubenswrapper[3986]: I0318 08:48:46.929334 3986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/bd8ee9ae-4765-46bc-84d7-b5857fc3fb4a-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-598fbc5f8f-tj9b9\" (UID: \"bd8ee9ae-4765-46bc-84d7-b5857fc3fb4a\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-tj9b9" Mar 18 08:48:46.933334 master-0 kubenswrapper[3986]: I0318 08:48:46.929371 3986 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/94e9e4fc-bfaa-4ba5-ad6d-fe76b91932e9-metrics-tls\") pod \"ingress-operator-66b84d69b-7h94d\" (UID: \"94e9e4fc-bfaa-4ba5-ad6d-fe76b91932e9\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-7h94d" Mar 18 08:48:46.933334 master-0 kubenswrapper[3986]: I0318 08:48:46.929412 3986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/e025d334-20e7-491f-8027-194251398747-metrics-tls\") pod \"dns-operator-9c5679d8f-b9pn7\" (UID: \"e025d334-20e7-491f-8027-194251398747\") " pod="openshift-dns-operator/dns-operator-9c5679d8f-b9pn7" Mar 18 08:48:46.933334 master-0 kubenswrapper[3986]: I0318 08:48:46.929438 3986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/7962fb40-1170-4c00-b1bf-92966aeae807-image-registry-operator-tls\") pod \"cluster-image-registry-operator-5549dc66cb-vxsth\" (UID: \"7962fb40-1170-4c00-b1bf-92966aeae807\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-vxsth" Mar 18 08:48:46.933334 master-0 kubenswrapper[3986]: I0318 08:48:46.929464 3986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b065df33-7911-456e-b3a2-1f8c8d53e053-srv-cert\") pod \"catalog-operator-68f85b4d6c-swdsh\" (UID: \"b065df33-7911-456e-b3a2-1f8c8d53e053\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-swdsh" Mar 18 08:48:46.933334 master-0 kubenswrapper[3986]: I0318 08:48:46.929487 3986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/bd8ee9ae-4765-46bc-84d7-b5857fc3fb4a-apiservice-cert\") pod \"cluster-node-tuning-operator-598fbc5f8f-tj9b9\" (UID: 
\"bd8ee9ae-4765-46bc-84d7-b5857fc3fb4a\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-tj9b9" Mar 18 08:48:46.933334 master-0 kubenswrapper[3986]: E0318 08:48:46.929632 3986 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found Mar 18 08:48:46.933334 master-0 kubenswrapper[3986]: E0318 08:48:46.929645 3986 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found Mar 18 08:48:46.933334 master-0 kubenswrapper[3986]: E0318 08:48:46.929692 3986 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/94e9e4fc-bfaa-4ba5-ad6d-fe76b91932e9-metrics-tls podName:94e9e4fc-bfaa-4ba5-ad6d-fe76b91932e9 nodeName:}" failed. No retries permitted until 2026-03-18 08:48:47.92967157 +0000 UTC m=+159.336841652 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/94e9e4fc-bfaa-4ba5-ad6d-fe76b91932e9-metrics-tls") pod "ingress-operator-66b84d69b-7h94d" (UID: "94e9e4fc-bfaa-4ba5-ad6d-fe76b91932e9") : secret "metrics-tls" not found Mar 18 08:48:46.933334 master-0 kubenswrapper[3986]: E0318 08:48:46.929706 3986 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7962fb40-1170-4c00-b1bf-92966aeae807-image-registry-operator-tls podName:7962fb40-1170-4c00-b1bf-92966aeae807 nodeName:}" failed. No retries permitted until 2026-03-18 08:48:47.92969988 +0000 UTC m=+159.336869962 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/7962fb40-1170-4c00-b1bf-92966aeae807-image-registry-operator-tls") pod "cluster-image-registry-operator-5549dc66cb-vxsth" (UID: "7962fb40-1170-4c00-b1bf-92966aeae807") : secret "image-registry-operator-tls" not found Mar 18 08:48:46.933334 master-0 kubenswrapper[3986]: E0318 08:48:46.929730 3986 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: secret "catalog-operator-serving-cert" not found Mar 18 08:48:46.933334 master-0 kubenswrapper[3986]: E0318 08:48:46.929744 3986 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found Mar 18 08:48:46.933334 master-0 kubenswrapper[3986]: E0318 08:48:46.929767 3986 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b065df33-7911-456e-b3a2-1f8c8d53e053-srv-cert podName:b065df33-7911-456e-b3a2-1f8c8d53e053 nodeName:}" failed. No retries permitted until 2026-03-18 08:48:47.929751022 +0000 UTC m=+159.336921184 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/b065df33-7911-456e-b3a2-1f8c8d53e053-srv-cert") pod "catalog-operator-68f85b4d6c-swdsh" (UID: "b065df33-7911-456e-b3a2-1f8c8d53e053") : secret "catalog-operator-serving-cert" not found Mar 18 08:48:46.933334 master-0 kubenswrapper[3986]: E0318 08:48:46.929786 3986 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found Mar 18 08:48:46.934902 master-0 kubenswrapper[3986]: E0318 08:48:46.929787 3986 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/59d50dd5-6793-4f96-a769-31e086ecc7e4-package-server-manager-serving-cert podName:59d50dd5-6793-4f96-a769-31e086ecc7e4 nodeName:}" failed. 
No retries permitted until 2026-03-18 08:48:47.929778103 +0000 UTC m=+159.336948315 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/59d50dd5-6793-4f96-a769-31e086ecc7e4-package-server-manager-serving-cert") pod "package-server-manager-7b95f86987-q8ff6" (UID: "59d50dd5-6793-4f96-a769-31e086ecc7e4") : secret "package-server-manager-serving-cert" not found Mar 18 08:48:46.934902 master-0 kubenswrapper[3986]: E0318 08:48:46.929805 3986 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bd8ee9ae-4765-46bc-84d7-b5857fc3fb4a-node-tuning-operator-tls podName:bd8ee9ae-4765-46bc-84d7-b5857fc3fb4a nodeName:}" failed. No retries permitted until 2026-03-18 08:48:47.929799743 +0000 UTC m=+159.336969825 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/bd8ee9ae-4765-46bc-84d7-b5857fc3fb4a-node-tuning-operator-tls") pod "cluster-node-tuning-operator-598fbc5f8f-tj9b9" (UID: "bd8ee9ae-4765-46bc-84d7-b5857fc3fb4a") : secret "node-tuning-operator-tls" not found Mar 18 08:48:46.934902 master-0 kubenswrapper[3986]: E0318 08:48:46.930274 3986 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found Mar 18 08:48:46.934902 master-0 kubenswrapper[3986]: E0318 08:48:46.930311 3986 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bd8ee9ae-4765-46bc-84d7-b5857fc3fb4a-apiservice-cert podName:bd8ee9ae-4765-46bc-84d7-b5857fc3fb4a nodeName:}" failed. No retries permitted until 2026-03-18 08:48:47.930297508 +0000 UTC m=+159.337467680 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/bd8ee9ae-4765-46bc-84d7-b5857fc3fb4a-apiservice-cert") pod "cluster-node-tuning-operator-598fbc5f8f-tj9b9" (UID: "bd8ee9ae-4765-46bc-84d7-b5857fc3fb4a") : secret "performance-addon-operator-webhook-cert" not found Mar 18 08:48:46.934902 master-0 kubenswrapper[3986]: E0318 08:48:46.930359 3986 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found Mar 18 08:48:46.934902 master-0 kubenswrapper[3986]: E0318 08:48:46.930385 3986 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e025d334-20e7-491f-8027-194251398747-metrics-tls podName:e025d334-20e7-491f-8027-194251398747 nodeName:}" failed. No retries permitted until 2026-03-18 08:48:47.930376861 +0000 UTC m=+159.337547023 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/e025d334-20e7-491f-8027-194251398747-metrics-tls") pod "dns-operator-9c5679d8f-b9pn7" (UID: "e025d334-20e7-491f-8027-194251398747") : secret "metrics-tls" not found Mar 18 08:48:46.945216 master-0 kubenswrapper[3986]: I0318 08:48:46.945185 3986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-jshg7"] Mar 18 08:48:46.945216 master-0 kubenswrapper[3986]: I0318 08:48:46.945226 3986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-9p4bb"] Mar 18 08:48:46.953265 master-0 kubenswrapper[3986]: W0318 08:48:46.953231 3986 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8a6ab2be_d018_4fd5_bfbb_6b88aec28663.slice/crio-fef2da050284c5b28c67d998136cd7aca2118deb05e66bc5e9cea3da325d47dc WatchSource:0}: Error finding container fef2da050284c5b28c67d998136cd7aca2118deb05e66bc5e9cea3da325d47dc: Status 
404 returned error can't find the container with id fef2da050284c5b28c67d998136cd7aca2118deb05e66bc5e9cea3da325d47dc Mar 18 08:48:46.980723 master-0 kubenswrapper[3986]: I0318 08:48:46.980687 3986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-s7szg"] Mar 18 08:48:46.996351 master-0 kubenswrapper[3986]: I0318 08:48:46.996291 3986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-fxn82"] Mar 18 08:48:46.996862 master-0 kubenswrapper[3986]: I0318 08:48:46.996655 3986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-storage-operator/csi-snapshot-controller-operator-5f5d689c6b-j8kgj"] Mar 18 08:48:47.004940 master-0 kubenswrapper[3986]: W0318 08:48:47.004897 3986 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6fb1f871_9c24_48a1_a15a_a636b5bb687d.slice/crio-7b07e88ac1eb70e2f8e0c7ac6bf4cc612d670ddad2d854d52139054ca73dfb7c WatchSource:0}: Error finding container 7b07e88ac1eb70e2f8e0c7ac6bf4cc612d670ddad2d854d52139054ca73dfb7c: Status 404 returned error can't find the container with id 7b07e88ac1eb70e2f8e0c7ac6bf4cc612d670ddad2d854d52139054ca73dfb7c Mar 18 08:48:47.027478 master-0 kubenswrapper[3986]: I0318 08:48:47.027328 3986 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-9mkgd" Mar 18 08:48:47.030846 master-0 kubenswrapper[3986]: I0318 08:48:47.030780 3986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/e7b72267-fc08-41ed-a92b-9fca7372aba6-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-58845fbb57-nc7hf\" (UID: \"e7b72267-fc08-41ed-a92b-9fca7372aba6\") " pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-nc7hf" Mar 18 08:48:47.030846 master-0 kubenswrapper[3986]: I0318 08:48:47.030830 3986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/159a26f5-3cfc-4db2-88e9-bff5d8a613fc-webhook-certs\") pod \"multus-admission-controller-5dbbb8b86f-2cf64\" (UID: \"159a26f5-3cfc-4db2-88e9-bff5d8a613fc\") " pod="openshift-multus/multus-admission-controller-5dbbb8b86f-2cf64" Mar 18 08:48:47.031232 master-0 kubenswrapper[3986]: I0318 08:48:47.031185 3986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/3d9fe248-ba87-47e3-911a-1b2b112b5683-srv-cert\") pod \"olm-operator-5c9796789-sl5kr\" (UID: \"3d9fe248-ba87-47e3-911a-1b2b112b5683\") " pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-sl5kr" Mar 18 08:48:47.031282 master-0 kubenswrapper[3986]: I0318 08:48:47.031267 3986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/34f2ec5e-cb68-415b-a9f2-5b7f10fa9bbe-marketplace-operator-metrics\") pod \"marketplace-operator-89ccd998f-bcwsv\" (UID: \"34f2ec5e-cb68-415b-a9f2-5b7f10fa9bbe\") " pod="openshift-marketplace/marketplace-operator-89ccd998f-bcwsv" Mar 18 08:48:47.031918 master-0 kubenswrapper[3986]: E0318 08:48:47.031767 3986 secret.go:189] Couldn't get secret 
openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found Mar 18 08:48:47.031918 master-0 kubenswrapper[3986]: E0318 08:48:47.031825 3986 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/34f2ec5e-cb68-415b-a9f2-5b7f10fa9bbe-marketplace-operator-metrics podName:34f2ec5e-cb68-415b-a9f2-5b7f10fa9bbe nodeName:}" failed. No retries permitted until 2026-03-18 08:48:48.031802945 +0000 UTC m=+159.438973027 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/34f2ec5e-cb68-415b-a9f2-5b7f10fa9bbe-marketplace-operator-metrics") pod "marketplace-operator-89ccd998f-bcwsv" (UID: "34f2ec5e-cb68-415b-a9f2-5b7f10fa9bbe") : secret "marketplace-operator-metrics" not found Mar 18 08:48:47.031918 master-0 kubenswrapper[3986]: E0318 08:48:47.031890 3986 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found Mar 18 08:48:47.031918 master-0 kubenswrapper[3986]: E0318 08:48:47.031914 3986 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/159a26f5-3cfc-4db2-88e9-bff5d8a613fc-webhook-certs podName:159a26f5-3cfc-4db2-88e9-bff5d8a613fc nodeName:}" failed. No retries permitted until 2026-03-18 08:48:48.031906308 +0000 UTC m=+159.439076390 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/159a26f5-3cfc-4db2-88e9-bff5d8a613fc-webhook-certs") pod "multus-admission-controller-5dbbb8b86f-2cf64" (UID: "159a26f5-3cfc-4db2-88e9-bff5d8a613fc") : secret "multus-admission-controller-secret" not found Mar 18 08:48:47.032041 master-0 kubenswrapper[3986]: E0318 08:48:47.031956 3986 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found Mar 18 08:48:47.032041 master-0 kubenswrapper[3986]: E0318 08:48:47.031976 3986 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3d9fe248-ba87-47e3-911a-1b2b112b5683-srv-cert podName:3d9fe248-ba87-47e3-911a-1b2b112b5683 nodeName:}" failed. No retries permitted until 2026-03-18 08:48:48.03197001 +0000 UTC m=+159.439140092 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/3d9fe248-ba87-47e3-911a-1b2b112b5683-srv-cert") pod "olm-operator-5c9796789-sl5kr" (UID: "3d9fe248-ba87-47e3-911a-1b2b112b5683") : secret "olm-operator-serving-cert" not found Mar 18 08:48:47.033925 master-0 kubenswrapper[3986]: E0318 08:48:47.032238 3986 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found Mar 18 08:48:47.033925 master-0 kubenswrapper[3986]: E0318 08:48:47.032276 3986 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e7b72267-fc08-41ed-a92b-9fca7372aba6-cluster-monitoring-operator-tls podName:e7b72267-fc08-41ed-a92b-9fca7372aba6 nodeName:}" failed. No retries permitted until 2026-03-18 08:48:48.032263979 +0000 UTC m=+159.439434061 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/e7b72267-fc08-41ed-a92b-9fca7372aba6-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-58845fbb57-nc7hf" (UID: "e7b72267-fc08-41ed-a92b-9fca7372aba6") : secret "cluster-monitoring-operator-tls" not found Mar 18 08:48:47.051734 master-0 kubenswrapper[3986]: W0318 08:48:47.051701 3986 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod866c259c_7661_4a80_873b_6fd625218665.slice/crio-263fd4cd6308173314717fc603c0f2464a1db66cd143ea0b303b9d029c2bd481 WatchSource:0}: Error finding container 263fd4cd6308173314717fc603c0f2464a1db66cd143ea0b303b9d029c2bd481: Status 404 returned error can't find the container with id 263fd4cd6308173314717fc603c0f2464a1db66cd143ea0b303b9d029c2bd481 Mar 18 08:48:47.081984 master-0 kubenswrapper[3986]: I0318 08:48:47.081942 3986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-5885bfd7f4-5g8tz"] Mar 18 08:48:47.088426 master-0 kubenswrapper[3986]: W0318 08:48:47.088369 3986 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc110b293_2c6b_496b_b015_23aada98cb4b.slice/crio-17e72118bc9a21caf0710ea436fca2a94e237b39c26fb49832cf7ed5fa2efe7d WatchSource:0}: Error finding container 17e72118bc9a21caf0710ea436fca2a94e237b39c26fb49832cf7ed5fa2efe7d: Status 404 returned error can't find the container with id 17e72118bc9a21caf0710ea436fca2a94e237b39c26fb49832cf7ed5fa2efe7d Mar 18 08:48:47.091277 master-0 kubenswrapper[3986]: I0318 08:48:47.091244 3986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-f4jvq" event={"ID":"939efa41-8f40-4f91-bee4-0425aead9760","Type":"ContainerStarted","Data":"dea41e38002f15edc5a2abae54e8fefc1a70d4002c8cd87d39c7bc11a4255185"} Mar 18 
08:48:47.092071 master-0 kubenswrapper[3986]: I0318 08:48:47.092048 3986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-r758j" event={"ID":"772bc250-2e57-4ce0-883c-d44281fcb0be","Type":"ContainerStarted","Data":"301f04aeb1003f5e8d27049d79ee0b80e5fce89b95da440a253b676b3418f0d1"} Mar 18 08:48:47.094430 master-0 kubenswrapper[3986]: I0318 08:48:47.094404 3986 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-b865698dc-g2lc8"] Mar 18 08:48:47.094531 master-0 kubenswrapper[3986]: I0318 08:48:47.094510 3986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-fxn82" event={"ID":"260c8aa5-a288-4ee8-b671-f97e90a2f39c","Type":"ContainerStarted","Data":"23865ef5bfea471643359580ecae55517bf670fdb3b8b05c871c139fe34b55d5"} Mar 18 08:48:47.095644 master-0 kubenswrapper[3986]: I0318 08:48:47.095621 3986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-jshg7" event={"ID":"5982111d-f4c6-4335-9b40-3142758fc2bc","Type":"ContainerStarted","Data":"2e229ef6f57fea8e5406ee6259b2efa0f8a16c288c8a29c71c1e32c057bf84d0"} Mar 18 08:48:47.096262 master-0 kubenswrapper[3986]: I0318 08:48:47.096238 3986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-w4t7x" event={"ID":"fcf89a76-7a94-46d3-853e-68e986563764","Type":"ContainerStarted","Data":"837527d2f9f7319ea14fc20367ef17853e00cc20e938fc1184f891aa57296deb"} Mar 18 08:48:47.096767 master-0 kubenswrapper[3986]: I0318 08:48:47.096737 3986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-7kfrh" 
event={"ID":"573d3a02-e395-4816-963a-cd614ef53f75","Type":"ContainerStarted","Data":"b42865dcd2dae3a2390972bbf267cd467643023a4c8d222016e0b44a61943afc"} Mar 18 08:48:47.097442 master-0 kubenswrapper[3986]: I0318 08:48:47.097337 3986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-9p4bb" event={"ID":"8a6ab2be-d018-4fd5-bfbb-6b88aec28663","Type":"ContainerStarted","Data":"fef2da050284c5b28c67d998136cd7aca2118deb05e66bc5e9cea3da325d47dc"} Mar 18 08:48:47.097880 master-0 kubenswrapper[3986]: I0318 08:48:47.097864 3986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-9mkgd" event={"ID":"866c259c-7661-4a80-873b-6fd625218665","Type":"ContainerStarted","Data":"263fd4cd6308173314717fc603c0f2464a1db66cd143ea0b303b9d029c2bd481"} Mar 18 08:48:47.098395 master-0 kubenswrapper[3986]: I0318 08:48:47.098373 3986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-s7szg" event={"ID":"ec11012b-536a-422f-afc4-d2d0fd4b67fb","Type":"ContainerStarted","Data":"f1fbd15a6f55efb9df34e794516a926fbd6cd9758a5312e86f1eb743de9e13b5"} Mar 18 08:48:47.099019 master-0 kubenswrapper[3986]: I0318 08:48:47.098996 3986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-5f5d689c6b-j8kgj" event={"ID":"6fb1f871-9c24-48a1-a15a-a636b5bb687d","Type":"ContainerStarted","Data":"7b07e88ac1eb70e2f8e0c7ac6bf4cc612d670ddad2d854d52139054ca73dfb7c"} Mar 18 08:48:47.099549 master-0 kubenswrapper[3986]: I0318 08:48:47.099528 3986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-zqxv2" event={"ID":"e2ade7e6-cecd-4e98-8f85-ea8219303d75","Type":"ContainerStarted","Data":"15b9cae2d28df4fa59242b209b16efd412d30453ba1d9f0bfc42c07c896efdb2"} Mar 18 
08:48:47.100280 master-0 kubenswrapper[3986]: W0318 08:48:47.100253 3986 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb0280499_8277_46f0_bd8c_058a47a99e19.slice/crio-a058ca3e613163c208806f2f85e86778b10da29eadc77daac9aef1471afdc643 WatchSource:0}: Error finding container a058ca3e613163c208806f2f85e86778b10da29eadc77daac9aef1471afdc643: Status 404 returned error can't find the container with id a058ca3e613163c208806f2f85e86778b10da29eadc77daac9aef1471afdc643 Mar 18 08:48:47.102086 master-0 kubenswrapper[3986]: E0318 08:48:47.102046 3986 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:service-ca-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:812819a9d712b9e345ef5f1404b242c281e2518ad724baebc393ec0fd3b3d263,Command:[service-ca-operator operator],Args:[--config=/var/run/configmaps/config/operator-config.yaml -v=2],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:812819a9d712b9e345ef5f1404b242c281e2518ad724baebc393ec0fd3b3d263,ValueFrom:nil,},EnvVar{Name:OPERATOR_IMAGE_VERSION,Value:4.18.35,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{83886080 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:false,MountPath:/var/run/configmaps/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:serving-cert,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dxvk7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod service-ca-operator-b865698dc-g2lc8_openshift-service-ca-operator(b0280499-8277-46f0-bd8c-058a47a99e19): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Mar 18 08:48:47.103392 master-0 kubenswrapper[3986]: E0318 08:48:47.103346 3986 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-operator\" with ErrImagePull: \"pull QPS exceeded\"" pod="openshift-service-ca-operator/service-ca-operator-b865698dc-g2lc8" podUID="b0280499-8277-46f0-bd8c-058a47a99e19" Mar 18 08:48:47.940588 master-0 kubenswrapper[3986]: I0318 08:48:47.940528 3986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/59d50dd5-6793-4f96-a769-31e086ecc7e4-package-server-manager-serving-cert\") pod \"package-server-manager-7b95f86987-q8ff6\" (UID: \"59d50dd5-6793-4f96-a769-31e086ecc7e4\") " pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-q8ff6" Mar 18 08:48:47.940588 master-0 kubenswrapper[3986]: I0318 08:48:47.940591 3986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/bd8ee9ae-4765-46bc-84d7-b5857fc3fb4a-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-598fbc5f8f-tj9b9\" (UID: \"bd8ee9ae-4765-46bc-84d7-b5857fc3fb4a\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-tj9b9" Mar 18 08:48:47.941619 master-0 kubenswrapper[3986]: I0318 08:48:47.940632 3986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/94e9e4fc-bfaa-4ba5-ad6d-fe76b91932e9-metrics-tls\") pod \"ingress-operator-66b84d69b-7h94d\" (UID: \"94e9e4fc-bfaa-4ba5-ad6d-fe76b91932e9\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-7h94d" Mar 18 08:48:47.941619 master-0 kubenswrapper[3986]: I0318 08:48:47.940672 3986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/e025d334-20e7-491f-8027-194251398747-metrics-tls\") pod \"dns-operator-9c5679d8f-b9pn7\" (UID: \"e025d334-20e7-491f-8027-194251398747\") " pod="openshift-dns-operator/dns-operator-9c5679d8f-b9pn7" Mar 18 08:48:47.941619 master-0 kubenswrapper[3986]: I0318 08:48:47.940696 3986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/7962fb40-1170-4c00-b1bf-92966aeae807-image-registry-operator-tls\") pod \"cluster-image-registry-operator-5549dc66cb-vxsth\" (UID: \"7962fb40-1170-4c00-b1bf-92966aeae807\") " 
pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-vxsth" Mar 18 08:48:47.941619 master-0 kubenswrapper[3986]: I0318 08:48:47.940719 3986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b065df33-7911-456e-b3a2-1f8c8d53e053-srv-cert\") pod \"catalog-operator-68f85b4d6c-swdsh\" (UID: \"b065df33-7911-456e-b3a2-1f8c8d53e053\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-swdsh" Mar 18 08:48:47.941619 master-0 kubenswrapper[3986]: I0318 08:48:47.940741 3986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/bd8ee9ae-4765-46bc-84d7-b5857fc3fb4a-apiservice-cert\") pod \"cluster-node-tuning-operator-598fbc5f8f-tj9b9\" (UID: \"bd8ee9ae-4765-46bc-84d7-b5857fc3fb4a\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-tj9b9" Mar 18 08:48:47.941619 master-0 kubenswrapper[3986]: E0318 08:48:47.940975 3986 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found Mar 18 08:48:47.941619 master-0 kubenswrapper[3986]: E0318 08:48:47.941006 3986 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found Mar 18 08:48:47.941619 master-0 kubenswrapper[3986]: E0318 08:48:47.941039 3986 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/59d50dd5-6793-4f96-a769-31e086ecc7e4-package-server-manager-serving-cert podName:59d50dd5-6793-4f96-a769-31e086ecc7e4 nodeName:}" failed. No retries permitted until 2026-03-18 08:48:49.941020749 +0000 UTC m=+161.348190841 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/59d50dd5-6793-4f96-a769-31e086ecc7e4-package-server-manager-serving-cert") pod "package-server-manager-7b95f86987-q8ff6" (UID: "59d50dd5-6793-4f96-a769-31e086ecc7e4") : secret "package-server-manager-serving-cert" not found Mar 18 08:48:47.941619 master-0 kubenswrapper[3986]: E0318 08:48:47.941065 3986 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: secret "catalog-operator-serving-cert" not found Mar 18 08:48:47.941619 master-0 kubenswrapper[3986]: E0318 08:48:47.941086 3986 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bd8ee9ae-4765-46bc-84d7-b5857fc3fb4a-apiservice-cert podName:bd8ee9ae-4765-46bc-84d7-b5857fc3fb4a nodeName:}" failed. No retries permitted until 2026-03-18 08:48:49.94106662 +0000 UTC m=+161.348236702 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/bd8ee9ae-4765-46bc-84d7-b5857fc3fb4a-apiservice-cert") pod "cluster-node-tuning-operator-598fbc5f8f-tj9b9" (UID: "bd8ee9ae-4765-46bc-84d7-b5857fc3fb4a") : secret "performance-addon-operator-webhook-cert" not found Mar 18 08:48:47.941619 master-0 kubenswrapper[3986]: E0318 08:48:47.941104 3986 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b065df33-7911-456e-b3a2-1f8c8d53e053-srv-cert podName:b065df33-7911-456e-b3a2-1f8c8d53e053 nodeName:}" failed. No retries permitted until 2026-03-18 08:48:49.941093691 +0000 UTC m=+161.348263773 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/b065df33-7911-456e-b3a2-1f8c8d53e053-srv-cert") pod "catalog-operator-68f85b4d6c-swdsh" (UID: "b065df33-7911-456e-b3a2-1f8c8d53e053") : secret "catalog-operator-serving-cert" not found Mar 18 08:48:47.941619 master-0 kubenswrapper[3986]: E0318 08:48:47.941121 3986 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found Mar 18 08:48:47.941619 master-0 kubenswrapper[3986]: E0318 08:48:47.941202 3986 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bd8ee9ae-4765-46bc-84d7-b5857fc3fb4a-node-tuning-operator-tls podName:bd8ee9ae-4765-46bc-84d7-b5857fc3fb4a nodeName:}" failed. No retries permitted until 2026-03-18 08:48:49.941182334 +0000 UTC m=+161.348352416 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/bd8ee9ae-4765-46bc-84d7-b5857fc3fb4a-node-tuning-operator-tls") pod "cluster-node-tuning-operator-598fbc5f8f-tj9b9" (UID: "bd8ee9ae-4765-46bc-84d7-b5857fc3fb4a") : secret "node-tuning-operator-tls" not found Mar 18 08:48:47.941619 master-0 kubenswrapper[3986]: E0318 08:48:47.941246 3986 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found Mar 18 08:48:47.941619 master-0 kubenswrapper[3986]: E0318 08:48:47.941275 3986 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e025d334-20e7-491f-8027-194251398747-metrics-tls podName:e025d334-20e7-491f-8027-194251398747 nodeName:}" failed. No retries permitted until 2026-03-18 08:48:49.941267716 +0000 UTC m=+161.348437798 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/e025d334-20e7-491f-8027-194251398747-metrics-tls") pod "dns-operator-9c5679d8f-b9pn7" (UID: "e025d334-20e7-491f-8027-194251398747") : secret "metrics-tls" not found Mar 18 08:48:47.941619 master-0 kubenswrapper[3986]: E0318 08:48:47.941324 3986 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found Mar 18 08:48:47.942102 master-0 kubenswrapper[3986]: E0318 08:48:47.941543 3986 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/94e9e4fc-bfaa-4ba5-ad6d-fe76b91932e9-metrics-tls podName:94e9e4fc-bfaa-4ba5-ad6d-fe76b91932e9 nodeName:}" failed. No retries permitted until 2026-03-18 08:48:49.941537754 +0000 UTC m=+161.348707836 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/94e9e4fc-bfaa-4ba5-ad6d-fe76b91932e9-metrics-tls") pod "ingress-operator-66b84d69b-7h94d" (UID: "94e9e4fc-bfaa-4ba5-ad6d-fe76b91932e9") : secret "metrics-tls" not found Mar 18 08:48:47.942102 master-0 kubenswrapper[3986]: E0318 08:48:47.941569 3986 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found Mar 18 08:48:47.942102 master-0 kubenswrapper[3986]: E0318 08:48:47.941596 3986 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7962fb40-1170-4c00-b1bf-92966aeae807-image-registry-operator-tls podName:7962fb40-1170-4c00-b1bf-92966aeae807 nodeName:}" failed. No retries permitted until 2026-03-18 08:48:49.941589116 +0000 UTC m=+161.348759198 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/7962fb40-1170-4c00-b1bf-92966aeae807-image-registry-operator-tls") pod "cluster-image-registry-operator-5549dc66cb-vxsth" (UID: "7962fb40-1170-4c00-b1bf-92966aeae807") : secret "image-registry-operator-tls" not found Mar 18 08:48:48.041986 master-0 kubenswrapper[3986]: I0318 08:48:48.041947 3986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/e7b72267-fc08-41ed-a92b-9fca7372aba6-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-58845fbb57-nc7hf\" (UID: \"e7b72267-fc08-41ed-a92b-9fca7372aba6\") " pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-nc7hf" Mar 18 08:48:48.041986 master-0 kubenswrapper[3986]: I0318 08:48:48.041983 3986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/159a26f5-3cfc-4db2-88e9-bff5d8a613fc-webhook-certs\") pod \"multus-admission-controller-5dbbb8b86f-2cf64\" (UID: \"159a26f5-3cfc-4db2-88e9-bff5d8a613fc\") " pod="openshift-multus/multus-admission-controller-5dbbb8b86f-2cf64" Mar 18 08:48:48.042192 master-0 kubenswrapper[3986]: I0318 08:48:48.042004 3986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/3d9fe248-ba87-47e3-911a-1b2b112b5683-srv-cert\") pod \"olm-operator-5c9796789-sl5kr\" (UID: \"3d9fe248-ba87-47e3-911a-1b2b112b5683\") " pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-sl5kr" Mar 18 08:48:48.042192 master-0 kubenswrapper[3986]: I0318 08:48:48.042024 3986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/34f2ec5e-cb68-415b-a9f2-5b7f10fa9bbe-marketplace-operator-metrics\") pod \"marketplace-operator-89ccd998f-bcwsv\" (UID: 
\"34f2ec5e-cb68-415b-a9f2-5b7f10fa9bbe\") " pod="openshift-marketplace/marketplace-operator-89ccd998f-bcwsv" Mar 18 08:48:48.042308 master-0 kubenswrapper[3986]: E0318 08:48:48.042237 3986 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found Mar 18 08:48:48.044147 master-0 kubenswrapper[3986]: E0318 08:48:48.042370 3986 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e7b72267-fc08-41ed-a92b-9fca7372aba6-cluster-monitoring-operator-tls podName:e7b72267-fc08-41ed-a92b-9fca7372aba6 nodeName:}" failed. No retries permitted until 2026-03-18 08:48:50.04234098 +0000 UTC m=+161.449511062 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/e7b72267-fc08-41ed-a92b-9fca7372aba6-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-58845fbb57-nc7hf" (UID: "e7b72267-fc08-41ed-a92b-9fca7372aba6") : secret "cluster-monitoring-operator-tls" not found Mar 18 08:48:48.044147 master-0 kubenswrapper[3986]: E0318 08:48:48.043061 3986 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found Mar 18 08:48:48.044147 master-0 kubenswrapper[3986]: E0318 08:48:48.043088 3986 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/34f2ec5e-cb68-415b-a9f2-5b7f10fa9bbe-marketplace-operator-metrics podName:34f2ec5e-cb68-415b-a9f2-5b7f10fa9bbe nodeName:}" failed. No retries permitted until 2026-03-18 08:48:50.043080562 +0000 UTC m=+161.450250644 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/34f2ec5e-cb68-415b-a9f2-5b7f10fa9bbe-marketplace-operator-metrics") pod "marketplace-operator-89ccd998f-bcwsv" (UID: "34f2ec5e-cb68-415b-a9f2-5b7f10fa9bbe") : secret "marketplace-operator-metrics" not found Mar 18 08:48:48.044147 master-0 kubenswrapper[3986]: E0318 08:48:48.043138 3986 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found Mar 18 08:48:48.044147 master-0 kubenswrapper[3986]: E0318 08:48:48.043156 3986 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/159a26f5-3cfc-4db2-88e9-bff5d8a613fc-webhook-certs podName:159a26f5-3cfc-4db2-88e9-bff5d8a613fc nodeName:}" failed. No retries permitted until 2026-03-18 08:48:50.043150444 +0000 UTC m=+161.450320526 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/159a26f5-3cfc-4db2-88e9-bff5d8a613fc-webhook-certs") pod "multus-admission-controller-5dbbb8b86f-2cf64" (UID: "159a26f5-3cfc-4db2-88e9-bff5d8a613fc") : secret "multus-admission-controller-secret" not found Mar 18 08:48:48.044147 master-0 kubenswrapper[3986]: E0318 08:48:48.043196 3986 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found Mar 18 08:48:48.044147 master-0 kubenswrapper[3986]: E0318 08:48:48.043214 3986 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3d9fe248-ba87-47e3-911a-1b2b112b5683-srv-cert podName:3d9fe248-ba87-47e3-911a-1b2b112b5683 nodeName:}" failed. No retries permitted until 2026-03-18 08:48:50.043208436 +0000 UTC m=+161.450378528 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/3d9fe248-ba87-47e3-911a-1b2b112b5683-srv-cert") pod "olm-operator-5c9796789-sl5kr" (UID: "3d9fe248-ba87-47e3-911a-1b2b112b5683") : secret "olm-operator-serving-cert" not found Mar 18 08:48:48.105710 master-0 kubenswrapper[3986]: I0318 08:48:48.105654 3986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-b865698dc-g2lc8" event={"ID":"b0280499-8277-46f0-bd8c-058a47a99e19","Type":"ContainerStarted","Data":"a058ca3e613163c208806f2f85e86778b10da29eadc77daac9aef1471afdc643"} Mar 18 08:48:48.114910 master-0 kubenswrapper[3986]: E0318 08:48:48.111391 3986 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:812819a9d712b9e345ef5f1404b242c281e2518ad724baebc393ec0fd3b3d263\\\"\"" pod="openshift-service-ca-operator/service-ca-operator-b865698dc-g2lc8" podUID="b0280499-8277-46f0-bd8c-058a47a99e19" Mar 18 08:48:48.118932 master-0 kubenswrapper[3986]: I0318 08:48:48.118832 3986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-5g8tz" event={"ID":"c110b293-2c6b-496b-b015-23aada98cb4b","Type":"ContainerStarted","Data":"17e72118bc9a21caf0710ea436fca2a94e237b39c26fb49832cf7ed5fa2efe7d"} Mar 18 08:48:48.132927 master-0 kubenswrapper[3986]: I0318 08:48:48.124976 3986 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-jshg7" event={"ID":"5982111d-f4c6-4335-9b40-3142758fc2bc","Type":"ContainerStarted","Data":"9375c67121087e2f83dd2c8b94c0ff17721fa9588235ead108bb8a1e451225b5"} Mar 18 08:48:49.131017 master-0 kubenswrapper[3986]: E0318 08:48:49.130645 3986 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" 
for \"service-ca-operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:812819a9d712b9e345ef5f1404b242c281e2518ad724baebc393ec0fd3b3d263\\\"\"" pod="openshift-service-ca-operator/service-ca-operator-b865698dc-g2lc8" podUID="b0280499-8277-46f0-bd8c-058a47a99e19" Mar 18 08:48:49.144943 master-0 kubenswrapper[3986]: I0318 08:48:49.144880 3986 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-jshg7" podStartSLOduration=124.144846407 podStartE2EDuration="2m4.144846407s" podCreationTimestamp="2026-03-18 08:46:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 08:48:48.156765293 +0000 UTC m=+159.563935405" watchObservedRunningTime="2026-03-18 08:48:49.144846407 +0000 UTC m=+160.552016489" Mar 18 08:48:49.979112 master-0 kubenswrapper[3986]: I0318 08:48:49.979041 3986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/59d50dd5-6793-4f96-a769-31e086ecc7e4-package-server-manager-serving-cert\") pod \"package-server-manager-7b95f86987-q8ff6\" (UID: \"59d50dd5-6793-4f96-a769-31e086ecc7e4\") " pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-q8ff6" Mar 18 08:48:49.979112 master-0 kubenswrapper[3986]: I0318 08:48:49.979104 3986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/bd8ee9ae-4765-46bc-84d7-b5857fc3fb4a-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-598fbc5f8f-tj9b9\" (UID: \"bd8ee9ae-4765-46bc-84d7-b5857fc3fb4a\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-tj9b9" Mar 18 08:48:49.979362 master-0 kubenswrapper[3986]: I0318 08:48:49.979134 3986 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/94e9e4fc-bfaa-4ba5-ad6d-fe76b91932e9-metrics-tls\") pod \"ingress-operator-66b84d69b-7h94d\" (UID: \"94e9e4fc-bfaa-4ba5-ad6d-fe76b91932e9\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-7h94d" Mar 18 08:48:49.979362 master-0 kubenswrapper[3986]: I0318 08:48:49.979167 3986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/e025d334-20e7-491f-8027-194251398747-metrics-tls\") pod \"dns-operator-9c5679d8f-b9pn7\" (UID: \"e025d334-20e7-491f-8027-194251398747\") " pod="openshift-dns-operator/dns-operator-9c5679d8f-b9pn7" Mar 18 08:48:49.979362 master-0 kubenswrapper[3986]: I0318 08:48:49.979188 3986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/7962fb40-1170-4c00-b1bf-92966aeae807-image-registry-operator-tls\") pod \"cluster-image-registry-operator-5549dc66cb-vxsth\" (UID: \"7962fb40-1170-4c00-b1bf-92966aeae807\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-vxsth" Mar 18 08:48:49.979362 master-0 kubenswrapper[3986]: I0318 08:48:49.979206 3986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b065df33-7911-456e-b3a2-1f8c8d53e053-srv-cert\") pod \"catalog-operator-68f85b4d6c-swdsh\" (UID: \"b065df33-7911-456e-b3a2-1f8c8d53e053\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-swdsh" Mar 18 08:48:49.979362 master-0 kubenswrapper[3986]: E0318 08:48:49.979220 3986 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found Mar 18 08:48:49.979362 master-0 kubenswrapper[3986]: E0318 08:48:49.979271 3986 secret.go:189] Couldn't get 
secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found Mar 18 08:48:49.979362 master-0 kubenswrapper[3986]: E0318 08:48:49.979276 3986 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/59d50dd5-6793-4f96-a769-31e086ecc7e4-package-server-manager-serving-cert podName:59d50dd5-6793-4f96-a769-31e086ecc7e4 nodeName:}" failed. No retries permitted until 2026-03-18 08:48:53.97926027 +0000 UTC m=+165.386430352 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/59d50dd5-6793-4f96-a769-31e086ecc7e4-package-server-manager-serving-cert") pod "package-server-manager-7b95f86987-q8ff6" (UID: "59d50dd5-6793-4f96-a769-31e086ecc7e4") : secret "package-server-manager-serving-cert" not found Mar 18 08:48:49.979362 master-0 kubenswrapper[3986]: E0318 08:48:49.979295 3986 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bd8ee9ae-4765-46bc-84d7-b5857fc3fb4a-apiservice-cert podName:bd8ee9ae-4765-46bc-84d7-b5857fc3fb4a nodeName:}" failed. No retries permitted until 2026-03-18 08:48:53.979287561 +0000 UTC m=+165.386457643 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/bd8ee9ae-4765-46bc-84d7-b5857fc3fb4a-apiservice-cert") pod "cluster-node-tuning-operator-598fbc5f8f-tj9b9" (UID: "bd8ee9ae-4765-46bc-84d7-b5857fc3fb4a") : secret "performance-addon-operator-webhook-cert" not found Mar 18 08:48:49.979362 master-0 kubenswrapper[3986]: E0318 08:48:49.979326 3986 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found Mar 18 08:48:49.979362 master-0 kubenswrapper[3986]: E0318 08:48:49.979342 3986 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bd8ee9ae-4765-46bc-84d7-b5857fc3fb4a-node-tuning-operator-tls podName:bd8ee9ae-4765-46bc-84d7-b5857fc3fb4a nodeName:}" failed. No retries permitted until 2026-03-18 08:48:53.979337142 +0000 UTC m=+165.386507224 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/bd8ee9ae-4765-46bc-84d7-b5857fc3fb4a-node-tuning-operator-tls") pod "cluster-node-tuning-operator-598fbc5f8f-tj9b9" (UID: "bd8ee9ae-4765-46bc-84d7-b5857fc3fb4a") : secret "node-tuning-operator-tls" not found Mar 18 08:48:49.979362 master-0 kubenswrapper[3986]: E0318 08:48:49.979373 3986 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found Mar 18 08:48:49.979920 master-0 kubenswrapper[3986]: E0318 08:48:49.979389 3986 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/94e9e4fc-bfaa-4ba5-ad6d-fe76b91932e9-metrics-tls podName:94e9e4fc-bfaa-4ba5-ad6d-fe76b91932e9 nodeName:}" failed. No retries permitted until 2026-03-18 08:48:53.979383864 +0000 UTC m=+165.386553946 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/94e9e4fc-bfaa-4ba5-ad6d-fe76b91932e9-metrics-tls") pod "ingress-operator-66b84d69b-7h94d" (UID: "94e9e4fc-bfaa-4ba5-ad6d-fe76b91932e9") : secret "metrics-tls" not found Mar 18 08:48:49.979920 master-0 kubenswrapper[3986]: E0318 08:48:49.979420 3986 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found Mar 18 08:48:49.979920 master-0 kubenswrapper[3986]: E0318 08:48:49.979435 3986 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e025d334-20e7-491f-8027-194251398747-metrics-tls podName:e025d334-20e7-491f-8027-194251398747 nodeName:}" failed. No retries permitted until 2026-03-18 08:48:53.979430045 +0000 UTC m=+165.386600127 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/e025d334-20e7-491f-8027-194251398747-metrics-tls") pod "dns-operator-9c5679d8f-b9pn7" (UID: "e025d334-20e7-491f-8027-194251398747") : secret "metrics-tls" not found Mar 18 08:48:49.979920 master-0 kubenswrapper[3986]: E0318 08:48:49.979463 3986 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found Mar 18 08:48:49.979920 master-0 kubenswrapper[3986]: E0318 08:48:49.979479 3986 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7962fb40-1170-4c00-b1bf-92966aeae807-image-registry-operator-tls podName:7962fb40-1170-4c00-b1bf-92966aeae807 nodeName:}" failed. No retries permitted until 2026-03-18 08:48:53.979473276 +0000 UTC m=+165.386643358 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/7962fb40-1170-4c00-b1bf-92966aeae807-image-registry-operator-tls") pod "cluster-image-registry-operator-5549dc66cb-vxsth" (UID: "7962fb40-1170-4c00-b1bf-92966aeae807") : secret "image-registry-operator-tls" not found Mar 18 08:48:49.979920 master-0 kubenswrapper[3986]: E0318 08:48:49.979505 3986 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: secret "catalog-operator-serving-cert" not found Mar 18 08:48:49.979920 master-0 kubenswrapper[3986]: E0318 08:48:49.979519 3986 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b065df33-7911-456e-b3a2-1f8c8d53e053-srv-cert podName:b065df33-7911-456e-b3a2-1f8c8d53e053 nodeName:}" failed. No retries permitted until 2026-03-18 08:48:53.979514767 +0000 UTC m=+165.386684849 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/b065df33-7911-456e-b3a2-1f8c8d53e053-srv-cert") pod "catalog-operator-68f85b4d6c-swdsh" (UID: "b065df33-7911-456e-b3a2-1f8c8d53e053") : secret "catalog-operator-serving-cert" not found Mar 18 08:48:49.979920 master-0 kubenswrapper[3986]: I0318 08:48:49.979223 3986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/bd8ee9ae-4765-46bc-84d7-b5857fc3fb4a-apiservice-cert\") pod \"cluster-node-tuning-operator-598fbc5f8f-tj9b9\" (UID: \"bd8ee9ae-4765-46bc-84d7-b5857fc3fb4a\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-tj9b9" Mar 18 08:48:50.080912 master-0 kubenswrapper[3986]: I0318 08:48:50.080868 3986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/e7b72267-fc08-41ed-a92b-9fca7372aba6-cluster-monitoring-operator-tls\") pod 
\"cluster-monitoring-operator-58845fbb57-nc7hf\" (UID: \"e7b72267-fc08-41ed-a92b-9fca7372aba6\") " pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-nc7hf" Mar 18 08:48:50.080912 master-0 kubenswrapper[3986]: I0318 08:48:50.080912 3986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/159a26f5-3cfc-4db2-88e9-bff5d8a613fc-webhook-certs\") pod \"multus-admission-controller-5dbbb8b86f-2cf64\" (UID: \"159a26f5-3cfc-4db2-88e9-bff5d8a613fc\") " pod="openshift-multus/multus-admission-controller-5dbbb8b86f-2cf64" Mar 18 08:48:50.080912 master-0 kubenswrapper[3986]: I0318 08:48:50.080936 3986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/3d9fe248-ba87-47e3-911a-1b2b112b5683-srv-cert\") pod \"olm-operator-5c9796789-sl5kr\" (UID: \"3d9fe248-ba87-47e3-911a-1b2b112b5683\") " pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-sl5kr" Mar 18 08:48:50.081375 master-0 kubenswrapper[3986]: I0318 08:48:50.080957 3986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/34f2ec5e-cb68-415b-a9f2-5b7f10fa9bbe-marketplace-operator-metrics\") pod \"marketplace-operator-89ccd998f-bcwsv\" (UID: \"34f2ec5e-cb68-415b-a9f2-5b7f10fa9bbe\") " pod="openshift-marketplace/marketplace-operator-89ccd998f-bcwsv" Mar 18 08:48:50.081375 master-0 kubenswrapper[3986]: E0318 08:48:50.080963 3986 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found Mar 18 08:48:50.081375 master-0 kubenswrapper[3986]: E0318 08:48:50.081019 3986 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e7b72267-fc08-41ed-a92b-9fca7372aba6-cluster-monitoring-operator-tls podName:e7b72267-fc08-41ed-a92b-9fca7372aba6 nodeName:}" failed. 
No retries permitted until 2026-03-18 08:48:54.081004864 +0000 UTC m=+165.488174946 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/e7b72267-fc08-41ed-a92b-9fca7372aba6-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-58845fbb57-nc7hf" (UID: "e7b72267-fc08-41ed-a92b-9fca7372aba6") : secret "cluster-monitoring-operator-tls" not found Mar 18 08:48:50.081375 master-0 kubenswrapper[3986]: E0318 08:48:50.081042 3986 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found Mar 18 08:48:50.081375 master-0 kubenswrapper[3986]: E0318 08:48:50.081075 3986 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/34f2ec5e-cb68-415b-a9f2-5b7f10fa9bbe-marketplace-operator-metrics podName:34f2ec5e-cb68-415b-a9f2-5b7f10fa9bbe nodeName:}" failed. No retries permitted until 2026-03-18 08:48:54.081065636 +0000 UTC m=+165.488235718 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/34f2ec5e-cb68-415b-a9f2-5b7f10fa9bbe-marketplace-operator-metrics") pod "marketplace-operator-89ccd998f-bcwsv" (UID: "34f2ec5e-cb68-415b-a9f2-5b7f10fa9bbe") : secret "marketplace-operator-metrics" not found Mar 18 08:48:50.081375 master-0 kubenswrapper[3986]: E0318 08:48:50.081118 3986 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found Mar 18 08:48:50.081375 master-0 kubenswrapper[3986]: E0318 08:48:50.081138 3986 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/159a26f5-3cfc-4db2-88e9-bff5d8a613fc-webhook-certs podName:159a26f5-3cfc-4db2-88e9-bff5d8a613fc nodeName:}" failed. No retries permitted until 2026-03-18 08:48:54.081130458 +0000 UTC m=+165.488300540 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/159a26f5-3cfc-4db2-88e9-bff5d8a613fc-webhook-certs") pod "multus-admission-controller-5dbbb8b86f-2cf64" (UID: "159a26f5-3cfc-4db2-88e9-bff5d8a613fc") : secret "multus-admission-controller-secret" not found Mar 18 08:48:50.081375 master-0 kubenswrapper[3986]: E0318 08:48:50.081180 3986 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found Mar 18 08:48:50.081375 master-0 kubenswrapper[3986]: E0318 08:48:50.081201 3986 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3d9fe248-ba87-47e3-911a-1b2b112b5683-srv-cert podName:3d9fe248-ba87-47e3-911a-1b2b112b5683 nodeName:}" failed. No retries permitted until 2026-03-18 08:48:54.08119511 +0000 UTC m=+165.488365192 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/3d9fe248-ba87-47e3-911a-1b2b112b5683-srv-cert") pod "olm-operator-5c9796789-sl5kr" (UID: "3d9fe248-ba87-47e3-911a-1b2b112b5683") : secret "olm-operator-serving-cert" not found Mar 18 08:48:53.740359 master-0 kubenswrapper[3986]: I0318 08:48:53.739892 3986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d858bfd6-e69b-4c93-a6d7-95cc0fc3ca29-metrics-certs\") pod \"network-metrics-daemon-6x85n\" (UID: \"d858bfd6-e69b-4c93-a6d7-95cc0fc3ca29\") " pod="openshift-multus/network-metrics-daemon-6x85n" Mar 18 08:48:53.741368 master-0 kubenswrapper[3986]: E0318 08:48:53.740252 3986 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: secret "metrics-daemon-secret" not found Mar 18 08:48:53.741368 master-0 kubenswrapper[3986]: E0318 08:48:53.740552 3986 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d858bfd6-e69b-4c93-a6d7-95cc0fc3ca29-metrics-certs 
podName:d858bfd6-e69b-4c93-a6d7-95cc0fc3ca29 nodeName:}" failed. No retries permitted until 2026-03-18 08:49:57.740516004 +0000 UTC m=+229.147686126 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/d858bfd6-e69b-4c93-a6d7-95cc0fc3ca29-metrics-certs") pod "network-metrics-daemon-6x85n" (UID: "d858bfd6-e69b-4c93-a6d7-95cc0fc3ca29") : secret "metrics-daemon-secret" not found Mar 18 08:48:54.043779 master-0 kubenswrapper[3986]: I0318 08:48:54.043629 3986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b065df33-7911-456e-b3a2-1f8c8d53e053-srv-cert\") pod \"catalog-operator-68f85b4d6c-swdsh\" (UID: \"b065df33-7911-456e-b3a2-1f8c8d53e053\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-swdsh" Mar 18 08:48:54.043779 master-0 kubenswrapper[3986]: I0318 08:48:54.043687 3986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/bd8ee9ae-4765-46bc-84d7-b5857fc3fb4a-apiservice-cert\") pod \"cluster-node-tuning-operator-598fbc5f8f-tj9b9\" (UID: \"bd8ee9ae-4765-46bc-84d7-b5857fc3fb4a\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-tj9b9" Mar 18 08:48:54.044080 master-0 kubenswrapper[3986]: E0318 08:48:54.043916 3986 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: secret "catalog-operator-serving-cert" not found Mar 18 08:48:54.044080 master-0 kubenswrapper[3986]: I0318 08:48:54.044017 3986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/59d50dd5-6793-4f96-a769-31e086ecc7e4-package-server-manager-serving-cert\") pod \"package-server-manager-7b95f86987-q8ff6\" (UID: \"59d50dd5-6793-4f96-a769-31e086ecc7e4\") " 
pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-q8ff6" Mar 18 08:48:54.044080 master-0 kubenswrapper[3986]: E0318 08:48:54.044061 3986 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b065df33-7911-456e-b3a2-1f8c8d53e053-srv-cert podName:b065df33-7911-456e-b3a2-1f8c8d53e053 nodeName:}" failed. No retries permitted until 2026-03-18 08:49:02.044027995 +0000 UTC m=+173.451198087 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/b065df33-7911-456e-b3a2-1f8c8d53e053-srv-cert") pod "catalog-operator-68f85b4d6c-swdsh" (UID: "b065df33-7911-456e-b3a2-1f8c8d53e053") : secret "catalog-operator-serving-cert" not found Mar 18 08:48:54.044080 master-0 kubenswrapper[3986]: E0318 08:48:54.044077 3986 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found Mar 18 08:48:54.044284 master-0 kubenswrapper[3986]: I0318 08:48:54.044112 3986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/bd8ee9ae-4765-46bc-84d7-b5857fc3fb4a-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-598fbc5f8f-tj9b9\" (UID: \"bd8ee9ae-4765-46bc-84d7-b5857fc3fb4a\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-tj9b9" Mar 18 08:48:54.044284 master-0 kubenswrapper[3986]: E0318 08:48:54.044128 3986 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found Mar 18 08:48:54.044284 master-0 kubenswrapper[3986]: E0318 08:48:54.044141 3986 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bd8ee9ae-4765-46bc-84d7-b5857fc3fb4a-apiservice-cert podName:bd8ee9ae-4765-46bc-84d7-b5857fc3fb4a nodeName:}" failed. 
No retries permitted until 2026-03-18 08:49:02.044123058 +0000 UTC m=+173.451293390 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/bd8ee9ae-4765-46bc-84d7-b5857fc3fb4a-apiservice-cert") pod "cluster-node-tuning-operator-598fbc5f8f-tj9b9" (UID: "bd8ee9ae-4765-46bc-84d7-b5857fc3fb4a") : secret "performance-addon-operator-webhook-cert" not found Mar 18 08:48:54.044398 master-0 kubenswrapper[3986]: I0318 08:48:54.044324 3986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/94e9e4fc-bfaa-4ba5-ad6d-fe76b91932e9-metrics-tls\") pod \"ingress-operator-66b84d69b-7h94d\" (UID: \"94e9e4fc-bfaa-4ba5-ad6d-fe76b91932e9\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-7h94d" Mar 18 08:48:54.044398 master-0 kubenswrapper[3986]: E0318 08:48:54.044362 3986 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found Mar 18 08:48:54.044398 master-0 kubenswrapper[3986]: E0318 08:48:54.044391 3986 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/94e9e4fc-bfaa-4ba5-ad6d-fe76b91932e9-metrics-tls podName:94e9e4fc-bfaa-4ba5-ad6d-fe76b91932e9 nodeName:}" failed. No retries permitted until 2026-03-18 08:49:02.044383646 +0000 UTC m=+173.451553728 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/94e9e4fc-bfaa-4ba5-ad6d-fe76b91932e9-metrics-tls") pod "ingress-operator-66b84d69b-7h94d" (UID: "94e9e4fc-bfaa-4ba5-ad6d-fe76b91932e9") : secret "metrics-tls" not found Mar 18 08:48:54.044398 master-0 kubenswrapper[3986]: E0318 08:48:54.044179 3986 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found Mar 18 08:48:54.044545 master-0 kubenswrapper[3986]: E0318 08:48:54.044421 3986 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bd8ee9ae-4765-46bc-84d7-b5857fc3fb4a-node-tuning-operator-tls podName:bd8ee9ae-4765-46bc-84d7-b5857fc3fb4a nodeName:}" failed. No retries permitted until 2026-03-18 08:49:02.044416147 +0000 UTC m=+173.451586229 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/bd8ee9ae-4765-46bc-84d7-b5857fc3fb4a-node-tuning-operator-tls") pod "cluster-node-tuning-operator-598fbc5f8f-tj9b9" (UID: "bd8ee9ae-4765-46bc-84d7-b5857fc3fb4a") : secret "node-tuning-operator-tls" not found Mar 18 08:48:54.044545 master-0 kubenswrapper[3986]: I0318 08:48:54.044416 3986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/e025d334-20e7-491f-8027-194251398747-metrics-tls\") pod \"dns-operator-9c5679d8f-b9pn7\" (UID: \"e025d334-20e7-491f-8027-194251398747\") " pod="openshift-dns-operator/dns-operator-9c5679d8f-b9pn7" Mar 18 08:48:54.044545 master-0 kubenswrapper[3986]: I0318 08:48:54.044455 3986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/7962fb40-1170-4c00-b1bf-92966aeae807-image-registry-operator-tls\") pod \"cluster-image-registry-operator-5549dc66cb-vxsth\" (UID: \"7962fb40-1170-4c00-b1bf-92966aeae807\") " 
pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-vxsth" Mar 18 08:48:54.044545 master-0 kubenswrapper[3986]: E0318 08:48:54.044483 3986 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found Mar 18 08:48:54.044545 master-0 kubenswrapper[3986]: E0318 08:48:54.044506 3986 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found Mar 18 08:48:54.044545 master-0 kubenswrapper[3986]: E0318 08:48:54.044518 3986 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e025d334-20e7-491f-8027-194251398747-metrics-tls podName:e025d334-20e7-491f-8027-194251398747 nodeName:}" failed. No retries permitted until 2026-03-18 08:49:02.044508309 +0000 UTC m=+173.451678401 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/e025d334-20e7-491f-8027-194251398747-metrics-tls") pod "dns-operator-9c5679d8f-b9pn7" (UID: "e025d334-20e7-491f-8027-194251398747") : secret "metrics-tls" not found Mar 18 08:48:54.044545 master-0 kubenswrapper[3986]: E0318 08:48:54.044540 3986 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7962fb40-1170-4c00-b1bf-92966aeae807-image-registry-operator-tls podName:7962fb40-1170-4c00-b1bf-92966aeae807 nodeName:}" failed. No retries permitted until 2026-03-18 08:49:02.0445294 +0000 UTC m=+173.451699492 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/7962fb40-1170-4c00-b1bf-92966aeae807-image-registry-operator-tls") pod "cluster-image-registry-operator-5549dc66cb-vxsth" (UID: "7962fb40-1170-4c00-b1bf-92966aeae807") : secret "image-registry-operator-tls" not found Mar 18 08:48:54.044989 master-0 kubenswrapper[3986]: E0318 08:48:54.044557 3986 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/59d50dd5-6793-4f96-a769-31e086ecc7e4-package-server-manager-serving-cert podName:59d50dd5-6793-4f96-a769-31e086ecc7e4 nodeName:}" failed. No retries permitted until 2026-03-18 08:49:02.044550581 +0000 UTC m=+173.451720673 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/59d50dd5-6793-4f96-a769-31e086ecc7e4-package-server-manager-serving-cert") pod "package-server-manager-7b95f86987-q8ff6" (UID: "59d50dd5-6793-4f96-a769-31e086ecc7e4") : secret "package-server-manager-serving-cert" not found Mar 18 08:48:54.145885 master-0 kubenswrapper[3986]: I0318 08:48:54.145653 3986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/e7b72267-fc08-41ed-a92b-9fca7372aba6-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-58845fbb57-nc7hf\" (UID: \"e7b72267-fc08-41ed-a92b-9fca7372aba6\") " pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-nc7hf" Mar 18 08:48:54.145885 master-0 kubenswrapper[3986]: I0318 08:48:54.145724 3986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/159a26f5-3cfc-4db2-88e9-bff5d8a613fc-webhook-certs\") pod \"multus-admission-controller-5dbbb8b86f-2cf64\" (UID: \"159a26f5-3cfc-4db2-88e9-bff5d8a613fc\") " pod="openshift-multus/multus-admission-controller-5dbbb8b86f-2cf64" Mar 18 
08:48:54.145885 master-0 kubenswrapper[3986]: I0318 08:48:54.145764 3986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/3d9fe248-ba87-47e3-911a-1b2b112b5683-srv-cert\") pod \"olm-operator-5c9796789-sl5kr\" (UID: \"3d9fe248-ba87-47e3-911a-1b2b112b5683\") " pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-sl5kr" Mar 18 08:48:54.145885 master-0 kubenswrapper[3986]: I0318 08:48:54.145805 3986 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/34f2ec5e-cb68-415b-a9f2-5b7f10fa9bbe-marketplace-operator-metrics\") pod \"marketplace-operator-89ccd998f-bcwsv\" (UID: \"34f2ec5e-cb68-415b-a9f2-5b7f10fa9bbe\") " pod="openshift-marketplace/marketplace-operator-89ccd998f-bcwsv" Mar 18 08:48:54.146300 master-0 kubenswrapper[3986]: E0318 08:48:54.145919 3986 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found Mar 18 08:48:54.146300 master-0 kubenswrapper[3986]: E0318 08:48:54.146002 3986 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e7b72267-fc08-41ed-a92b-9fca7372aba6-cluster-monitoring-operator-tls podName:e7b72267-fc08-41ed-a92b-9fca7372aba6 nodeName:}" failed. No retries permitted until 2026-03-18 08:49:02.145977325 +0000 UTC m=+173.553147417 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/e7b72267-fc08-41ed-a92b-9fca7372aba6-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-58845fbb57-nc7hf" (UID: "e7b72267-fc08-41ed-a92b-9fca7372aba6") : secret "cluster-monitoring-operator-tls" not found Mar 18 08:48:54.146300 master-0 kubenswrapper[3986]: E0318 08:48:54.146049 3986 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found Mar 18 08:48:54.146300 master-0 kubenswrapper[3986]: E0318 08:48:54.146192 3986 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/159a26f5-3cfc-4db2-88e9-bff5d8a613fc-webhook-certs podName:159a26f5-3cfc-4db2-88e9-bff5d8a613fc nodeName:}" failed. No retries permitted until 2026-03-18 08:49:02.146159351 +0000 UTC m=+173.553329443 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/159a26f5-3cfc-4db2-88e9-bff5d8a613fc-webhook-certs") pod "multus-admission-controller-5dbbb8b86f-2cf64" (UID: "159a26f5-3cfc-4db2-88e9-bff5d8a613fc") : secret "multus-admission-controller-secret" not found Mar 18 08:48:54.146641 master-0 kubenswrapper[3986]: E0318 08:48:54.146306 3986 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found Mar 18 08:48:54.146641 master-0 kubenswrapper[3986]: E0318 08:48:54.146398 3986 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/34f2ec5e-cb68-415b-a9f2-5b7f10fa9bbe-marketplace-operator-metrics podName:34f2ec5e-cb68-415b-a9f2-5b7f10fa9bbe nodeName:}" failed. No retries permitted until 2026-03-18 08:49:02.146371117 +0000 UTC m=+173.553541229 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/34f2ec5e-cb68-415b-a9f2-5b7f10fa9bbe-marketplace-operator-metrics") pod "marketplace-operator-89ccd998f-bcwsv" (UID: "34f2ec5e-cb68-415b-a9f2-5b7f10fa9bbe") : secret "marketplace-operator-metrics" not found Mar 18 08:48:54.146641 master-0 kubenswrapper[3986]: E0318 08:48:54.146512 3986 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found Mar 18 08:48:54.146641 master-0 kubenswrapper[3986]: E0318 08:48:54.146575 3986 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3d9fe248-ba87-47e3-911a-1b2b112b5683-srv-cert podName:3d9fe248-ba87-47e3-911a-1b2b112b5683 nodeName:}" failed. No retries permitted until 2026-03-18 08:49:02.146555623 +0000 UTC m=+173.553725955 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/3d9fe248-ba87-47e3-911a-1b2b112b5683-srv-cert") pod "olm-operator-5c9796789-sl5kr" (UID: "3d9fe248-ba87-47e3-911a-1b2b112b5683") : secret "olm-operator-serving-cert" not found Mar 18 08:48:55.857179 master-0 systemd[1]: Stopping Kubernetes Kubelet... Mar 18 08:48:55.871640 master-0 systemd[1]: kubelet.service: Deactivated successfully. Mar 18 08:48:55.872136 master-0 systemd[1]: Stopped Kubernetes Kubelet. Mar 18 08:48:55.874768 master-0 systemd[1]: kubelet.service: Consumed 10.492s CPU time. Mar 18 08:48:55.897783 master-0 systemd[1]: Starting Kubernetes Kubelet... Mar 18 08:48:56.075476 master-0 kubenswrapper[7620]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Mar 18 08:48:56.075476 master-0 kubenswrapper[7620]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Mar 18 08:48:56.075476 master-0 kubenswrapper[7620]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 18 08:48:56.075476 master-0 kubenswrapper[7620]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 18 08:48:56.075476 master-0 kubenswrapper[7620]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Mar 18 08:48:56.075476 master-0 kubenswrapper[7620]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Mar 18 08:48:56.077358 master-0 kubenswrapper[7620]: I0318 08:48:56.075531 7620 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 18 08:48:56.078312 master-0 kubenswrapper[7620]: W0318 08:48:56.078277 7620 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Mar 18 08:48:56.078312 master-0 kubenswrapper[7620]: W0318 08:48:56.078296 7620 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Mar 18 08:48:56.078312 master-0 kubenswrapper[7620]: W0318 08:48:56.078301 7620 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Mar 18 08:48:56.078312 master-0 kubenswrapper[7620]: W0318 08:48:56.078306 7620 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Mar 18 08:48:56.078312 master-0 kubenswrapper[7620]: W0318 08:48:56.078312 7620 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Mar 18 08:48:56.078312 master-0 kubenswrapper[7620]: W0318 08:48:56.078316 7620 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Mar 18 08:48:56.078312 master-0 kubenswrapper[7620]: W0318 08:48:56.078321 7620 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Mar 18 08:48:56.078312 master-0 kubenswrapper[7620]: W0318 08:48:56.078326 7620 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Mar 18 08:48:56.078312 master-0 kubenswrapper[7620]: W0318 08:48:56.078332 7620 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Mar 18 08:48:56.078702 master-0 kubenswrapper[7620]: W0318 08:48:56.078338 7620 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Mar 18 08:48:56.078702 master-0 kubenswrapper[7620]: W0318 08:48:56.078343 7620 feature_gate.go:330] unrecognized feature gate: SignatureStores Mar 18 08:48:56.078702 master-0 kubenswrapper[7620]: W0318 08:48:56.078347 7620 feature_gate.go:330] unrecognized feature gate: 
GCPClusterHostedDNS Mar 18 08:48:56.078702 master-0 kubenswrapper[7620]: W0318 08:48:56.078351 7620 feature_gate.go:330] unrecognized feature gate: NewOLM Mar 18 08:48:56.078702 master-0 kubenswrapper[7620]: W0318 08:48:56.078355 7620 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Mar 18 08:48:56.078702 master-0 kubenswrapper[7620]: W0318 08:48:56.078360 7620 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Mar 18 08:48:56.078702 master-0 kubenswrapper[7620]: W0318 08:48:56.078364 7620 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Mar 18 08:48:56.078702 master-0 kubenswrapper[7620]: W0318 08:48:56.078368 7620 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Mar 18 08:48:56.078702 master-0 kubenswrapper[7620]: W0318 08:48:56.078372 7620 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Mar 18 08:48:56.078702 master-0 kubenswrapper[7620]: W0318 08:48:56.078377 7620 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Mar 18 08:48:56.078702 master-0 kubenswrapper[7620]: W0318 08:48:56.078381 7620 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Mar 18 08:48:56.078702 master-0 kubenswrapper[7620]: W0318 08:48:56.078385 7620 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Mar 18 08:48:56.078702 master-0 kubenswrapper[7620]: W0318 08:48:56.078389 7620 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Mar 18 08:48:56.078702 master-0 kubenswrapper[7620]: W0318 08:48:56.078395 7620 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Mar 18 08:48:56.078702 master-0 kubenswrapper[7620]: W0318 08:48:56.078400 7620 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Mar 18 08:48:56.078702 master-0 kubenswrapper[7620]: W0318 08:48:56.078410 7620 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Mar 18 08:48:56.078702 master-0 kubenswrapper[7620]: W0318 08:48:56.078416 7620 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Mar 18 08:48:56.078702 master-0 kubenswrapper[7620]: W0318 08:48:56.078420 7620 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Mar 18 08:48:56.078702 master-0 kubenswrapper[7620]: W0318 08:48:56.078425 7620 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Mar 18 08:48:56.078702 master-0 kubenswrapper[7620]: W0318 08:48:56.078429 7620 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Mar 18 08:48:56.079519 master-0 kubenswrapper[7620]: W0318 08:48:56.078433 7620 feature_gate.go:330] unrecognized feature gate: InsightsConfig Mar 18 08:48:56.079519 master-0 kubenswrapper[7620]: W0318 08:48:56.078437 7620 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Mar 18 08:48:56.079519 master-0 kubenswrapper[7620]: W0318 08:48:56.078442 7620 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Mar 18 08:48:56.079519 master-0 kubenswrapper[7620]: W0318 08:48:56.078446 7620 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Mar 18 08:48:56.079519 master-0 kubenswrapper[7620]: W0318 08:48:56.078450 7620 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Mar 18 08:48:56.079519 master-0 kubenswrapper[7620]: W0318 08:48:56.078455 7620 feature_gate.go:330] unrecognized feature gate: PinnedImages Mar 18 08:48:56.079519 master-0 kubenswrapper[7620]: W0318 08:48:56.078459 7620 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Mar 18 08:48:56.079519 master-0 kubenswrapper[7620]: 
W0318 08:48:56.078465 7620 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Mar 18 08:48:56.079519 master-0 kubenswrapper[7620]: W0318 08:48:56.078470 7620 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Mar 18 08:48:56.079519 master-0 kubenswrapper[7620]: W0318 08:48:56.078474 7620 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Mar 18 08:48:56.079519 master-0 kubenswrapper[7620]: W0318 08:48:56.078479 7620 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Mar 18 08:48:56.079519 master-0 kubenswrapper[7620]: W0318 08:48:56.078483 7620 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Mar 18 08:48:56.079519 master-0 kubenswrapper[7620]: W0318 08:48:56.078487 7620 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Mar 18 08:48:56.079519 master-0 kubenswrapper[7620]: W0318 08:48:56.078491 7620 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Mar 18 08:48:56.079519 master-0 kubenswrapper[7620]: W0318 08:48:56.078497 7620 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Mar 18 08:48:56.079519 master-0 kubenswrapper[7620]: W0318 08:48:56.078503 7620 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Mar 18 08:48:56.079519 master-0 kubenswrapper[7620]: W0318 08:48:56.078507 7620 feature_gate.go:330] unrecognized feature gate: PlatformOperators Mar 18 08:48:56.079519 master-0 kubenswrapper[7620]: W0318 08:48:56.078512 7620 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Mar 18 08:48:56.079519 master-0 kubenswrapper[7620]: W0318 08:48:56.078517 7620 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Mar 18 08:48:56.080317 master-0 kubenswrapper[7620]: W0318 08:48:56.078522 7620 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Mar 18 08:48:56.080317 master-0 kubenswrapper[7620]: W0318 08:48:56.078526 7620 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Mar 18 08:48:56.080317 master-0 kubenswrapper[7620]: W0318 08:48:56.078530 7620 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Mar 18 08:48:56.080317 master-0 kubenswrapper[7620]: W0318 08:48:56.078535 7620 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Mar 18 08:48:56.080317 master-0 kubenswrapper[7620]: W0318 08:48:56.078539 7620 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Mar 18 08:48:56.080317 master-0 kubenswrapper[7620]: W0318 08:48:56.078544 7620 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Mar 18 08:48:56.080317 master-0 kubenswrapper[7620]: W0318 08:48:56.078549 7620 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Mar 18 08:48:56.080317 master-0 kubenswrapper[7620]: W0318 08:48:56.078553 7620 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Mar 18 08:48:56.080317 master-0 kubenswrapper[7620]: W0318 08:48:56.078558 7620 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Mar 18 08:48:56.080317 master-0 kubenswrapper[7620]: W0318 08:48:56.078564 7620 feature_gate.go:330] unrecognized feature gate: OVNObservability
Mar 18 08:48:56.080317 master-0 kubenswrapper[7620]: W0318 08:48:56.078568 7620 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Mar 18 08:48:56.080317 master-0 kubenswrapper[7620]: W0318 08:48:56.078572 7620 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Mar 18 08:48:56.080317 master-0 kubenswrapper[7620]: W0318 08:48:56.078576 7620 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Mar 18 08:48:56.080317 master-0 kubenswrapper[7620]: W0318 08:48:56.078581 7620 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Mar 18 08:48:56.080317 master-0 kubenswrapper[7620]: W0318 08:48:56.078585 7620 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Mar 18 08:48:56.080317 master-0 kubenswrapper[7620]: W0318 08:48:56.078589 7620 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Mar 18 08:48:56.080317 master-0 kubenswrapper[7620]: W0318 08:48:56.078595 7620 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Mar 18 08:48:56.080317 master-0 kubenswrapper[7620]: W0318 08:48:56.078599 7620 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Mar 18 08:48:56.080317 master-0 kubenswrapper[7620]: W0318 08:48:56.078604 7620 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Mar 18 08:48:56.081255 master-0 kubenswrapper[7620]: W0318 08:48:56.078608 7620 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Mar 18 08:48:56.081255 master-0 kubenswrapper[7620]: W0318 08:48:56.078612 7620 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Mar 18 08:48:56.081255 master-0 kubenswrapper[7620]: W0318 08:48:56.078615 7620 feature_gate.go:330] unrecognized feature gate: Example
Mar 18 08:48:56.081255 master-0 kubenswrapper[7620]: W0318 08:48:56.078619 7620 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Mar 18 08:48:56.081255 master-0 kubenswrapper[7620]: W0318 08:48:56.078624 7620 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Mar 18 08:48:56.081255 master-0 kubenswrapper[7620]: I0318 08:48:56.078726 7620 flags.go:64] FLAG: --address="0.0.0.0"
Mar 18 08:48:56.081255 master-0 kubenswrapper[7620]: I0318 08:48:56.078736 7620 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Mar 18 08:48:56.081255 master-0 kubenswrapper[7620]: I0318 08:48:56.078742 7620 flags.go:64] FLAG: --anonymous-auth="true"
Mar 18 08:48:56.081255 master-0 kubenswrapper[7620]: I0318 08:48:56.078748 7620 flags.go:64] FLAG: --application-metrics-count-limit="100"
Mar 18 08:48:56.081255 master-0 kubenswrapper[7620]: I0318 08:48:56.078755 7620 flags.go:64] FLAG: --authentication-token-webhook="false"
Mar 18 08:48:56.081255 master-0 kubenswrapper[7620]: I0318 08:48:56.078761 7620 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Mar 18 08:48:56.081255 master-0 kubenswrapper[7620]: I0318 08:48:56.078767 7620 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Mar 18 08:48:56.081255 master-0 kubenswrapper[7620]: I0318 08:48:56.078773 7620 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Mar 18 08:48:56.081255 master-0 kubenswrapper[7620]: I0318 08:48:56.078777 7620 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Mar 18 08:48:56.081255 master-0 kubenswrapper[7620]: I0318 08:48:56.078782 7620 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Mar 18 08:48:56.081255 master-0 kubenswrapper[7620]: I0318 08:48:56.078786 7620 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Mar 18 08:48:56.081255 master-0 kubenswrapper[7620]: I0318 08:48:56.078791 7620 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Mar 18 08:48:56.081255 master-0 kubenswrapper[7620]: I0318 08:48:56.078795 7620 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Mar 18 08:48:56.081255 master-0 kubenswrapper[7620]: I0318 08:48:56.078799 7620 flags.go:64] FLAG: --cgroup-root=""
Mar 18 08:48:56.081255 master-0 kubenswrapper[7620]: I0318 08:48:56.078804 7620 flags.go:64] FLAG: --cgroups-per-qos="true"
Mar 18 08:48:56.081255 master-0 kubenswrapper[7620]: I0318 08:48:56.078808 7620 flags.go:64] FLAG: --client-ca-file=""
Mar 18 08:48:56.081255 master-0 kubenswrapper[7620]: I0318 08:48:56.078813 7620 flags.go:64] FLAG: --cloud-config=""
Mar 18 08:48:56.081255 master-0 kubenswrapper[7620]: I0318 08:48:56.078819 7620 flags.go:64] FLAG: --cloud-provider=""
Mar 18 08:48:56.081255 master-0 kubenswrapper[7620]: I0318 08:48:56.078824 7620 flags.go:64] FLAG: --cluster-dns="[]"
Mar 18 08:48:56.082463 master-0 kubenswrapper[7620]: I0318 08:48:56.078830 7620 flags.go:64] FLAG: --cluster-domain=""
Mar 18 08:48:56.082463 master-0 kubenswrapper[7620]: I0318 08:48:56.078834 7620 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Mar 18 08:48:56.082463 master-0 kubenswrapper[7620]: I0318 08:48:56.078839 7620 flags.go:64] FLAG: --config-dir=""
Mar 18 08:48:56.082463 master-0 kubenswrapper[7620]: I0318 08:48:56.078843 7620 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Mar 18 08:48:56.082463 master-0 kubenswrapper[7620]: I0318 08:48:56.078864 7620 flags.go:64] FLAG: --container-log-max-files="5"
Mar 18 08:48:56.082463 master-0 kubenswrapper[7620]: I0318 08:48:56.078871 7620 flags.go:64] FLAG: --container-log-max-size="10Mi"
Mar 18 08:48:56.082463 master-0 kubenswrapper[7620]: I0318 08:48:56.078875 7620 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Mar 18 08:48:56.082463 master-0 kubenswrapper[7620]: I0318 08:48:56.078881 7620 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Mar 18 08:48:56.082463 master-0 kubenswrapper[7620]: I0318 08:48:56.078885 7620 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Mar 18 08:48:56.082463 master-0 kubenswrapper[7620]: I0318 08:48:56.078890 7620 flags.go:64] FLAG: --contention-profiling="false"
Mar 18 08:48:56.082463 master-0 kubenswrapper[7620]: I0318 08:48:56.078894 7620 flags.go:64] FLAG: --cpu-cfs-quota="true"
Mar 18 08:48:56.082463 master-0 kubenswrapper[7620]: I0318 08:48:56.078898 7620 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Mar 18 08:48:56.082463 master-0 kubenswrapper[7620]: I0318 08:48:56.078903 7620 flags.go:64] FLAG: --cpu-manager-policy="none"
Mar 18 08:48:56.082463 master-0 kubenswrapper[7620]: I0318 08:48:56.078908 7620 flags.go:64] FLAG: --cpu-manager-policy-options=""
Mar 18 08:48:56.082463 master-0 kubenswrapper[7620]: I0318 08:48:56.078914 7620 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Mar 18 08:48:56.082463 master-0 kubenswrapper[7620]: I0318 08:48:56.078918 7620 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Mar 18 08:48:56.082463 master-0 kubenswrapper[7620]: I0318 08:48:56.078923 7620 flags.go:64] FLAG: --enable-debugging-handlers="true"
Mar 18 08:48:56.082463 master-0 kubenswrapper[7620]: I0318 08:48:56.078928 7620 flags.go:64] FLAG: --enable-load-reader="false"
Mar 18 08:48:56.082463 master-0 kubenswrapper[7620]: I0318 08:48:56.078932 7620 flags.go:64] FLAG: --enable-server="true"
Mar 18 08:48:56.082463 master-0 kubenswrapper[7620]: I0318 08:48:56.078937 7620 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
Mar 18 08:48:56.082463 master-0 kubenswrapper[7620]: I0318 08:48:56.078942 7620 flags.go:64] FLAG: --event-burst="100"
Mar 18 08:48:56.082463 master-0 kubenswrapper[7620]: I0318 08:48:56.078947 7620 flags.go:64] FLAG: --event-qps="50"
Mar 18 08:48:56.082463 master-0 kubenswrapper[7620]: I0318 08:48:56.078951 7620 flags.go:64] FLAG: --event-storage-age-limit="default=0"
Mar 18 08:48:56.082463 master-0 kubenswrapper[7620]: I0318 08:48:56.078955 7620 flags.go:64] FLAG: --event-storage-event-limit="default=0"
Mar 18 08:48:56.082463 master-0 kubenswrapper[7620]: I0318 08:48:56.078959 7620 flags.go:64] FLAG: --eviction-hard=""
Mar 18 08:48:56.083558 master-0 kubenswrapper[7620]: I0318 08:48:56.078965 7620 flags.go:64] FLAG: --eviction-max-pod-grace-period="0"
Mar 18 08:48:56.083558 master-0 kubenswrapper[7620]: I0318 08:48:56.078969 7620 flags.go:64] FLAG: --eviction-minimum-reclaim=""
Mar 18 08:48:56.083558 master-0 kubenswrapper[7620]: I0318 08:48:56.078973 7620 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s"
Mar 18 08:48:56.083558 master-0 kubenswrapper[7620]: I0318 08:48:56.078980 7620 flags.go:64] FLAG: --eviction-soft=""
Mar 18 08:48:56.083558 master-0 kubenswrapper[7620]: I0318 08:48:56.078984 7620 flags.go:64] FLAG: --eviction-soft-grace-period=""
Mar 18 08:48:56.083558 master-0 kubenswrapper[7620]: I0318 08:48:56.078988 7620 flags.go:64] FLAG: --exit-on-lock-contention="false"
Mar 18 08:48:56.083558 master-0 kubenswrapper[7620]: I0318 08:48:56.078992 7620 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false"
Mar 18 08:48:56.083558 master-0 kubenswrapper[7620]: I0318 08:48:56.078997 7620 flags.go:64] FLAG: --experimental-mounter-path=""
Mar 18 08:48:56.083558 master-0 kubenswrapper[7620]: I0318 08:48:56.079001 7620 flags.go:64] FLAG: --fail-cgroupv1="false"
Mar 18 08:48:56.083558 master-0 kubenswrapper[7620]: I0318 08:48:56.079005 7620 flags.go:64] FLAG: --fail-swap-on="true"
Mar 18 08:48:56.083558 master-0 kubenswrapper[7620]: I0318 08:48:56.079010 7620 flags.go:64] FLAG: --feature-gates=""
Mar 18 08:48:56.083558 master-0 kubenswrapper[7620]: I0318 08:48:56.079015 7620 flags.go:64] FLAG: --file-check-frequency="20s"
Mar 18 08:48:56.083558 master-0 kubenswrapper[7620]: I0318 08:48:56.079020 7620 flags.go:64] FLAG: --global-housekeeping-interval="1m0s"
Mar 18 08:48:56.083558 master-0 kubenswrapper[7620]: I0318 08:48:56.079024 7620 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge"
Mar 18 08:48:56.083558 master-0 kubenswrapper[7620]: I0318 08:48:56.079029 7620 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1"
Mar 18 08:48:56.083558 master-0 kubenswrapper[7620]: I0318 08:48:56.079033 7620 flags.go:64] FLAG: --healthz-port="10248"
Mar 18 08:48:56.083558 master-0 kubenswrapper[7620]: I0318 08:48:56.079038 7620 flags.go:64] FLAG: --help="false"
Mar 18 08:48:56.083558 master-0 kubenswrapper[7620]: I0318 08:48:56.079042 7620 flags.go:64] FLAG: --hostname-override=""
Mar 18 08:48:56.083558 master-0 kubenswrapper[7620]: I0318 08:48:56.079047 7620 flags.go:64] FLAG: --housekeeping-interval="10s"
Mar 18 08:48:56.083558 master-0 kubenswrapper[7620]: I0318 08:48:56.079051 7620 flags.go:64] FLAG: --http-check-frequency="20s"
Mar 18 08:48:56.083558 master-0 kubenswrapper[7620]: I0318 08:48:56.079055 7620 flags.go:64] FLAG: --image-credential-provider-bin-dir=""
Mar 18 08:48:56.083558 master-0 kubenswrapper[7620]: I0318 08:48:56.079060 7620 flags.go:64] FLAG: --image-credential-provider-config=""
Mar 18 08:48:56.083558 master-0 kubenswrapper[7620]: I0318 08:48:56.079065 7620 flags.go:64] FLAG: --image-gc-high-threshold="85"
Mar 18 08:48:56.083558 master-0 kubenswrapper[7620]: I0318 08:48:56.079070 7620 flags.go:64] FLAG: --image-gc-low-threshold="80"
Mar 18 08:48:56.083558 master-0 kubenswrapper[7620]: I0318 08:48:56.079076 7620 flags.go:64] FLAG: --image-service-endpoint=""
Mar 18 08:48:56.084608 master-0 kubenswrapper[7620]: I0318 08:48:56.079082 7620 flags.go:64] FLAG: --kernel-memcg-notification="false"
Mar 18 08:48:56.084608 master-0 kubenswrapper[7620]: I0318 08:48:56.079088 7620 flags.go:64] FLAG: --kube-api-burst="100"
Mar 18 08:48:56.084608 master-0 kubenswrapper[7620]: I0318 08:48:56.079093 7620 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Mar 18 08:48:56.084608 master-0 kubenswrapper[7620]: I0318 08:48:56.079098 7620 flags.go:64] FLAG: --kube-api-qps="50"
Mar 18 08:48:56.084608 master-0 kubenswrapper[7620]: I0318 08:48:56.079103 7620 flags.go:64] FLAG: --kube-reserved=""
Mar 18 08:48:56.084608 master-0 kubenswrapper[7620]: I0318 08:48:56.079107 7620 flags.go:64] FLAG: --kube-reserved-cgroup=""
Mar 18 08:48:56.084608 master-0 kubenswrapper[7620]: I0318 08:48:56.079111 7620 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig"
Mar 18 08:48:56.084608 master-0 kubenswrapper[7620]: I0318 08:48:56.079115 7620 flags.go:64] FLAG: --kubelet-cgroups=""
Mar 18 08:48:56.084608 master-0 kubenswrapper[7620]: I0318 08:48:56.079119 7620 flags.go:64] FLAG: --local-storage-capacity-isolation="true"
Mar 18 08:48:56.084608 master-0 kubenswrapper[7620]: I0318 08:48:56.079124 7620 flags.go:64] FLAG: --lock-file=""
Mar 18 08:48:56.084608 master-0 kubenswrapper[7620]: I0318 08:48:56.079128 7620 flags.go:64] FLAG: --log-cadvisor-usage="false"
Mar 18 08:48:56.084608 master-0 kubenswrapper[7620]: I0318 08:48:56.079132 7620 flags.go:64] FLAG: --log-flush-frequency="5s"
Mar 18 08:48:56.084608 master-0 kubenswrapper[7620]: I0318 08:48:56.079136 7620 flags.go:64] FLAG: --log-json-info-buffer-size="0"
Mar 18 08:48:56.084608 master-0 kubenswrapper[7620]: I0318 08:48:56.079143 7620 flags.go:64] FLAG: --log-json-split-stream="false"
Mar 18 08:48:56.084608 master-0 kubenswrapper[7620]: I0318 08:48:56.079148 7620 flags.go:64] FLAG: --log-text-info-buffer-size="0"
Mar 18 08:48:56.084608 master-0 kubenswrapper[7620]: I0318 08:48:56.079153 7620 flags.go:64] FLAG: --log-text-split-stream="false"
Mar 18 08:48:56.084608 master-0 kubenswrapper[7620]: I0318 08:48:56.079157 7620 flags.go:64] FLAG: --logging-format="text"
Mar 18 08:48:56.084608 master-0 kubenswrapper[7620]: I0318 08:48:56.079161 7620 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
Mar 18 08:48:56.084608 master-0 kubenswrapper[7620]: I0318 08:48:56.079166 7620 flags.go:64] FLAG: --make-iptables-util-chains="true"
Mar 18 08:48:56.084608 master-0 kubenswrapper[7620]: I0318 08:48:56.079181 7620 flags.go:64] FLAG: --manifest-url=""
Mar 18 08:48:56.084608 master-0 kubenswrapper[7620]: I0318 08:48:56.079185 7620 flags.go:64] FLAG: --manifest-url-header=""
Mar 18 08:48:56.084608 master-0 kubenswrapper[7620]: I0318 08:48:56.079191 7620 flags.go:64] FLAG: --max-housekeeping-interval="15s"
Mar 18 08:48:56.084608 master-0 kubenswrapper[7620]: I0318 08:48:56.079196 7620 flags.go:64] FLAG: --max-open-files="1000000"
Mar 18 08:48:56.084608 master-0 kubenswrapper[7620]: I0318 08:48:56.079201 7620 flags.go:64] FLAG: --max-pods="110"
Mar 18 08:48:56.084608 master-0 kubenswrapper[7620]: I0318 08:48:56.079206 7620 flags.go:64] FLAG: --maximum-dead-containers="-1"
Mar 18 08:48:56.085889 master-0 kubenswrapper[7620]: I0318 08:48:56.079210 7620 flags.go:64] FLAG: --maximum-dead-containers-per-container="1"
Mar 18 08:48:56.085889 master-0 kubenswrapper[7620]: I0318 08:48:56.079215 7620 flags.go:64] FLAG: --memory-manager-policy="None"
Mar 18 08:48:56.085889 master-0 kubenswrapper[7620]: I0318 08:48:56.079219 7620 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s"
Mar 18 08:48:56.085889 master-0 kubenswrapper[7620]: I0318 08:48:56.079223 7620 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s"
Mar 18 08:48:56.085889 master-0 kubenswrapper[7620]: I0318 08:48:56.079227 7620 flags.go:64] FLAG: --node-ip="192.168.32.10"
Mar 18 08:48:56.085889 master-0 kubenswrapper[7620]: I0318 08:48:56.079232 7620 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos"
Mar 18 08:48:56.085889 master-0 kubenswrapper[7620]: I0318 08:48:56.079243 7620 flags.go:64] FLAG: --node-status-max-images="50"
Mar 18 08:48:56.085889 master-0 kubenswrapper[7620]: I0318 08:48:56.079247 7620 flags.go:64] FLAG: --node-status-update-frequency="10s"
Mar 18 08:48:56.085889 master-0 kubenswrapper[7620]: I0318 08:48:56.079252 7620 flags.go:64] FLAG: --oom-score-adj="-999"
Mar 18 08:48:56.085889 master-0 kubenswrapper[7620]: I0318 08:48:56.079256 7620 flags.go:64] FLAG: --pod-cidr=""
Mar 18 08:48:56.085889 master-0 kubenswrapper[7620]: I0318 08:48:56.079261 7620 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:53d66d524ca3e787d8dbe30dbc4d9b8612c9cebd505ccb4375a8441814e85422"
Mar 18 08:48:56.085889 master-0 kubenswrapper[7620]: I0318 08:48:56.079268 7620 flags.go:64] FLAG: --pod-manifest-path=""
Mar 18 08:48:56.085889 master-0 kubenswrapper[7620]: I0318 08:48:56.079272 7620 flags.go:64] FLAG: --pod-max-pids="-1"
Mar 18 08:48:56.085889 master-0 kubenswrapper[7620]: I0318 08:48:56.079277 7620 flags.go:64] FLAG: --pods-per-core="0"
Mar 18 08:48:56.085889 master-0 kubenswrapper[7620]: I0318 08:48:56.079283 7620 flags.go:64] FLAG: --port="10250"
Mar 18 08:48:56.085889 master-0 kubenswrapper[7620]: I0318 08:48:56.079288 7620 flags.go:64] FLAG: --protect-kernel-defaults="false"
Mar 18 08:48:56.085889 master-0 kubenswrapper[7620]: I0318 08:48:56.079292 7620 flags.go:64] FLAG: --provider-id=""
Mar 18 08:48:56.085889 master-0 kubenswrapper[7620]: I0318 08:48:56.079296 7620 flags.go:64] FLAG: --qos-reserved=""
Mar 18 08:48:56.085889 master-0 kubenswrapper[7620]: I0318 08:48:56.079300 7620 flags.go:64] FLAG: --read-only-port="10255"
Mar 18 08:48:56.085889 master-0 kubenswrapper[7620]: I0318 08:48:56.079304 7620 flags.go:64] FLAG: --register-node="true"
Mar 18 08:48:56.085889 master-0 kubenswrapper[7620]: I0318 08:48:56.079309 7620 flags.go:64] FLAG: --register-schedulable="true"
Mar 18 08:48:56.085889 master-0 kubenswrapper[7620]: I0318 08:48:56.079313 7620 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule"
Mar 18 08:48:56.085889 master-0 kubenswrapper[7620]: I0318 08:48:56.079322 7620 flags.go:64] FLAG: --registry-burst="10"
Mar 18 08:48:56.087331 master-0 kubenswrapper[7620]: I0318 08:48:56.079327 7620 flags.go:64] FLAG: --registry-qps="5"
Mar 18 08:48:56.087331 master-0 kubenswrapper[7620]: I0318 08:48:56.079331 7620 flags.go:64] FLAG: --reserved-cpus=""
Mar 18 08:48:56.087331 master-0 kubenswrapper[7620]: I0318 08:48:56.079335 7620 flags.go:64] FLAG: --reserved-memory=""
Mar 18 08:48:56.087331 master-0 kubenswrapper[7620]: I0318 08:48:56.079341 7620 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf"
Mar 18 08:48:56.087331 master-0 kubenswrapper[7620]: I0318 08:48:56.079347 7620 flags.go:64] FLAG: --root-dir="/var/lib/kubelet"
Mar 18 08:48:56.087331 master-0 kubenswrapper[7620]: I0318 08:48:56.079352 7620 flags.go:64] FLAG: --rotate-certificates="false"
Mar 18 08:48:56.087331 master-0 kubenswrapper[7620]: I0318 08:48:56.079356 7620 flags.go:64] FLAG: --rotate-server-certificates="false"
Mar 18 08:48:56.087331 master-0 kubenswrapper[7620]: I0318 08:48:56.079361 7620 flags.go:64] FLAG: --runonce="false"
Mar 18 08:48:56.087331 master-0 kubenswrapper[7620]: I0318 08:48:56.079366 7620 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service"
Mar 18 08:48:56.087331 master-0 kubenswrapper[7620]: I0318 08:48:56.079370 7620 flags.go:64] FLAG: --runtime-request-timeout="2m0s"
Mar 18 08:48:56.087331 master-0 kubenswrapper[7620]: I0318 08:48:56.079375 7620 flags.go:64] FLAG: --seccomp-default="false"
Mar 18 08:48:56.087331 master-0 kubenswrapper[7620]: I0318 08:48:56.079379 7620 flags.go:64] FLAG: --serialize-image-pulls="true"
Mar 18 08:48:56.087331 master-0 kubenswrapper[7620]: I0318 08:48:56.079383 7620 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s"
Mar 18 08:48:56.087331 master-0 kubenswrapper[7620]: I0318 08:48:56.079389 7620 flags.go:64] FLAG: --storage-driver-db="cadvisor"
Mar 18 08:48:56.087331 master-0 kubenswrapper[7620]: I0318 08:48:56.079394 7620 flags.go:64] FLAG: --storage-driver-host="localhost:8086"
Mar 18 08:48:56.087331 master-0 kubenswrapper[7620]: I0318 08:48:56.079399 7620 flags.go:64] FLAG: --storage-driver-password="root"
Mar 18 08:48:56.087331 master-0 kubenswrapper[7620]: I0318 08:48:56.079403 7620 flags.go:64] FLAG: --storage-driver-secure="false"
Mar 18 08:48:56.087331 master-0 kubenswrapper[7620]: I0318 08:48:56.079408 7620 flags.go:64] FLAG: --storage-driver-table="stats"
Mar 18 08:48:56.087331 master-0 kubenswrapper[7620]: I0318 08:48:56.079412 7620 flags.go:64] FLAG: --storage-driver-user="root"
Mar 18 08:48:56.087331 master-0 kubenswrapper[7620]: I0318 08:48:56.079417 7620 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s"
Mar 18 08:48:56.087331 master-0 kubenswrapper[7620]: I0318 08:48:56.079422 7620 flags.go:64] FLAG: --sync-frequency="1m0s"
Mar 18 08:48:56.087331 master-0 kubenswrapper[7620]: I0318 08:48:56.079426 7620 flags.go:64] FLAG: --system-cgroups=""
Mar 18 08:48:56.087331 master-0 kubenswrapper[7620]: I0318 08:48:56.079430 7620 flags.go:64] FLAG: --system-reserved="cpu=500m,ephemeral-storage=1Gi,memory=1Gi"
Mar 18 08:48:56.087331 master-0 kubenswrapper[7620]: I0318 08:48:56.079438 7620 flags.go:64] FLAG: --system-reserved-cgroup=""
Mar 18 08:48:56.087331 master-0 kubenswrapper[7620]: I0318 08:48:56.079443 7620 flags.go:64] FLAG: --tls-cert-file=""
Mar 18 08:48:56.088636 master-0 kubenswrapper[7620]: I0318 08:48:56.079448 7620 flags.go:64] FLAG: --tls-cipher-suites="[]"
Mar 18 08:48:56.088636 master-0 kubenswrapper[7620]: I0318 08:48:56.079454 7620 flags.go:64] FLAG: --tls-min-version=""
Mar 18 08:48:56.088636 master-0 kubenswrapper[7620]: I0318 08:48:56.079460 7620 flags.go:64] FLAG: --tls-private-key-file=""
Mar 18 08:48:56.088636 master-0 kubenswrapper[7620]: I0318 08:48:56.079465 7620 flags.go:64] FLAG: --topology-manager-policy="none"
Mar 18 08:48:56.088636 master-0 kubenswrapper[7620]: I0318 08:48:56.079472 7620 flags.go:64] FLAG: --topology-manager-policy-options=""
Mar 18 08:48:56.088636 master-0 kubenswrapper[7620]: I0318 08:48:56.079477 7620 flags.go:64] FLAG: --topology-manager-scope="container"
Mar 18 08:48:56.088636 master-0 kubenswrapper[7620]: I0318 08:48:56.079482 7620 flags.go:64] FLAG: --v="2"
Mar 18 08:48:56.088636 master-0 kubenswrapper[7620]: I0318 08:48:56.079489 7620 flags.go:64] FLAG: --version="false"
Mar 18 08:48:56.088636 master-0 kubenswrapper[7620]: I0318 08:48:56.079496 7620 flags.go:64] FLAG: --vmodule=""
Mar 18 08:48:56.088636 master-0 kubenswrapper[7620]: I0318 08:48:56.079503 7620 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec"
Mar 18 08:48:56.088636 master-0 kubenswrapper[7620]: I0318 08:48:56.079509 7620 flags.go:64] FLAG: --volume-stats-agg-period="1m0s"
Mar 18 08:48:56.088636 master-0 kubenswrapper[7620]: W0318 08:48:56.079643 7620 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Mar 18 08:48:56.088636 master-0 kubenswrapper[7620]: W0318 08:48:56.079650 7620 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Mar 18 08:48:56.088636 master-0 kubenswrapper[7620]: W0318 08:48:56.079655 7620 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Mar 18 08:48:56.088636 master-0 kubenswrapper[7620]: W0318 08:48:56.079659 7620 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Mar 18 08:48:56.088636 master-0 kubenswrapper[7620]: W0318 08:48:56.079663 7620 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Mar 18 08:48:56.088636 master-0 kubenswrapper[7620]: W0318 08:48:56.079667 7620 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Mar 18 08:48:56.088636 master-0 kubenswrapper[7620]: W0318 08:48:56.079671 7620 feature_gate.go:330] unrecognized feature gate: Example
Mar 18 08:48:56.088636 master-0 kubenswrapper[7620]: W0318 08:48:56.079675 7620 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Mar 18 08:48:56.088636 master-0 kubenswrapper[7620]: W0318 08:48:56.079679 7620 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Mar 18 08:48:56.088636 master-0 kubenswrapper[7620]: W0318 08:48:56.079683 7620 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Mar 18 08:48:56.088636 master-0 kubenswrapper[7620]: W0318 08:48:56.079688 7620 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Mar 18 08:48:56.088636 master-0 kubenswrapper[7620]: W0318 08:48:56.079692 7620 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Mar 18 08:48:56.089833 master-0 kubenswrapper[7620]: W0318 08:48:56.079697 7620 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Mar 18 08:48:56.089833 master-0 kubenswrapper[7620]: W0318 08:48:56.079701 7620 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Mar 18 08:48:56.089833 master-0 kubenswrapper[7620]: W0318 08:48:56.079705 7620 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Mar 18 08:48:56.089833 master-0 kubenswrapper[7620]: W0318 08:48:56.079708 7620 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Mar 18 08:48:56.089833 master-0 kubenswrapper[7620]: W0318 08:48:56.079713 7620 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Mar 18 08:48:56.089833 master-0 kubenswrapper[7620]: W0318 08:48:56.079718 7620 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Mar 18 08:48:56.089833 master-0 kubenswrapper[7620]: W0318 08:48:56.079722 7620 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Mar 18 08:48:56.089833 master-0 kubenswrapper[7620]: W0318 08:48:56.079726 7620 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Mar 18 08:48:56.089833 master-0 kubenswrapper[7620]: W0318 08:48:56.079731 7620 feature_gate.go:330] unrecognized feature gate: OVNObservability
Mar 18 08:48:56.089833 master-0 kubenswrapper[7620]: W0318 08:48:56.079735 7620 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Mar 18 08:48:56.089833 master-0 kubenswrapper[7620]: W0318 08:48:56.079738 7620 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Mar 18 08:48:56.089833 master-0 kubenswrapper[7620]: W0318 08:48:56.079742 7620 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Mar 18 08:48:56.089833 master-0 kubenswrapper[7620]: W0318 08:48:56.079746 7620 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Mar 18 08:48:56.089833 master-0 kubenswrapper[7620]: W0318 08:48:56.079752 7620 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Mar 18 08:48:56.089833 master-0 kubenswrapper[7620]: W0318 08:48:56.079755 7620 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Mar 18 08:48:56.089833 master-0 kubenswrapper[7620]: W0318 08:48:56.079761 7620 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Mar 18 08:48:56.089833 master-0 kubenswrapper[7620]: W0318 08:48:56.079766 7620 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Mar 18 08:48:56.089833 master-0 kubenswrapper[7620]: W0318 08:48:56.079771 7620 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Mar 18 08:48:56.089833 master-0 kubenswrapper[7620]: W0318 08:48:56.079774 7620 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Mar 18 08:48:56.090810 master-0 kubenswrapper[7620]: W0318 08:48:56.079779 7620 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Mar 18 08:48:56.090810 master-0 kubenswrapper[7620]: W0318 08:48:56.079784 7620 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Mar 18 08:48:56.090810 master-0 kubenswrapper[7620]: W0318 08:48:56.079789 7620 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Mar 18 08:48:56.090810 master-0 kubenswrapper[7620]: W0318 08:48:56.079793 7620 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Mar 18 08:48:56.090810 master-0 kubenswrapper[7620]: W0318 08:48:56.079797 7620 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Mar 18 08:48:56.090810 master-0 kubenswrapper[7620]: W0318 08:48:56.079801 7620 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Mar 18 08:48:56.090810 master-0 kubenswrapper[7620]: W0318 08:48:56.079805 7620 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Mar 18 08:48:56.090810 master-0 kubenswrapper[7620]: W0318 08:48:56.079808 7620 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Mar 18 08:48:56.090810 master-0 kubenswrapper[7620]: W0318 08:48:56.079812 7620 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Mar 18 08:48:56.090810 master-0 kubenswrapper[7620]: W0318 08:48:56.079816 7620 feature_gate.go:330] unrecognized feature gate: PinnedImages
Mar 18 08:48:56.090810 master-0 kubenswrapper[7620]: W0318 08:48:56.079820 7620 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Mar 18 08:48:56.090810 master-0 kubenswrapper[7620]: W0318 08:48:56.079823 7620 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Mar 18 08:48:56.090810 master-0 kubenswrapper[7620]: W0318 08:48:56.079828 7620 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Mar 18 08:48:56.090810 master-0 kubenswrapper[7620]: W0318 08:48:56.079831 7620 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Mar 18 08:48:56.090810 master-0 kubenswrapper[7620]: W0318 08:48:56.079836 7620 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Mar 18 08:48:56.090810 master-0 kubenswrapper[7620]: W0318 08:48:56.079840 7620 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Mar 18 08:48:56.090810 master-0 kubenswrapper[7620]: W0318 08:48:56.079844 7620 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Mar 18 08:48:56.090810 master-0 kubenswrapper[7620]: W0318 08:48:56.079863 7620 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Mar 18 08:48:56.090810 master-0 kubenswrapper[7620]: W0318 08:48:56.079868 7620 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Mar 18 08:48:56.090810 master-0 kubenswrapper[7620]: W0318 08:48:56.079873 7620 feature_gate.go:330] unrecognized feature gate: SignatureStores
Mar 18 08:48:56.091714 master-0 kubenswrapper[7620]: W0318 08:48:56.079877 7620 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Mar 18 08:48:56.091714 master-0 kubenswrapper[7620]: W0318 08:48:56.079881 7620 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Mar 18 08:48:56.091714 master-0 kubenswrapper[7620]: W0318 08:48:56.079884 7620 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Mar 18 08:48:56.091714 master-0 kubenswrapper[7620]: W0318 08:48:56.079888 7620 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Mar 18 08:48:56.091714 master-0 kubenswrapper[7620]: W0318 08:48:56.079892 7620 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Mar 18 08:48:56.091714 master-0 kubenswrapper[7620]: W0318 08:48:56.079896 7620 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Mar 18 08:48:56.091714 master-0 kubenswrapper[7620]: W0318 08:48:56.079902 7620 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Mar 18 08:48:56.091714 master-0 kubenswrapper[7620]: W0318 08:48:56.079908 7620 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Mar 18 08:48:56.091714 master-0 kubenswrapper[7620]: W0318 08:48:56.079913 7620 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Mar 18 08:48:56.091714 master-0 kubenswrapper[7620]: W0318 08:48:56.079917 7620 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Mar 18 08:48:56.091714 master-0 kubenswrapper[7620]: W0318 08:48:56.079921 7620 feature_gate.go:330] unrecognized feature gate: NewOLM
Mar 18 08:48:56.091714 master-0 kubenswrapper[7620]: W0318 08:48:56.079925 7620 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Mar 18 08:48:56.091714 master-0 kubenswrapper[7620]: W0318 08:48:56.079930 7620 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Mar 18 08:48:56.091714 master-0 kubenswrapper[7620]: W0318 08:48:56.079934 7620 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Mar 18 08:48:56.091714 master-0 kubenswrapper[7620]: W0318 08:48:56.079939 7620 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Mar 18 08:48:56.091714 master-0 kubenswrapper[7620]: W0318 08:48:56.079943 7620 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Mar 18 08:48:56.091714 master-0 kubenswrapper[7620]: W0318 08:48:56.079947 7620 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Mar 18 08:48:56.091714 master-0 kubenswrapper[7620]: W0318 08:48:56.079950 7620 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Mar 18 08:48:56.091714 master-0 kubenswrapper[7620]: W0318 08:48:56.079954 7620 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Mar 18 08:48:56.091714 master-0 kubenswrapper[7620]: W0318 08:48:56.079958 7620 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Mar 18 08:48:56.092980 master-0 kubenswrapper[7620]: W0318 08:48:56.079961 7620 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Mar 18 08:48:56.092980 master-0 kubenswrapper[7620]: I0318 08:48:56.079976 7620 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:true StreamingCollectionEncodingToProtobuf:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Mar 18 08:48:56.092980 master-0 kubenswrapper[7620]: I0318 08:48:56.087997 7620 server.go:491] "Kubelet version" kubeletVersion="v1.31.14"
Mar 18 08:48:56.092980 master-0 kubenswrapper[7620]: I0318 08:48:56.088046 7620 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Mar 18 08:48:56.092980 master-0 kubenswrapper[7620]: W0318 08:48:56.088146 7620 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Mar 18 08:48:56.092980 master-0 kubenswrapper[7620]: W0318 08:48:56.088154 7620 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Mar 18 08:48:56.092980 master-0 kubenswrapper[7620]: W0318 08:48:56.088161 7620 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Mar 18 08:48:56.092980 master-0 kubenswrapper[7620]: W0318 08:48:56.088168 7620 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Mar 18 08:48:56.092980 master-0 kubenswrapper[7620]: W0318 08:48:56.088184 7620 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Mar 18 08:48:56.092980 master-0 kubenswrapper[7620]: W0318 08:48:56.088191 7620 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Mar 18 08:48:56.092980 master-0 kubenswrapper[7620]: W0318 08:48:56.088197 7620 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Mar 18 08:48:56.092980 master-0 kubenswrapper[7620]: W0318 08:48:56.088203 7620 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Mar 18 08:48:56.092980 master-0 kubenswrapper[7620]: W0318 08:48:56.088209 7620 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Mar 18 08:48:56.092980 master-0 kubenswrapper[7620]: W0318 08:48:56.088218 7620 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Mar 18 08:48:56.092980 master-0 kubenswrapper[7620]: W0318 08:48:56.088224 7620 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Mar 18 08:48:56.093805 master-0 kubenswrapper[7620]: W0318 08:48:56.088229 7620 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Mar 18 08:48:56.093805 master-0 kubenswrapper[7620]: W0318 08:48:56.088234 7620 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Mar 18 08:48:56.093805 master-0 kubenswrapper[7620]: W0318 08:48:56.088240 7620 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Mar 18 08:48:56.093805 master-0 kubenswrapper[7620]: W0318 08:48:56.088246 7620 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Mar 18 08:48:56.093805 master-0 kubenswrapper[7620]: W0318 08:48:56.088251 7620 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Mar 18 08:48:56.093805 master-0 kubenswrapper[7620]: W0318 08:48:56.088256 7620 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Mar 18 08:48:56.093805 master-0 kubenswrapper[7620]: W0318 08:48:56.088260 7620 feature_gate.go:330] unrecognized feature gate: OVNObservability
Mar 18 08:48:56.093805 master-0 kubenswrapper[7620]: W0318 08:48:56.088264 7620 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Mar 18 08:48:56.093805 master-0 kubenswrapper[7620]: W0318 08:48:56.088270 7620 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Mar 18 08:48:56.093805 master-0 kubenswrapper[7620]: W0318 08:48:56.088275 7620 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Mar 18 08:48:56.093805 master-0 kubenswrapper[7620]: W0318 08:48:56.088280 7620 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Mar 18 08:48:56.093805 master-0 kubenswrapper[7620]: W0318 08:48:56.088284 7620 feature_gate.go:330] unrecognized feature gate: NewOLM
Mar 18 08:48:56.093805 master-0 kubenswrapper[7620]: W0318 08:48:56.088288 7620 feature_gate.go:330] unrecognized feature gate: SignatureStores
Mar 18 08:48:56.093805 master-0 kubenswrapper[7620]: W0318 08:48:56.088292 7620 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Mar 18 08:48:56.093805 master-0 kubenswrapper[7620]: W0318 08:48:56.088296 7620 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Mar 18 08:48:56.093805 master-0 kubenswrapper[7620]: W0318 08:48:56.088303 7620 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Mar 18 08:48:56.093805 master-0 kubenswrapper[7620]: W0318 08:48:56.088314 7620 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Mar 18 08:48:56.093805 master-0 kubenswrapper[7620]: W0318 08:48:56.088321 7620 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Mar 18 08:48:56.093805 master-0 kubenswrapper[7620]: W0318 08:48:56.088326 7620 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Mar 18 08:48:56.093805 master-0 kubenswrapper[7620]: W0318 08:48:56.088330 7620 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Mar 18 08:48:56.096633 master-0 kubenswrapper[7620]: W0318 08:48:56.088335 7620 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Mar 18 08:48:56.096633 master-0 kubenswrapper[7620]: W0318 08:48:56.088341 7620 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Mar 18 08:48:56.096633 master-0 kubenswrapper[7620]: W0318 08:48:56.088346 7620 feature_gate.go:330] unrecognized feature gate: Example
Mar 18 08:48:56.096633 master-0 kubenswrapper[7620]: W0318 08:48:56.088353 7620 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Mar 18 08:48:56.096633 master-0 kubenswrapper[7620]: W0318 08:48:56.088358 7620 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Mar 18 08:48:56.096633 master-0 kubenswrapper[7620]: W0318 08:48:56.088363 7620 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Mar 18 08:48:56.096633 master-0 kubenswrapper[7620]: W0318 08:48:56.088368 7620 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Mar 18 08:48:56.096633 master-0 kubenswrapper[7620]: W0318 08:48:56.088372 7620 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Mar 18 08:48:56.096633 master-0 kubenswrapper[7620]: W0318 08:48:56.088377 7620 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Mar 18 08:48:56.096633 master-0 kubenswrapper[7620]: W0318 08:48:56.088382 7620 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Mar 18 08:48:56.096633 master-0 kubenswrapper[7620]: W0318 08:48:56.088387 7620 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Mar 18 08:48:56.096633 master-0 kubenswrapper[7620]: W0318 08:48:56.088392 7620 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Mar 18 08:48:56.096633 master-0 kubenswrapper[7620]: W0318 08:48:56.088397 7620 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Mar 18 08:48:56.096633 master-0 kubenswrapper[7620]: W0318 08:48:56.088402 7620 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Mar 18 08:48:56.096633 master-0 kubenswrapper[7620]: W0318 08:48:56.088406 7620 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Mar 18 08:48:56.096633 master-0 kubenswrapper[7620]: W0318 08:48:56.088411 7620 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Mar 18 08:48:56.096633 master-0 kubenswrapper[7620]: W0318 08:48:56.088417 7620 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Mar 18 08:48:56.096633 master-0 kubenswrapper[7620]: W0318 08:48:56.088421 7620 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Mar 18 08:48:56.096633 master-0 kubenswrapper[7620]: W0318 08:48:56.088426 7620 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Mar 18 08:48:56.096633 master-0 kubenswrapper[7620]: W0318 08:48:56.088431 7620 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Mar 18 08:48:56.099371 master-0 kubenswrapper[7620]: W0318 08:48:56.088435 7620 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Mar 18 08:48:56.099371 master-0 kubenswrapper[7620]: W0318 08:48:56.088440 7620 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Mar 18 08:48:56.099371 master-0 kubenswrapper[7620]: W0318 08:48:56.088445 7620 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Mar 18 08:48:56.099371 master-0 kubenswrapper[7620]: W0318 08:48:56.088450 7620 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Mar 18 08:48:56.099371 master-0 kubenswrapper[7620]: W0318 08:48:56.088455 7620 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Mar 18 08:48:56.099371 master-0 kubenswrapper[7620]: W0318 08:48:56.088459 7620 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Mar 18 08:48:56.099371 master-0 kubenswrapper[7620]: W0318 08:48:56.088464 7620 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Mar 18 08:48:56.099371 master-0 kubenswrapper[7620]: W0318 08:48:56.088469 7620 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Mar 18 08:48:56.099371 master-0 kubenswrapper[7620]: W0318 08:48:56.088474 7620 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Mar 18 08:48:56.099371 master-0 kubenswrapper[7620]: W0318 08:48:56.088479 7620 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Mar 18 08:48:56.099371 master-0 kubenswrapper[7620]: W0318 08:48:56.088484 7620 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Mar 18 08:48:56.099371 master-0 kubenswrapper[7620]: W0318 08:48:56.088488 7620 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Mar 18 08:48:56.099371 master-0 kubenswrapper[7620]: W0318 08:48:56.088493 7620 feature_gate.go:330] unrecognized feature gate: PinnedImages
Mar 18 08:48:56.099371 master-0 kubenswrapper[7620]: W0318 08:48:56.088500 7620 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Mar 18 08:48:56.099371 master-0 kubenswrapper[7620]: W0318 08:48:56.088505 7620 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Mar 18 08:48:56.099371 master-0 kubenswrapper[7620]: W0318 08:48:56.088511 7620 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Mar 18 08:48:56.099371 master-0 kubenswrapper[7620]: W0318 08:48:56.088516 7620 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Mar 18 08:48:56.099371 master-0 kubenswrapper[7620]: W0318 08:48:56.088521 7620 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Mar 18 08:48:56.099371 master-0 kubenswrapper[7620]: W0318 08:48:56.088526 7620 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Mar 18 08:48:56.099371 master-0 kubenswrapper[7620]: W0318 08:48:56.088531 7620 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Mar 18 08:48:56.100812 master-0 kubenswrapper[7620]: W0318 08:48:56.088536 7620 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Mar 18 08:48:56.100812 master-0 kubenswrapper[7620]: I0318 08:48:56.088544 7620 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:true StreamingCollectionEncodingToProtobuf:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Mar 18 08:48:56.100812 master-0 kubenswrapper[7620]: W0318 08:48:56.088785 7620 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Mar 18 08:48:56.100812 master-0 kubenswrapper[7620]: W0318 08:48:56.088799 7620 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Mar 18 08:48:56.100812 master-0 kubenswrapper[7620]: W0318 08:48:56.088804 7620 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Mar 18 08:48:56.100812 master-0 kubenswrapper[7620]: W0318 08:48:56.088809 7620 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Mar 18 08:48:56.100812 master-0 kubenswrapper[7620]: W0318 08:48:56.088813 7620 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Mar 18 08:48:56.100812 master-0 kubenswrapper[7620]: W0318 08:48:56.088816 7620 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Mar 18 08:48:56.100812 master-0 kubenswrapper[7620]: W0318 08:48:56.088823 7620 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Mar 18 08:48:56.100812 master-0 kubenswrapper[7620]: W0318 08:48:56.088827 7620 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Mar 18 08:48:56.100812 master-0 kubenswrapper[7620]: W0318 08:48:56.088831 7620 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Mar 18 08:48:56.100812 master-0 kubenswrapper[7620]: W0318 08:48:56.088835 7620 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Mar 18 08:48:56.100812 master-0 kubenswrapper[7620]: W0318 08:48:56.088839 7620 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Mar 18 08:48:56.100812 master-0 kubenswrapper[7620]: W0318 08:48:56.088843 7620 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Mar 18 08:48:56.100812 master-0 kubenswrapper[7620]: W0318 08:48:56.088864 7620 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Mar 18 08:48:56.101909 master-0 kubenswrapper[7620]: W0318 08:48:56.088871 7620 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Mar 18 08:48:56.101909 master-0 kubenswrapper[7620]: W0318 08:48:56.088893 7620 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Mar 18 08:48:56.101909 master-0 kubenswrapper[7620]: W0318 08:48:56.088898 7620 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Mar 18 08:48:56.101909 master-0 kubenswrapper[7620]: W0318 08:48:56.088902 7620 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Mar 18 08:48:56.101909 master-0 kubenswrapper[7620]: W0318 08:48:56.088907 7620 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Mar 18 08:48:56.101909 master-0 kubenswrapper[7620]: W0318 08:48:56.088912 7620 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Mar 18 08:48:56.101909 master-0 kubenswrapper[7620]: W0318 08:48:56.088917 7620 feature_gate.go:330] unrecognized feature gate: OVNObservability
Mar 18 08:48:56.101909 master-0 kubenswrapper[7620]: W0318 08:48:56.088921 7620 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Mar 18 08:48:56.101909 master-0 kubenswrapper[7620]: W0318 08:48:56.088925 7620 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Mar 18 08:48:56.101909 master-0 kubenswrapper[7620]: W0318 08:48:56.088930 7620 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Mar 18 08:48:56.101909 master-0 kubenswrapper[7620]: W0318 08:48:56.088933 7620 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Mar 18 08:48:56.101909 master-0 kubenswrapper[7620]: W0318 08:48:56.088937 7620 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Mar 18 08:48:56.101909 master-0 kubenswrapper[7620]: W0318 08:48:56.088941 7620 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Mar 18 08:48:56.101909 master-0 kubenswrapper[7620]: W0318 08:48:56.088945 7620 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Mar 18 08:48:56.101909 master-0 kubenswrapper[7620]: W0318 08:48:56.088949 7620 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Mar 18 08:48:56.101909 master-0 kubenswrapper[7620]: W0318 08:48:56.088953 7620 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Mar 18 08:48:56.101909 master-0 kubenswrapper[7620]: W0318 08:48:56.088957 7620 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Mar 18 08:48:56.101909 master-0 kubenswrapper[7620]: W0318 08:48:56.088961 7620 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Mar 18 08:48:56.101909 master-0 kubenswrapper[7620]: W0318 08:48:56.088965 7620 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Mar 18 08:48:56.102923 master-0 kubenswrapper[7620]: W0318 08:48:56.088970 7620 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Mar 18 08:48:56.102923 master-0 kubenswrapper[7620]: W0318 08:48:56.088975 7620 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Mar 18 08:48:56.102923 master-0 kubenswrapper[7620]: W0318 08:48:56.088980 7620 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Mar 18 08:48:56.102923 master-0 kubenswrapper[7620]: W0318 08:48:56.088984 7620 feature_gate.go:330] unrecognized feature gate: PinnedImages
Mar 18 08:48:56.102923 master-0 kubenswrapper[7620]: W0318 08:48:56.088988 7620 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Mar 18 08:48:56.102923 master-0 kubenswrapper[7620]: W0318 08:48:56.088992 7620 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Mar 18 08:48:56.102923 master-0 kubenswrapper[7620]: W0318 08:48:56.088997 7620 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Mar 18 08:48:56.102923 master-0 kubenswrapper[7620]: W0318 08:48:56.089002 7620 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Mar 18 08:48:56.102923 master-0 kubenswrapper[7620]: W0318 08:48:56.089007 7620 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Mar 18 08:48:56.102923 master-0 kubenswrapper[7620]: W0318 08:48:56.089011 7620 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Mar 18 08:48:56.102923 master-0 kubenswrapper[7620]: W0318 08:48:56.089015 7620 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Mar 18 08:48:56.102923 master-0 kubenswrapper[7620]: W0318 08:48:56.089019 7620 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Mar 18 08:48:56.102923 master-0 kubenswrapper[7620]: W0318 08:48:56.089024 7620 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Mar 18 08:48:56.102923 master-0 kubenswrapper[7620]: W0318 08:48:56.089028 7620 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Mar 18 08:48:56.102923 master-0 kubenswrapper[7620]: W0318 08:48:56.089032 7620 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Mar 18 08:48:56.102923 master-0 kubenswrapper[7620]: W0318 08:48:56.089036 7620 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Mar 18 08:48:56.102923 master-0 kubenswrapper[7620]: W0318 08:48:56.089040 7620 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Mar 18 08:48:56.102923 master-0 kubenswrapper[7620]: W0318 08:48:56.089044 7620 feature_gate.go:330] unrecognized feature gate: SignatureStores
Mar 18 08:48:56.102923 master-0 kubenswrapper[7620]: W0318 08:48:56.089048 7620 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Mar 18 08:48:56.102923 master-0 kubenswrapper[7620]: W0318 08:48:56.089052 7620 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Mar 18 08:48:56.103680 master-0 kubenswrapper[7620]: W0318 08:48:56.089056 7620 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Mar 18 08:48:56.103680 master-0 kubenswrapper[7620]: W0318 08:48:56.089060 7620 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Mar 18 08:48:56.103680 master-0 kubenswrapper[7620]: W0318 08:48:56.089063 7620 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Mar 18 08:48:56.103680 master-0 kubenswrapper[7620]: W0318 08:48:56.089067 7620 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Mar 18 08:48:56.103680 master-0 kubenswrapper[7620]: W0318 08:48:56.089071 7620 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Mar 18 08:48:56.103680 master-0 kubenswrapper[7620]: W0318 08:48:56.089075 7620 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Mar 18 08:48:56.103680 master-0 kubenswrapper[7620]: W0318 08:48:56.089079 7620 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Mar 18 08:48:56.103680 master-0 kubenswrapper[7620]: W0318 08:48:56.089083 7620 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Mar 18 08:48:56.103680 master-0 kubenswrapper[7620]: W0318 08:48:56.089087 7620 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Mar 18 08:48:56.103680 master-0 kubenswrapper[7620]: W0318 08:48:56.089097 7620 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Mar 18 08:48:56.103680 master-0 kubenswrapper[7620]: W0318 08:48:56.089101 7620 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Mar 18 08:48:56.103680 master-0 kubenswrapper[7620]: W0318 08:48:56.089105 7620 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Mar 18 08:48:56.103680 master-0 kubenswrapper[7620]: W0318 08:48:56.089108 7620 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Mar 18 08:48:56.103680 master-0 kubenswrapper[7620]: W0318 08:48:56.089112 7620 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Mar 18 08:48:56.103680 master-0 kubenswrapper[7620]: W0318 08:48:56.089117 7620 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Mar 18 08:48:56.103680 master-0 kubenswrapper[7620]: W0318 08:48:56.089122 7620 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Mar 18 08:48:56.103680 master-0 kubenswrapper[7620]: W0318 08:48:56.089126 7620 feature_gate.go:330] unrecognized feature gate: Example
Mar 18 08:48:56.103680 master-0 kubenswrapper[7620]: W0318 08:48:56.089132 7620 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Mar 18 08:48:56.103680 master-0 kubenswrapper[7620]: W0318 08:48:56.089136 7620 feature_gate.go:330] unrecognized feature gate: NewOLM
Mar 18 08:48:56.103680 master-0 kubenswrapper[7620]: W0318 08:48:56.089140 7620 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Mar 18 08:48:56.105914 master-0 kubenswrapper[7620]: I0318 08:48:56.089146 7620 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:true StreamingCollectionEncodingToProtobuf:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Mar 18 08:48:56.105914 master-0 kubenswrapper[7620]: I0318 08:48:56.089424 7620 server.go:940] "Client rotation is on, will bootstrap in background"
Mar 18 08:48:56.105914 master-0 kubenswrapper[7620]: I0318 08:48:56.091303 7620 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary"
Mar 18 08:48:56.105914 master-0 kubenswrapper[7620]: I0318 08:48:56.091401 7620 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Mar 18 08:48:56.105914 master-0 kubenswrapper[7620]: I0318 08:48:56.091684 7620 server.go:997] "Starting client certificate rotation"
Mar 18 08:48:56.105914 master-0 kubenswrapper[7620]: I0318 08:48:56.091699 7620 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled
Mar 18 08:48:56.105914 master-0 kubenswrapper[7620]: I0318 08:48:56.092437 7620 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-03-19 08:38:09 +0000 UTC, rotation deadline is 2026-03-19 02:54:29.756386488 +0000 UTC
Mar 18 08:48:56.105914 master-0 kubenswrapper[7620]: I0318 08:48:56.092522 7620 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 18h5m33.663867026s for next certificate rotation
Mar 18 08:48:56.105914 master-0 kubenswrapper[7620]: I0318 08:48:56.092954 7620 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Mar 18 08:48:56.105914 master-0 kubenswrapper[7620]: I0318 08:48:56.095258 7620 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Mar 18 08:48:56.105914 master-0 kubenswrapper[7620]: I0318 08:48:56.100909 7620 log.go:25] "Validated CRI v1 runtime API"
Mar 18 08:48:56.105914 master-0 kubenswrapper[7620]: I0318 08:48:56.104165 7620 log.go:25] "Validated CRI v1 image API"
Mar 18 08:48:56.107138 master-0 kubenswrapper[7620]: I0318 08:48:56.106206 7620 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Mar 18 08:48:56.113958 master-0 kubenswrapper[7620]: I0318 08:48:56.113743 7620 fs.go:135] Filesystem UUIDs: map[7B77-95E7:/dev/vda2 910678ff-f77e-4a7d-8d53-86f2ac47a823:/dev/vda4 9d22b218-6091-4693-b191-06a05a0aba6f:/dev/vda3]
Mar
18 08:48:56.114328 master-0 kubenswrapper[7620]: I0318 08:48:56.113810 7620 fs.go:136] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/00b7669c60621e059b9f2a3185ba93db56934e35fa8fa0713c09f3decdea9378/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/00b7669c60621e059b9f2a3185ba93db56934e35fa8fa0713c09f3decdea9378/userdata/shm major:0 minor:128 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/0152b496baa88626f806c2cd8158beac6c11d9696ef03e334ab29bac73c88cbe/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/0152b496baa88626f806c2cd8158beac6c11d9696ef03e334ab29bac73c88cbe/userdata/shm major:0 minor:130 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/0e06ef30b0d712353cac23adca2af0b5ab657ead19ee838202a1a4e15b1021cb/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/0e06ef30b0d712353cac23adca2af0b5ab657ead19ee838202a1a4e15b1021cb/userdata/shm major:0 minor:116 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/15b9cae2d28df4fa59242b209b16efd412d30453ba1d9f0bfc42c07c896efdb2/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/15b9cae2d28df4fa59242b209b16efd412d30453ba1d9f0bfc42c07c896efdb2/userdata/shm major:0 minor:238 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/17e72118bc9a21caf0710ea436fca2a94e237b39c26fb49832cf7ed5fa2efe7d/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/17e72118bc9a21caf0710ea436fca2a94e237b39c26fb49832cf7ed5fa2efe7d/userdata/shm major:0 minor:281 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/23865ef5bfea471643359580ecae55517bf670fdb3b8b05c871c139fe34b55d5/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/23865ef5bfea471643359580ecae55517bf670fdb3b8b05c871c139fe34b55d5/userdata/shm major:0 minor:267 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/263fd4cd6308173314717fc603c0f2464a1db66cd143ea0b303b9d029c2bd481/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/263fd4cd6308173314717fc603c0f2464a1db66cd143ea0b303b9d029c2bd481/userdata/shm major:0 minor:295 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/2e229ef6f57fea8e5406ee6259b2efa0f8a16c288c8a29c71c1e32c057bf84d0/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/2e229ef6f57fea8e5406ee6259b2efa0f8a16c288c8a29c71c1e32c057bf84d0/userdata/shm major:0 minor:254 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/301f04aeb1003f5e8d27049d79ee0b80e5fce89b95da440a253b676b3418f0d1/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/301f04aeb1003f5e8d27049d79ee0b80e5fce89b95da440a253b676b3418f0d1/userdata/shm major:0 minor:246 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/4d17f4a7fe14a2a472c626baa31e2712ee04373a3644e0529ddf244e8afaa854/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/4d17f4a7fe14a2a472c626baa31e2712ee04373a3644e0529ddf244e8afaa854/userdata/shm major:0 minor:50 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/65a818ad31dbd4fa7bc3752867fcfb68d605bd15a5390e756d551630b2da7bfb/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/65a818ad31dbd4fa7bc3752867fcfb68d605bd15a5390e756d551630b2da7bfb/userdata/shm major:0 minor:54 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/7b07e88ac1eb70e2f8e0c7ac6bf4cc612d670ddad2d854d52139054ca73dfb7c/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/7b07e88ac1eb70e2f8e0c7ac6bf4cc612d670ddad2d854d52139054ca73dfb7c/userdata/shm major:0 minor:269 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/837527d2f9f7319ea14fc20367ef17853e00cc20e938fc1184f891aa57296deb/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/837527d2f9f7319ea14fc20367ef17853e00cc20e938fc1184f891aa57296deb/userdata/shm major:0 minor:249 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/a058ca3e613163c208806f2f85e86778b10da29eadc77daac9aef1471afdc643/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/a058ca3e613163c208806f2f85e86778b10da29eadc77daac9aef1471afdc643/userdata/shm major:0 minor:279 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/b42865dcd2dae3a2390972bbf267cd467643023a4c8d222016e0b44a61943afc/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/b42865dcd2dae3a2390972bbf267cd467643023a4c8d222016e0b44a61943afc/userdata/shm major:0 minor:248 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/b48235a991ddd5e0dbc46936f4240a715253ffe775f0aa19da8ca60c7a3f2ca0/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/b48235a991ddd5e0dbc46936f4240a715253ffe775f0aa19da8ca60c7a3f2ca0/userdata/shm major:0 minor:142 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/bd1fd64f6f95cdc3189bd097dac24d4300572f6ab92c972496e95007ac8e621a/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/bd1fd64f6f95cdc3189bd097dac24d4300572f6ab92c972496e95007ac8e621a/userdata/shm major:0 minor:42 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/c10d1b81b0a7054da8fb12459aa720b7916f5484be5a832bdacdc31fad36d2cc/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/c10d1b81b0a7054da8fb12459aa720b7916f5484be5a832bdacdc31fad36d2cc/userdata/shm major:0 minor:41 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/c1e8680fcd730f22fac4464d7e2e919f0d68259c2072f7e2c075736c7c9f888d/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/c1e8680fcd730f22fac4464d7e2e919f0d68259c2072f7e2c075736c7c9f888d/userdata/shm major:0 minor:105 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/d1bca7add53921531b3272a47166466f7d2ed78f903322c5f6c45062071f9671/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/d1bca7add53921531b3272a47166466f7d2ed78f903322c5f6c45062071f9671/userdata/shm major:0 minor:109 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/dda73eca8049d85d927941d52bde4240cdb56ba2b8f10407c2247ac72190f9f1/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/dda73eca8049d85d927941d52bde4240cdb56ba2b8f10407c2247ac72190f9f1/userdata/shm major:0 minor:58 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/dea41e38002f15edc5a2abae54e8fefc1a70d4002c8cd87d39c7bc11a4255185/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/dea41e38002f15edc5a2abae54e8fefc1a70d4002c8cd87d39c7bc11a4255185/userdata/shm major:0 minor:243 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/f1fbd15a6f55efb9df34e794516a926fbd6cd9758a5312e86f1eb743de9e13b5/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/f1fbd15a6f55efb9df34e794516a926fbd6cd9758a5312e86f1eb743de9e13b5/userdata/shm major:0 minor:260 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/fef2da050284c5b28c67d998136cd7aca2118deb05e66bc5e9cea3da325d47dc/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/fef2da050284c5b28c67d998136cd7aca2118deb05e66bc5e9cea3da325d47dc/userdata/shm major:0 minor:258 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/07a4fd92-0fd1-4688-b2db-de615d75971e/volumes/kubernetes.io~projected/kube-api-access-5ngk7:{mountpoint:/var/lib/kubelet/pods/07a4fd92-0fd1-4688-b2db-de615d75971e/volumes/kubernetes.io~projected/kube-api-access-5ngk7 major:0 minor:103 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/07a4fd92-0fd1-4688-b2db-de615d75971e/volumes/kubernetes.io~secret/metrics-tls:{mountpoint:/var/lib/kubelet/pods/07a4fd92-0fd1-4688-b2db-de615d75971e/volumes/kubernetes.io~secret/metrics-tls major:0 minor:98 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/159a26f5-3cfc-4db2-88e9-bff5d8a613fc/volumes/kubernetes.io~projected/kube-api-access-9hxtz:{mountpoint:/var/lib/kubelet/pods/159a26f5-3cfc-4db2-88e9-bff5d8a613fc/volumes/kubernetes.io~projected/kube-api-access-9hxtz major:0 minor:263 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/16d633c5-e0aa-4fb6-83e0-a2e976334406/volumes/kubernetes.io~projected/kube-api-access-x9w7l:{mountpoint:/var/lib/kubelet/pods/16d633c5-e0aa-4fb6-83e0-a2e976334406/volumes/kubernetes.io~projected/kube-api-access-x9w7l major:0 minor:137 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/16d633c5-e0aa-4fb6-83e0-a2e976334406/volumes/kubernetes.io~secret/webhook-cert:{mountpoint:/var/lib/kubelet/pods/16d633c5-e0aa-4fb6-83e0-a2e976334406/volumes/kubernetes.io~secret/webhook-cert major:0 minor:136 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/2207df9e-f21e-4c30-98d5-248ae99c245e/volume-subpaths/run-systemd/ovnkube-controller/6:{mountpoint:/var/lib/kubelet/pods/2207df9e-f21e-4c30-98d5-248ae99c245e/volume-subpaths/run-systemd/ovnkube-controller/6 major:0 minor:24 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/2207df9e-f21e-4c30-98d5-248ae99c245e/volumes/kubernetes.io~projected/kube-api-access-cj9fr:{mountpoint:/var/lib/kubelet/pods/2207df9e-f21e-4c30-98d5-248ae99c245e/volumes/kubernetes.io~projected/kube-api-access-cj9fr major:0 minor:127 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/2207df9e-f21e-4c30-98d5-248ae99c245e/volumes/kubernetes.io~secret/ovn-node-metrics-cert:{mountpoint:/var/lib/kubelet/pods/2207df9e-f21e-4c30-98d5-248ae99c245e/volumes/kubernetes.io~secret/ovn-node-metrics-cert major:0 minor:126 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/260c8aa5-a288-4ee8-b671-f97e90a2f39c/volumes/kubernetes.io~projected/kube-api-access:{mountpoint:/var/lib/kubelet/pods/260c8aa5-a288-4ee8-b671-f97e90a2f39c/volumes/kubernetes.io~projected/kube-api-access major:0 minor:236 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/260c8aa5-a288-4ee8-b671-f97e90a2f39c/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/260c8aa5-a288-4ee8-b671-f97e90a2f39c/volumes/kubernetes.io~secret/serving-cert major:0 minor:213 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/34f2ec5e-cb68-415b-a9f2-5b7f10fa9bbe/volumes/kubernetes.io~projected/kube-api-access-2msp8:{mountpoint:/var/lib/kubelet/pods/34f2ec5e-cb68-415b-a9f2-5b7f10fa9bbe/volumes/kubernetes.io~projected/kube-api-access-2msp8 major:0 minor:253 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/3d0b7f60-c32e-48a6-b9e9-87c8f018367d/volumes/kubernetes.io~projected/kube-api-access:{mountpoint:/var/lib/kubelet/pods/3d0b7f60-c32e-48a6-b9e9-87c8f018367d/volumes/kubernetes.io~projected/kube-api-access major:0 minor:104 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/3d9fe248-ba87-47e3-911a-1b2b112b5683/volumes/kubernetes.io~projected/kube-api-access-4hn9w:{mountpoint:/var/lib/kubelet/pods/3d9fe248-ba87-47e3-911a-1b2b112b5683/volumes/kubernetes.io~projected/kube-api-access-4hn9w major:0 minor:245 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/573d3a02-e395-4816-963a-cd614ef53f75/volumes/kubernetes.io~projected/kube-api-access-n959l:{mountpoint:/var/lib/kubelet/pods/573d3a02-e395-4816-963a-cd614ef53f75/volumes/kubernetes.io~projected/kube-api-access-n959l major:0 minor:233 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/573d3a02-e395-4816-963a-cd614ef53f75/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/573d3a02-e395-4816-963a-cd614ef53f75/volumes/kubernetes.io~secret/serving-cert major:0 minor:214 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/5982111d-f4c6-4335-9b40-3142758fc2bc/volumes/kubernetes.io~projected/kube-api-access:{mountpoint:/var/lib/kubelet/pods/5982111d-f4c6-4335-9b40-3142758fc2bc/volumes/kubernetes.io~projected/kube-api-access major:0 minor:252 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/5982111d-f4c6-4335-9b40-3142758fc2bc/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/5982111d-f4c6-4335-9b40-3142758fc2bc/volumes/kubernetes.io~secret/serving-cert major:0 minor:240 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/59d50dd5-6793-4f96-a769-31e086ecc7e4/volumes/kubernetes.io~projected/kube-api-access-mlp7w:{mountpoint:/var/lib/kubelet/pods/59d50dd5-6793-4f96-a769-31e086ecc7e4/volumes/kubernetes.io~projected/kube-api-access-mlp7w major:0 minor:227 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/6fb1f871-9c24-48a1-a15a-a636b5bb687d/volumes/kubernetes.io~projected/kube-api-access-wxxcn:{mountpoint:/var/lib/kubelet/pods/6fb1f871-9c24-48a1-a15a-a636b5bb687d/volumes/kubernetes.io~projected/kube-api-access-wxxcn major:0 minor:224 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/772bc250-2e57-4ce0-883c-d44281fcb0be/volumes/kubernetes.io~projected/kube-api-access-dfjmx:{mountpoint:/var/lib/kubelet/pods/772bc250-2e57-4ce0-883c-d44281fcb0be/volumes/kubernetes.io~projected/kube-api-access-dfjmx major:0 minor:230 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/772bc250-2e57-4ce0-883c-d44281fcb0be/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/772bc250-2e57-4ce0-883c-d44281fcb0be/volumes/kubernetes.io~secret/serving-cert major:0 minor:216 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/7962fb40-1170-4c00-b1bf-92966aeae807/volumes/kubernetes.io~projected/bound-sa-token:{mountpoint:/var/lib/kubelet/pods/7962fb40-1170-4c00-b1bf-92966aeae807/volumes/kubernetes.io~projected/bound-sa-token major:0 minor:232 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/7962fb40-1170-4c00-b1bf-92966aeae807/volumes/kubernetes.io~projected/kube-api-access-47p9x:{mountpoint:/var/lib/kubelet/pods/7962fb40-1170-4c00-b1bf-92966aeae807/volumes/kubernetes.io~projected/kube-api-access-47p9x major:0 minor:234 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/866c259c-7661-4a80-873b-6fd625218665/volumes/kubernetes.io~projected/kube-api-access-ftdvp:{mountpoint:/var/lib/kubelet/pods/866c259c-7661-4a80-873b-6fd625218665/volumes/kubernetes.io~projected/kube-api-access-ftdvp major:0 minor:266 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/8a6ab2be-d018-4fd5-bfbb-6b88aec28663/volumes/kubernetes.io~projected/kube-api-access:{mountpoint:/var/lib/kubelet/pods/8a6ab2be-d018-4fd5-bfbb-6b88aec28663/volumes/kubernetes.io~projected/kube-api-access major:0 minor:229 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/8a6ab2be-d018-4fd5-bfbb-6b88aec28663/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/8a6ab2be-d018-4fd5-bfbb-6b88aec28663/volumes/kubernetes.io~secret/serving-cert major:0 minor:219 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/939efa41-8f40-4f91-bee4-0425aead9760/volumes/kubernetes.io~projected/kube-api-access-8w58l:{mountpoint:/var/lib/kubelet/pods/939efa41-8f40-4f91-bee4-0425aead9760/volumes/kubernetes.io~projected/kube-api-access-8w58l major:0 minor:231 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/939efa41-8f40-4f91-bee4-0425aead9760/volumes/kubernetes.io~secret/etcd-client:{mountpoint:/var/lib/kubelet/pods/939efa41-8f40-4f91-bee4-0425aead9760/volumes/kubernetes.io~secret/etcd-client major:0 minor:220 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/939efa41-8f40-4f91-bee4-0425aead9760/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/939efa41-8f40-4f91-bee4-0425aead9760/volumes/kubernetes.io~secret/serving-cert major:0 minor:218 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/94e9e4fc-bfaa-4ba5-ad6d-fe76b91932e9/volumes/kubernetes.io~projected/bound-sa-token:{mountpoint:/var/lib/kubelet/pods/94e9e4fc-bfaa-4ba5-ad6d-fe76b91932e9/volumes/kubernetes.io~projected/bound-sa-token major:0 minor:237 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/94e9e4fc-bfaa-4ba5-ad6d-fe76b91932e9/volumes/kubernetes.io~projected/kube-api-access-tk9jq:{mountpoint:/var/lib/kubelet/pods/94e9e4fc-bfaa-4ba5-ad6d-fe76b91932e9/volumes/kubernetes.io~projected/kube-api-access-tk9jq major:0 minor:221 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/b0280499-8277-46f0-bd8c-058a47a99e19/volumes/kubernetes.io~projected/kube-api-access-dxvk7:{mountpoint:/var/lib/kubelet/pods/b0280499-8277-46f0-bd8c-058a47a99e19/volumes/kubernetes.io~projected/kube-api-access-dxvk7 major:0 minor:262 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/b0280499-8277-46f0-bd8c-058a47a99e19/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/b0280499-8277-46f0-bd8c-058a47a99e19/volumes/kubernetes.io~secret/serving-cert major:0 minor:241 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/b065df33-7911-456e-b3a2-1f8c8d53e053/volumes/kubernetes.io~projected/kube-api-access-pz26d:{mountpoint:/var/lib/kubelet/pods/b065df33-7911-456e-b3a2-1f8c8d53e053/volumes/kubernetes.io~projected/kube-api-access-pz26d major:0 minor:228 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/bd8ee9ae-4765-46bc-84d7-b5857fc3fb4a/volumes/kubernetes.io~projected/kube-api-access-8lsw9:{mountpoint:/var/lib/kubelet/pods/bd8ee9ae-4765-46bc-84d7-b5857fc3fb4a/volumes/kubernetes.io~projected/kube-api-access-8lsw9 major:0 minor:225 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/c110b293-2c6b-496b-b015-23aada98cb4b/volumes/kubernetes.io~projected/kube-api-access-lw27k:{mountpoint:/var/lib/kubelet/pods/c110b293-2c6b-496b-b015-23aada98cb4b/volumes/kubernetes.io~projected/kube-api-access-lw27k major:0 minor:256 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/c110b293-2c6b-496b-b015-23aada98cb4b/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/c110b293-2c6b-496b-b015-23aada98cb4b/volumes/kubernetes.io~secret/serving-cert major:0 minor:242 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/d858bfd6-e69b-4c93-a6d7-95cc0fc3ca29/volumes/kubernetes.io~projected/kube-api-access-x6zq8:{mountpoint:/var/lib/kubelet/pods/d858bfd6-e69b-4c93-a6d7-95cc0fc3ca29/volumes/kubernetes.io~projected/kube-api-access-x6zq8 major:0 minor:120 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/e025d334-20e7-491f-8027-194251398747/volumes/kubernetes.io~projected/kube-api-access-bfzdk:{mountpoint:/var/lib/kubelet/pods/e025d334-20e7-491f-8027-194251398747/volumes/kubernetes.io~projected/kube-api-access-bfzdk major:0 minor:226 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/e2ade7e6-cecd-4e98-8f85-ea8219303d75/volumes/kubernetes.io~projected/kube-api-access-vfjgn:{mountpoint:/var/lib/kubelet/pods/e2ade7e6-cecd-4e98-8f85-ea8219303d75/volumes/kubernetes.io~projected/kube-api-access-vfjgn major:0 minor:222 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/e2ade7e6-cecd-4e98-8f85-ea8219303d75/volumes/kubernetes.io~secret/cluster-olm-operator-serving-cert:{mountpoint:/var/lib/kubelet/pods/e2ade7e6-cecd-4e98-8f85-ea8219303d75/volumes/kubernetes.io~secret/cluster-olm-operator-serving-cert major:0 minor:217 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/e7b72267-fc08-41ed-a92b-9fca7372aba6/volumes/kubernetes.io~projected/kube-api-access-dwrdc:{mountpoint:/var/lib/kubelet/pods/e7b72267-fc08-41ed-a92b-9fca7372aba6/volumes/kubernetes.io~projected/kube-api-access-dwrdc major:0 minor:255 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/ec11012b-536a-422f-afc4-d2d0fd4b67fb/volumes/kubernetes.io~projected/kube-api-access-svdhs:{mountpoint:/var/lib/kubelet/pods/ec11012b-536a-422f-afc4-d2d0fd4b67fb/volumes/kubernetes.io~projected/kube-api-access-svdhs major:0 minor:235 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/ec11012b-536a-422f-afc4-d2d0fd4b67fb/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/ec11012b-536a-422f-afc4-d2d0fd4b67fb/volumes/kubernetes.io~secret/serving-cert major:0 minor:209 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/edc7f629-4288-443b-aa8e-78bc6a09c848/volumes/kubernetes.io~projected/kube-api-access-glt6c:{mountpoint:/var/lib/kubelet/pods/edc7f629-4288-443b-aa8e-78bc6a09c848/volumes/kubernetes.io~projected/kube-api-access-glt6c major:0 minor:125 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/edc7f629-4288-443b-aa8e-78bc6a09c848/volumes/kubernetes.io~secret/ovn-control-plane-metrics-cert:{mountpoint:/var/lib/kubelet/pods/edc7f629-4288-443b-aa8e-78bc6a09c848/volumes/kubernetes.io~secret/ovn-control-plane-metrics-cert major:0 minor:124 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/f9fa104a-4979-4023-8d7e-a965f11bc7db/volumes/kubernetes.io~projected/kube-api-access-jlwg9:{mountpoint:/var/lib/kubelet/pods/f9fa104a-4979-4023-8d7e-a965f11bc7db/volumes/kubernetes.io~projected/kube-api-access-jlwg9 major:0 minor:115 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/fcf89a76-7a94-46d3-853e-68e986563764/volumes/kubernetes.io~projected/kube-api-access-s8prf:{mountpoint:/var/lib/kubelet/pods/fcf89a76-7a94-46d3-853e-68e986563764/volumes/kubernetes.io~projected/kube-api-access-s8prf major:0 minor:223 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/fcf89a76-7a94-46d3-853e-68e986563764/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/fcf89a76-7a94-46d3-853e-68e986563764/volumes/kubernetes.io~secret/serving-cert major:0 minor:215 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4/volumes/kubernetes.io~projected/kube-api-access-hpl2c:{mountpoint:/var/lib/kubelet/pods/fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4/volumes/kubernetes.io~projected/kube-api-access-hpl2c major:0 minor:102 fsType:tmpfs blockSize:0} overlay_0-107:{mountpoint:/var/lib/containers/storage/overlay/51511e2c4ec6c149fafad6b6fdf93f73f3c58315ba5efd70699693456e4413d3/merged major:0 minor:107 fsType:overlay blockSize:0} overlay_0-111:{mountpoint:/var/lib/containers/storage/overlay/0151f629f33c98f0a9d7a41bb76936694c3784833de895e045717f4d9575bcbe/merged major:0 minor:111 fsType:overlay blockSize:0} overlay_0-113:{mountpoint:/var/lib/containers/storage/overlay/0d98d45ab6c4bed90053f2dc65090b08315a92bdd89979de2313ada39f55ac7a/merged major:0 minor:113 fsType:overlay blockSize:0} overlay_0-118:{mountpoint:/var/lib/containers/storage/overlay/d3ad0a186c7f05b163738a7eb9dc7c09a90cd547d038fa0b70034e1cd2072517/merged major:0 minor:118 fsType:overlay blockSize:0} overlay_0-131:{mountpoint:/var/lib/containers/storage/overlay/51dccf71c3a5959a7d3a9538de0b44cee3f9ffc7d40e7273b44498fd8635150c/merged major:0 minor:131 fsType:overlay blockSize:0} overlay_0-134:{mountpoint:/var/lib/containers/storage/overlay/c25848b6c5cb7d3ce35d01a03a5e51d1d2f15b5d996ef20673c6372ff6044e30/merged major:0 minor:134 fsType:overlay blockSize:0} overlay_0-138:{mountpoint:/var/lib/containers/storage/overlay/84fc2447ba494ba661a1b8d790db3f8c92dd408051f3d9197dd9d4b23279567e/merged major:0 minor:138 fsType:overlay blockSize:0} overlay_0-140:{mountpoint:/var/lib/containers/storage/overlay/06f94584812daf1238daca4aa49fcdb97f07104c9857082398e682a7e4cf2852/merged major:0 minor:140 fsType:overlay blockSize:0} 
overlay_0-147:{mountpoint:/var/lib/containers/storage/overlay/5d56da972d85bd0fdae711451e4093b7fb4ea8e9a5d5991d0a8e9c0b7661260d/merged major:0 minor:147 fsType:overlay blockSize:0} overlay_0-149:{mountpoint:/var/lib/containers/storage/overlay/1ccc57dcf734702732eee7f984985aeb4b41d07af9908836e7bd004973b11cb8/merged major:0 minor:149 fsType:overlay blockSize:0} overlay_0-154:{mountpoint:/var/lib/containers/storage/overlay/8effcc5a0ecc570edf5dd7aff2fd992678f59c6af1ac6aca095aafa17059c235/merged major:0 minor:154 fsType:overlay blockSize:0} overlay_0-157:{mountpoint:/var/lib/containers/storage/overlay/c9fc264dea0ddc4568964b1e03b13cf4e5d3df2c83c9609430cb661e4f5193fd/merged major:0 minor:157 fsType:overlay blockSize:0} overlay_0-159:{mountpoint:/var/lib/containers/storage/overlay/7c5901030f143593f23dbe0efbd940aaf1bc4f314264c168185355f94f532dfc/merged major:0 minor:159 fsType:overlay blockSize:0} overlay_0-163:{mountpoint:/var/lib/containers/storage/overlay/94743b1d380fec96a3bddaecaec99ad854a67e9bab5347b7a6c6cb88395f744f/merged major:0 minor:163 fsType:overlay blockSize:0} overlay_0-171:{mountpoint:/var/lib/containers/storage/overlay/0f125902d2a837b357d6626dd8e0ee59e98115855c2bb6652ba34f5f9bb20bfa/merged major:0 minor:171 fsType:overlay blockSize:0} overlay_0-179:{mountpoint:/var/lib/containers/storage/overlay/ad856f305e0ffc19e4b561dc6e8d714ce91d5fbd1346a74b5182e66e1d29cc4e/merged major:0 minor:179 fsType:overlay blockSize:0} overlay_0-184:{mountpoint:/var/lib/containers/storage/overlay/155dae4b1e369ccde04cb6a3e67c97deacb81dd588369d60dd0ac710c6c016f4/merged major:0 minor:184 fsType:overlay blockSize:0} overlay_0-189:{mountpoint:/var/lib/containers/storage/overlay/e9db5fc28ca52751496c19a6eb6ef9e8b659d240e872e129839abf9d752756f5/merged major:0 minor:189 fsType:overlay blockSize:0} overlay_0-194:{mountpoint:/var/lib/containers/storage/overlay/a66dee84642dd839a1d836d4418b0a45e590b1f167539c52e7f009b0dcd35aa5/merged major:0 minor:194 fsType:overlay blockSize:0} 
overlay_0-195:{mountpoint:/var/lib/containers/storage/overlay/c75f0faa325606ff2195997ef58781cc8d9ee1077ab76d4d27b9cc747ac3b260/merged major:0 minor:195 fsType:overlay blockSize:0} overlay_0-204:{mountpoint:/var/lib/containers/storage/overlay/196f9c252a14daac552f0688cea6e8de155f5725fa7dc417dff39a19e624a546/merged major:0 minor:204 fsType:overlay blockSize:0} overlay_0-264:{mountpoint:/var/lib/containers/storage/overlay/371a7eddc835a3c39fdf2654c6be820f5a9b8b189e88321527103397fe6a5fab/merged major:0 minor:264 fsType:overlay blockSize:0} overlay_0-271:{mountpoint:/var/lib/containers/storage/overlay/3e28e0cde913974aed071217113e56834df48e7872e42090eb288fc3f1bb09fb/merged major:0 minor:271 fsType:overlay blockSize:0} overlay_0-273:{mountpoint:/var/lib/containers/storage/overlay/ad30cac5a6bdb8411a1e3c5045136d6883bbdeeca510307fbb55b70e4d808f27/merged major:0 minor:273 fsType:overlay blockSize:0} overlay_0-275:{mountpoint:/var/lib/containers/storage/overlay/7a5ed6a4cd0a18a67a910833696f7ab3b1a705e0a09d66172ba94e3504544b05/merged major:0 minor:275 fsType:overlay blockSize:0} overlay_0-277:{mountpoint:/var/lib/containers/storage/overlay/27e4a3a239e25d6d8a1357f2335023a4fc238592ce32177f3b24e37145145ef3/merged major:0 minor:277 fsType:overlay blockSize:0} overlay_0-283:{mountpoint:/var/lib/containers/storage/overlay/2a4dff9bcd9592e74f08f37d55bdd055e8109852ce6a0ab247e730c3169660f5/merged major:0 minor:283 fsType:overlay blockSize:0} overlay_0-285:{mountpoint:/var/lib/containers/storage/overlay/bb8b29fcfc8b61164baa54562363f2548396770d4622d6a9498c700c57ca4129/merged major:0 minor:285 fsType:overlay blockSize:0} overlay_0-287:{mountpoint:/var/lib/containers/storage/overlay/cce34a6c18fe1e0ed19d292c51aa78a39dfac91108ea98739fd02042770383f0/merged major:0 minor:287 fsType:overlay blockSize:0} overlay_0-289:{mountpoint:/var/lib/containers/storage/overlay/af71e9b80b157f9a2b206e3a2f29d4053bf311cffc5790a280042bf049041374/merged major:0 minor:289 fsType:overlay blockSize:0} 
overlay_0-291:{mountpoint:/var/lib/containers/storage/overlay/e6f4de0fb909434a273de4322398741c7f0779eef9066140b4f07742eff10976/merged major:0 minor:291 fsType:overlay blockSize:0} overlay_0-293:{mountpoint:/var/lib/containers/storage/overlay/abecdcd1e05363f2a3fe4f73b1e8e46d5e02439c4cd86626fafe7ce3afa96797/merged major:0 minor:293 fsType:overlay blockSize:0} overlay_0-297:{mountpoint:/var/lib/containers/storage/overlay/fd1718655fea47539b218e27cf58a997566a62d1035078d878b75d54e9fc45a6/merged major:0 minor:297 fsType:overlay blockSize:0} overlay_0-299:{mountpoint:/var/lib/containers/storage/overlay/c171dd71fa8497a454ab6e4967d9d06c2476e449be4d4333b244ddc9fc9f8c44/merged major:0 minor:299 fsType:overlay blockSize:0} overlay_0-301:{mountpoint:/var/lib/containers/storage/overlay/153371f9dbef15221c909ebf61296d40c94b927eb169f643a2337cec895aaa7b/merged major:0 minor:301 fsType:overlay blockSize:0} overlay_0-43:{mountpoint:/var/lib/containers/storage/overlay/9ee90cdc8d23ee4f65522455c1bd3a01c50cb1333db2566797b589e95624ddd0/merged major:0 minor:43 fsType:overlay blockSize:0} overlay_0-46:{mountpoint:/var/lib/containers/storage/overlay/cd133f9c449c766b3bb6200179486ae5158558787c9635afe5d05829b8a76783/merged major:0 minor:46 fsType:overlay blockSize:0} overlay_0-48:{mountpoint:/var/lib/containers/storage/overlay/97fd12285f9648dbf754449fb4f5752b32745522b3f269b1f75750fac50e6954/merged major:0 minor:48 fsType:overlay blockSize:0} overlay_0-52:{mountpoint:/var/lib/containers/storage/overlay/f9ceb86f18b20b7310df00f301a7240c3b34f1e2e3f7be5854824cb741c5d96a/merged major:0 minor:52 fsType:overlay blockSize:0} overlay_0-56:{mountpoint:/var/lib/containers/storage/overlay/02030c2324b999e08ce5066d2b6ac7623d7b666f22b63c9a4d4e0bb71cfb9b65/merged major:0 minor:56 fsType:overlay blockSize:0} overlay_0-60:{mountpoint:/var/lib/containers/storage/overlay/7f775267ee686a21a47ddb2503bf3d530d77890c19ba8e48e9b1a8e6fbfd5ef2/merged major:0 minor:60 fsType:overlay blockSize:0} 
overlay_0-62:{mountpoint:/var/lib/containers/storage/overlay/2d63a71d2e5f12cd95e95e2a3aa0d37ce555fcb034bbd559f5d0e638efc7fdd0/merged major:0 minor:62 fsType:overlay blockSize:0} overlay_0-64:{mountpoint:/var/lib/containers/storage/overlay/14e4873bb1b96d33177aa84eb388302ae3208d525926a2432055177d88430504/merged major:0 minor:64 fsType:overlay blockSize:0} overlay_0-68:{mountpoint:/var/lib/containers/storage/overlay/0a5d57a73839589243630310df6f2330198e1a63c73c832c6c98b731e4f8a3f5/merged major:0 minor:68 fsType:overlay blockSize:0} overlay_0-71:{mountpoint:/var/lib/containers/storage/overlay/11c05d1bf7e14cb3fa0e076810c1d8cb22ad46acf72da394f632854514521d94/merged major:0 minor:71 fsType:overlay blockSize:0} overlay_0-73:{mountpoint:/var/lib/containers/storage/overlay/ee7798df358dbd6c3744a4e539ad9ee6fd2583390029e4434e9cdb6bb6ddb392/merged major:0 minor:73 fsType:overlay blockSize:0} overlay_0-81:{mountpoint:/var/lib/containers/storage/overlay/0cf4bb94735ac378196ef77b5b2b61a7a832a3b9f0f7ad0776d57c2ee4c6fb04/merged major:0 minor:81 fsType:overlay blockSize:0} overlay_0-89:{mountpoint:/var/lib/containers/storage/overlay/e8d0f2e988ec9aa246cb50ae31217c5924f00a1a6668598d318fbd14882bef3f/merged major:0 minor:89 fsType:overlay blockSize:0}] Mar 18 08:48:56.147888 master-0 kubenswrapper[7620]: I0318 08:48:56.147150 7620 manager.go:217] Machine: {Timestamp:2026-03-18 08:48:56.146014587 +0000 UTC m=+0.140796359 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2800000 MemoryCapacity:33654128640 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:462ae4bbdf8a4211a5b04e094f4702bb SystemUUID:462ae4bb-df8a-4211-a5b0-4e094f4702bb BootID:8f184f3d-61e6-4234-a551-2580e849051e Filesystems:[{Device:/run/containers/storage/overlay-containers/c1e8680fcd730f22fac4464d7e2e919f0d68259c2072f7e2c075736c7c9f888d/userdata/shm 
DeviceMajor:0 DeviceMinor:105 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/edc7f629-4288-443b-aa8e-78bc6a09c848/volumes/kubernetes.io~secret/ovn-control-plane-metrics-cert DeviceMajor:0 DeviceMinor:124 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-264 DeviceMajor:0 DeviceMinor:264 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/d1bca7add53921531b3272a47166466f7d2ed78f903322c5f6c45062071f9671/userdata/shm DeviceMajor:0 DeviceMinor:109 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/7b07e88ac1eb70e2f8e0c7ac6bf4cc612d670ddad2d854d52139054ca73dfb7c/userdata/shm DeviceMajor:0 DeviceMinor:269 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-277 DeviceMajor:0 DeviceMinor:277 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-283 DeviceMajor:0 DeviceMinor:283 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-287 DeviceMajor:0 DeviceMinor:287 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/fcf89a76-7a94-46d3-853e-68e986563764/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:215 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/939efa41-8f40-4f91-bee4-0425aead9760/volumes/kubernetes.io~secret/etcd-client DeviceMajor:0 DeviceMinor:220 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/bd8ee9ae-4765-46bc-84d7-b5857fc3fb4a/volumes/kubernetes.io~projected/kube-api-access-8lsw9 DeviceMajor:0 DeviceMinor:225 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-273 DeviceMajor:0 DeviceMinor:273 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-111 DeviceMajor:0 DeviceMinor:111 
Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-46 DeviceMajor:0 DeviceMinor:46 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-52 DeviceMajor:0 DeviceMinor:52 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-163 DeviceMajor:0 DeviceMinor:163 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/var/lib/kubelet/pods/b0280499-8277-46f0-bd8c-058a47a99e19/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:241 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/f1fbd15a6f55efb9df34e794516a926fbd6cd9758a5312e86f1eb743de9e13b5/userdata/shm DeviceMajor:0 DeviceMinor:260 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-68 DeviceMajor:0 DeviceMinor:68 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/6fb1f871-9c24-48a1-a15a-a636b5bb687d/volumes/kubernetes.io~projected/kube-api-access-wxxcn DeviceMajor:0 DeviceMinor:224 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/8a6ab2be-d018-4fd5-bfbb-6b88aec28663/volumes/kubernetes.io~projected/kube-api-access DeviceMajor:0 DeviceMinor:229 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/a058ca3e613163c208806f2f85e86778b10da29eadc77daac9aef1471afdc643/userdata/shm DeviceMajor:0 DeviceMinor:279 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/65a818ad31dbd4fa7bc3752867fcfb68d605bd15a5390e756d551630b2da7bfb/userdata/shm DeviceMajor:0 DeviceMinor:54 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-107 DeviceMajor:0 DeviceMinor:107 Capacity:214143315968 Type:vfs Inodes:104594880 
HasInodes:true} {Device:/var/lib/kubelet/pods/260c8aa5-a288-4ee8-b671-f97e90a2f39c/volumes/kubernetes.io~projected/kube-api-access DeviceMajor:0 DeviceMinor:236 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-89 DeviceMajor:0 DeviceMinor:89 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-64 DeviceMajor:0 DeviceMinor:64 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/07a4fd92-0fd1-4688-b2db-de615d75971e/volumes/kubernetes.io~projected/kube-api-access-5ngk7 DeviceMajor:0 DeviceMinor:103 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/d858bfd6-e69b-4c93-a6d7-95cc0fc3ca29/volumes/kubernetes.io~projected/kube-api-access-x6zq8 DeviceMajor:0 DeviceMinor:120 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/00b7669c60621e059b9f2a3185ba93db56934e35fa8fa0713c09f3decdea9378/userdata/shm DeviceMajor:0 DeviceMinor:128 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-140 DeviceMajor:0 DeviceMinor:140 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/59d50dd5-6793-4f96-a769-31e086ecc7e4/volumes/kubernetes.io~projected/kube-api-access-mlp7w DeviceMajor:0 DeviceMinor:227 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/3d9fe248-ba87-47e3-911a-1b2b112b5683/volumes/kubernetes.io~projected/kube-api-access-4hn9w DeviceMajor:0 DeviceMinor:245 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true} {Device:/run/containers/storage/overlay-containers/2e229ef6f57fea8e5406ee6259b2efa0f8a16c288c8a29c71c1e32c057bf84d0/userdata/shm DeviceMajor:0 DeviceMinor:254 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} 
{Device:/var/lib/kubelet/pods/7962fb40-1170-4c00-b1bf-92966aeae807/volumes/kubernetes.io~projected/bound-sa-token DeviceMajor:0 DeviceMinor:232 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/fef2da050284c5b28c67d998136cd7aca2118deb05e66bc5e9cea3da325d47dc/userdata/shm DeviceMajor:0 DeviceMinor:258 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/4d17f4a7fe14a2a472c626baa31e2712ee04373a3644e0529ddf244e8afaa854/userdata/shm DeviceMajor:0 DeviceMinor:50 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/2207df9e-f21e-4c30-98d5-248ae99c245e/volumes/kubernetes.io~projected/kube-api-access-cj9fr DeviceMajor:0 DeviceMinor:127 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/7962fb40-1170-4c00-b1bf-92966aeae807/volumes/kubernetes.io~projected/kube-api-access-47p9x DeviceMajor:0 DeviceMinor:234 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/159a26f5-3cfc-4db2-88e9-bff5d8a613fc/volumes/kubernetes.io~projected/kube-api-access-9hxtz DeviceMajor:0 DeviceMinor:263 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-271 DeviceMajor:0 DeviceMinor:271 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-138 DeviceMajor:0 DeviceMinor:138 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/16d633c5-e0aa-4fb6-83e0-a2e976334406/volumes/kubernetes.io~secret/webhook-cert DeviceMajor:0 DeviceMinor:136 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-149 DeviceMajor:0 DeviceMinor:149 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-189 DeviceMajor:0 DeviceMinor:189 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:/var/lib/kubelet/pods/b065df33-7911-456e-b3a2-1f8c8d53e053/volumes/kubernetes.io~projected/kube-api-access-pz26d DeviceMajor:0 DeviceMinor:228 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/ec11012b-536a-422f-afc4-d2d0fd4b67fb/volumes/kubernetes.io~projected/kube-api-access-svdhs DeviceMajor:0 DeviceMinor:235 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-275 DeviceMajor:0 DeviceMinor:275 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-301 DeviceMajor:0 DeviceMinor:301 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-71 DeviceMajor:0 DeviceMinor:71 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-62 DeviceMajor:0 DeviceMinor:62 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-81 DeviceMajor:0 DeviceMinor:81 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-43 DeviceMajor:0 DeviceMinor:43 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/e2ade7e6-cecd-4e98-8f85-ea8219303d75/volumes/kubernetes.io~secret/cluster-olm-operator-serving-cert DeviceMajor:0 DeviceMinor:217 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/837527d2f9f7319ea14fc20367ef17853e00cc20e938fc1184f891aa57296deb/userdata/shm DeviceMajor:0 DeviceMinor:249 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/17e72118bc9a21caf0710ea436fca2a94e237b39c26fb49832cf7ed5fa2efe7d/userdata/shm DeviceMajor:0 DeviceMinor:281 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-293 DeviceMajor:0 DeviceMinor:293 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:/run/containers/storage/overlay-containers/bd1fd64f6f95cdc3189bd097dac24d4300572f6ab92c972496e95007ac8e621a/userdata/shm DeviceMajor:0 DeviceMinor:42 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-171 DeviceMajor:0 DeviceMinor:171 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/8a6ab2be-d018-4fd5-bfbb-6b88aec28663/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:219 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/c110b293-2c6b-496b-b015-23aada98cb4b/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:242 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/263fd4cd6308173314717fc603c0f2464a1db66cd143ea0b303b9d029c2bd481/userdata/shm DeviceMajor:0 DeviceMinor:295 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-131 DeviceMajor:0 DeviceMinor:131 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/0152b496baa88626f806c2cd8158beac6c11d9696ef03e334ab29bac73c88cbe/userdata/shm DeviceMajor:0 DeviceMinor:130 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/573d3a02-e395-4816-963a-cd614ef53f75/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:214 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/b0280499-8277-46f0-bd8c-058a47a99e19/volumes/kubernetes.io~projected/kube-api-access-dxvk7 DeviceMajor:0 DeviceMinor:262 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/23865ef5bfea471643359580ecae55517bf670fdb3b8b05c871c139fe34b55d5/userdata/shm DeviceMajor:0 DeviceMinor:267 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-118 DeviceMajor:0 DeviceMinor:118 Capacity:214143315968 Type:vfs 
Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/5982111d-f4c6-4335-9b40-3142758fc2bc/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:240 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-56 DeviceMajor:0 DeviceMinor:56 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-154 DeviceMajor:0 DeviceMinor:154 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-179 DeviceMajor:0 DeviceMinor:179 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/ec11012b-536a-422f-afc4-d2d0fd4b67fb/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:209 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/15b9cae2d28df4fa59242b209b16efd412d30453ba1d9f0bfc42c07c896efdb2/userdata/shm DeviceMajor:0 DeviceMinor:238 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-113 DeviceMajor:0 DeviceMinor:113 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/34f2ec5e-cb68-415b-a9f2-5b7f10fa9bbe/volumes/kubernetes.io~projected/kube-api-access-2msp8 DeviceMajor:0 DeviceMinor:253 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/c110b293-2c6b-496b-b015-23aada98cb4b/volumes/kubernetes.io~projected/kube-api-access-lw27k DeviceMajor:0 DeviceMinor:256 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-289 DeviceMajor:0 DeviceMinor:289 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-194 DeviceMajor:0 DeviceMinor:194 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-147 DeviceMajor:0 DeviceMinor:147 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-184 DeviceMajor:0 DeviceMinor:184 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:overlay_0-195 DeviceMajor:0 DeviceMinor:195 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-204 DeviceMajor:0 DeviceMinor:204 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/dea41e38002f15edc5a2abae54e8fefc1a70d4002c8cd87d39c7bc11a4255185/userdata/shm DeviceMajor:0 DeviceMinor:243 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-48 DeviceMajor:0 DeviceMinor:48 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/772bc250-2e57-4ce0-883c-d44281fcb0be/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:216 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/var/lib/kubelet/pods/3d0b7f60-c32e-48a6-b9e9-87c8f018367d/volumes/kubernetes.io~projected/kube-api-access DeviceMajor:0 DeviceMinor:104 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-157 DeviceMajor:0 DeviceMinor:157 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-159 DeviceMajor:0 DeviceMinor:159 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/94e9e4fc-bfaa-4ba5-ad6d-fe76b91932e9/volumes/kubernetes.io~projected/kube-api-access-tk9jq DeviceMajor:0 DeviceMinor:221 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/301f04aeb1003f5e8d27049d79ee0b80e5fce89b95da440a253b676b3418f0d1/userdata/shm DeviceMajor:0 DeviceMinor:246 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/b42865dcd2dae3a2390972bbf267cd467643023a4c8d222016e0b44a61943afc/userdata/shm DeviceMajor:0 DeviceMinor:248 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 
Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/2207df9e-f21e-4c30-98d5-248ae99c245e/volumes/kubernetes.io~secret/ovn-node-metrics-cert DeviceMajor:0 DeviceMinor:126 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/260c8aa5-a288-4ee8-b671-f97e90a2f39c/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:213 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/772bc250-2e57-4ce0-883c-d44281fcb0be/volumes/kubernetes.io~projected/kube-api-access-dfjmx DeviceMajor:0 DeviceMinor:230 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/5982111d-f4c6-4335-9b40-3142758fc2bc/volumes/kubernetes.io~projected/kube-api-access DeviceMajor:0 DeviceMinor:252 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/866c259c-7661-4a80-873b-6fd625218665/volumes/kubernetes.io~projected/kube-api-access-ftdvp DeviceMajor:0 DeviceMinor:266 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-297 DeviceMajor:0 DeviceMinor:297 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/f9fa104a-4979-4023-8d7e-a965f11bc7db/volumes/kubernetes.io~projected/kube-api-access-jlwg9 DeviceMajor:0 DeviceMinor:115 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-73 DeviceMajor:0 DeviceMinor:73 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/e7b72267-fc08-41ed-a92b-9fca7372aba6/volumes/kubernetes.io~projected/kube-api-access-dwrdc DeviceMajor:0 DeviceMinor:255 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-285 DeviceMajor:0 DeviceMinor:285 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4/volumes/kubernetes.io~projected/kube-api-access-hpl2c 
DeviceMajor:0 DeviceMinor:102 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/939efa41-8f40-4f91-bee4-0425aead9760/volumes/kubernetes.io~projected/kube-api-access-8w58l DeviceMajor:0 DeviceMinor:231 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-299 DeviceMajor:0 DeviceMinor:299 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-60 DeviceMajor:0 DeviceMinor:60 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/07a4fd92-0fd1-4688-b2db-de615d75971e/volumes/kubernetes.io~secret/metrics-tls DeviceMajor:0 DeviceMinor:98 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/e2ade7e6-cecd-4e98-8f85-ea8219303d75/volumes/kubernetes.io~projected/kube-api-access-vfjgn DeviceMajor:0 DeviceMinor:222 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/c10d1b81b0a7054da8fb12459aa720b7916f5484be5a832bdacdc31fad36d2cc/userdata/shm DeviceMajor:0 DeviceMinor:41 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/16d633c5-e0aa-4fb6-83e0-a2e976334406/volumes/kubernetes.io~projected/kube-api-access-x9w7l DeviceMajor:0 DeviceMinor:137 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827064320 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-134 DeviceMajor:0 DeviceMinor:134 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/b48235a991ddd5e0dbc46936f4240a715253ffe775f0aa19da8ca60c7a3f2ca0/userdata/shm DeviceMajor:0 DeviceMinor:142 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/573d3a02-e395-4816-963a-cd614ef53f75/volumes/kubernetes.io~projected/kube-api-access-n959l DeviceMajor:0 DeviceMinor:233 Capacity:32475529216 Type:vfs 
Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/94e9e4fc-bfaa-4ba5-ad6d-fe76b91932e9/volumes/kubernetes.io~projected/bound-sa-token DeviceMajor:0 DeviceMinor:237 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-291 DeviceMajor:0 DeviceMinor:291 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/939efa41-8f40-4f91-bee4-0425aead9760/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:218 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/edc7f629-4288-443b-aa8e-78bc6a09c848/volumes/kubernetes.io~projected/kube-api-access-glt6c DeviceMajor:0 DeviceMinor:125 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/0e06ef30b0d712353cac23adca2af0b5ab657ead19ee838202a1a4e15b1021cb/userdata/shm DeviceMajor:0 DeviceMinor:116 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/fcf89a76-7a94-46d3-853e-68e986563764/volumes/kubernetes.io~projected/kube-api-access-s8prf DeviceMajor:0 DeviceMinor:223 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/e025d334-20e7-491f-8027-194251398747/volumes/kubernetes.io~projected/kube-api-access-bfzdk DeviceMajor:0 DeviceMinor:226 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/dda73eca8049d85d927941d52bde4240cdb56ba2b8f10407c2247ac72190f9f1/userdata/shm DeviceMajor:0 DeviceMinor:58 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/2207df9e-f21e-4c30-98d5-248ae99c245e/volume-subpaths/run-systemd/ovnkube-controller/6 DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none} 252:16:{Name:vdb Major:252 Minor:16 Size:21474836480 Scheduler:none} 252:32:{Name:vdc 
Major:252 Minor:32 Size:21474836480 Scheduler:none} 252:48:{Name:vdd Major:252 Minor:48 Size:21474836480 Scheduler:none} 252:64:{Name:vde Major:252 Minor:64 Size:21474836480 Scheduler:none}] NetworkDevices:[{Name:15b9cae2d28df4f MacAddress:0e:70:cb:72:3c:9b Speed:10000 Mtu:8900} {Name:17e72118bc9a21c MacAddress:4a:ba:a5:70:a5:2e Speed:10000 Mtu:8900} {Name:23865ef5bfea471 MacAddress:8a:75:02:d0:2e:2c Speed:10000 Mtu:8900} {Name:2e229ef6f57fea8 MacAddress:4a:10:80:82:2f:ae Speed:10000 Mtu:8900} {Name:301f04aeb1003f5 MacAddress:1e:8b:2c:0f:33:12 Speed:10000 Mtu:8900} {Name:7b07e88ac1eb70e MacAddress:56:39:e8:6f:8e:d7 Speed:10000 Mtu:8900} {Name:837527d2f9f7319 MacAddress:32:24:7e:2a:79:57 Speed:10000 Mtu:8900} {Name:a058ca3e613163c MacAddress:e2:d5:2d:04:db:05 Speed:10000 Mtu:8900} {Name:b42865dcd2dae3a MacAddress:7e:5e:62:83:dc:bc Speed:10000 Mtu:8900} {Name:br-ex MacAddress:fa:16:9e:81:f6:10 Speed:0 Mtu:9000} {Name:br-int MacAddress:ea:35:c0:05:b7:9e Speed:0 Mtu:8900} {Name:dea41e38002f15e MacAddress:aa:60:50:bf:12:e2 Speed:10000 Mtu:8900} {Name:eth0 MacAddress:fa:16:9e:81:f6:10 Speed:-1 Mtu:9000} {Name:eth1 MacAddress:fa:16:3e:cd:49:09 Speed:-1 Mtu:9000} {Name:f1fbd15a6f55efb MacAddress:2a:63:41:85:8e:2a Speed:10000 Mtu:8900} {Name:fef2da050284c5b MacAddress:9e:bb:19:48:d5:92 Speed:10000 Mtu:8900} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:80:00:02 Speed:0 Mtu:8900} {Name:ovs-system MacAddress:76:d1:4e:31:92:01 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654128640 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] 
SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 
Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Mar 18 08:48:56.147888 master-0 kubenswrapper[7620]: I0318 08:48:56.147838 7620 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. Mar 18 08:48:56.148415 master-0 kubenswrapper[7620]: I0318 08:48:56.148059 7620 manager.go:233] Version: {KernelVersion:5.14.0-427.113.1.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202603021444-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Mar 18 08:48:56.148415 master-0 kubenswrapper[7620]: I0318 08:48:56.148337 7620 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Mar 18 08:48:56.148584 master-0 kubenswrapper[7620]: I0318 08:48:56.148505 7620 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 18 08:48:56.148829 master-0 kubenswrapper[7620]: I0318 08:48:56.148571 7620 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"master-0","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"500m","ephemeral-storage":"1Gi","memory":"1Gi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 18 08:48:56.149025 master-0 kubenswrapper[7620]: I0318 08:48:56.148845 7620 topology_manager.go:138] "Creating topology manager with none policy" Mar 18 08:48:56.149025 master-0 kubenswrapper[7620]: I0318 08:48:56.148874 7620 container_manager_linux.go:303] "Creating device plugin manager" Mar 18 08:48:56.149025 master-0 kubenswrapper[7620]: I0318 08:48:56.148887 7620 manager.go:142] 
"Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Mar 18 08:48:56.149025 master-0 kubenswrapper[7620]: I0318 08:48:56.148930 7620 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Mar 18 08:48:56.149246 master-0 kubenswrapper[7620]: I0318 08:48:56.149213 7620 state_mem.go:36] "Initialized new in-memory state store" Mar 18 08:48:56.149336 master-0 kubenswrapper[7620]: I0318 08:48:56.149317 7620 server.go:1245] "Using root directory" path="/var/lib/kubelet" Mar 18 08:48:56.149430 master-0 kubenswrapper[7620]: I0318 08:48:56.149405 7620 kubelet.go:418] "Attempting to sync node with API server" Mar 18 08:48:56.149430 master-0 kubenswrapper[7620]: I0318 08:48:56.149426 7620 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 18 08:48:56.149549 master-0 kubenswrapper[7620]: I0318 08:48:56.149446 7620 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Mar 18 08:48:56.149549 master-0 kubenswrapper[7620]: I0318 08:48:56.149463 7620 kubelet.go:324] "Adding apiserver pod source" Mar 18 08:48:56.149549 master-0 kubenswrapper[7620]: I0318 08:48:56.149487 7620 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 18 08:48:56.152892 master-0 kubenswrapper[7620]: I0318 08:48:56.151407 7620 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.13-8.rhaos4.18.gitd78977c.el9" apiVersion="v1" Mar 18 08:48:56.152892 master-0 kubenswrapper[7620]: I0318 08:48:56.151649 7620 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem". 
Mar 18 08:48:56.152892 master-0 kubenswrapper[7620]: I0318 08:48:56.152131 7620 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Mar 18 08:48:56.152892 master-0 kubenswrapper[7620]: I0318 08:48:56.152311 7620 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Mar 18 08:48:56.152892 master-0 kubenswrapper[7620]: I0318 08:48:56.152341 7620 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Mar 18 08:48:56.152892 master-0 kubenswrapper[7620]: I0318 08:48:56.152355 7620 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Mar 18 08:48:56.152892 master-0 kubenswrapper[7620]: I0318 08:48:56.152369 7620 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Mar 18 08:48:56.152892 master-0 kubenswrapper[7620]: I0318 08:48:56.152382 7620 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Mar 18 08:48:56.152892 master-0 kubenswrapper[7620]: I0318 08:48:56.152396 7620 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Mar 18 08:48:56.152892 master-0 kubenswrapper[7620]: I0318 08:48:56.152408 7620 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Mar 18 08:48:56.152892 master-0 kubenswrapper[7620]: I0318 08:48:56.152420 7620 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Mar 18 08:48:56.152892 master-0 kubenswrapper[7620]: I0318 08:48:56.152435 7620 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Mar 18 08:48:56.152892 master-0 kubenswrapper[7620]: I0318 08:48:56.152449 7620 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Mar 18 08:48:56.152892 master-0 kubenswrapper[7620]: I0318 08:48:56.152467 7620 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Mar 18 08:48:56.152892 master-0 kubenswrapper[7620]: I0318 08:48:56.152490 7620 plugins.go:603] "Loaded volume plugin" 
pluginName="kubernetes.io/local-volume" Mar 18 08:48:56.152892 master-0 kubenswrapper[7620]: I0318 08:48:56.152543 7620 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi" Mar 18 08:48:56.153513 master-0 kubenswrapper[7620]: I0318 08:48:56.153141 7620 server.go:1280] "Started kubelet" Mar 18 08:48:56.153513 master-0 kubenswrapper[7620]: I0318 08:48:56.153418 7620 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Mar 18 08:48:56.153611 master-0 kubenswrapper[7620]: I0318 08:48:56.153455 7620 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 18 08:48:56.153611 master-0 kubenswrapper[7620]: I0318 08:48:56.153568 7620 server_v1.go:47] "podresources" method="list" useActivePods=true Mar 18 08:48:56.154888 master-0 kubenswrapper[7620]: I0318 08:48:56.154527 7620 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 18 08:48:56.154959 master-0 systemd[1]: Started Kubernetes Kubelet. 
Mar 18 08:48:56.156197 master-0 kubenswrapper[7620]: I0318 08:48:56.156152 7620 server.go:449] "Adding debug handlers to kubelet server"
Mar 18 08:48:56.171061 master-0 kubenswrapper[7620]: I0318 08:48:56.171014 7620 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160
Mar 18 08:48:56.172069 master-0 kubenswrapper[7620]: I0318 08:48:56.171943 7620 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled
Mar 18 08:48:56.172217 master-0 kubenswrapper[7620]: I0318 08:48:56.172045 7620 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Mar 18 08:48:56.172300 master-0 kubenswrapper[7620]: I0318 08:48:56.172204 7620 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-03-19 08:38:09 +0000 UTC, rotation deadline is 2026-03-19 04:44:56.681753235 +0000 UTC
Mar 18 08:48:56.172300 master-0 kubenswrapper[7620]: I0318 08:48:56.172249 7620 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 19h56m0.509507815s for next certificate rotation
Mar 18 08:48:56.172939 master-0 kubenswrapper[7620]: I0318 08:48:56.172890 7620 volume_manager.go:287] "The desired_state_of_world populator starts"
Mar 18 08:48:56.172939 master-0 kubenswrapper[7620]: I0318 08:48:56.172920 7620 volume_manager.go:289] "Starting Kubelet Volume Manager"
Mar 18 08:48:56.173171 master-0 kubenswrapper[7620]: I0318 08:48:56.173146 7620 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160
Mar 18 08:48:56.173234 master-0 kubenswrapper[7620]: E0318 08:48:56.173192 7620 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 18 08:48:56.178642 master-0 kubenswrapper[7620]: I0318 08:48:56.178615 7620 factory.go:55] Registering systemd factory
Mar 18 08:48:56.178642 master-0 kubenswrapper[7620]: I0318 08:48:56.178645 7620 factory.go:221] Registration of the systemd container factory successfully
Mar 18 08:48:56.179293 master-0 kubenswrapper[7620]: I0318 08:48:56.179257 7620 desired_state_of_world_populator.go:147] "Desired state populator starts to run"
Mar 18 08:48:56.179293 master-0 kubenswrapper[7620]: I0318 08:48:56.179278 7620 factory.go:153] Registering CRI-O factory
Mar 18 08:48:56.179421 master-0 kubenswrapper[7620]: I0318 08:48:56.179326 7620 factory.go:221] Registration of the crio container factory successfully
Mar 18 08:48:56.179472 master-0 kubenswrapper[7620]: I0318 08:48:56.179462 7620 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
Mar 18 08:48:56.179520 master-0 kubenswrapper[7620]: I0318 08:48:56.179496 7620 factory.go:103] Registering Raw factory
Mar 18 08:48:56.179561 master-0 kubenswrapper[7620]: I0318 08:48:56.179524 7620 manager.go:1196] Started watching for new ooms in manager
Mar 18 08:48:56.180316 master-0 kubenswrapper[7620]: I0318 08:48:56.180275 7620 manager.go:319] Starting recovery of all containers
Mar 18 08:48:56.189926 master-0 kubenswrapper[7620]: I0318 08:48:56.185985 7620 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2207df9e-f21e-4c30-98d5-248ae99c245e" volumeName="kubernetes.io/configmap/2207df9e-f21e-4c30-98d5-248ae99c245e-ovnkube-config" seLinuxMountContext=""
Mar 18 08:48:56.189926 master-0 kubenswrapper[7620]: I0318 08:48:56.186064 7620 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7962fb40-1170-4c00-b1bf-92966aeae807" volumeName="kubernetes.io/configmap/7962fb40-1170-4c00-b1bf-92966aeae807-trusted-ca" seLinuxMountContext=""
Mar 18 08:48:56.189926 master-0 kubenswrapper[7620]: I0318 08:48:56.186091 7620 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8a6ab2be-d018-4fd5-bfbb-6b88aec28663" volumeName="kubernetes.io/configmap/8a6ab2be-d018-4fd5-bfbb-6b88aec28663-config" seLinuxMountContext=""
Mar 18 08:48:56.189926 master-0 kubenswrapper[7620]: I0318 08:48:56.186114 7620 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="939efa41-8f40-4f91-bee4-0425aead9760" volumeName="kubernetes.io/configmap/939efa41-8f40-4f91-bee4-0425aead9760-config" seLinuxMountContext=""
Mar 18 08:48:56.189926 master-0 kubenswrapper[7620]: I0318 08:48:56.186128 7620 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d858bfd6-e69b-4c93-a6d7-95cc0fc3ca29" volumeName="kubernetes.io/projected/d858bfd6-e69b-4c93-a6d7-95cc0fc3ca29-kube-api-access-x6zq8" seLinuxMountContext=""
Mar 18 08:48:56.189926 master-0 kubenswrapper[7620]: I0318 08:48:56.186148 7620 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e2ade7e6-cecd-4e98-8f85-ea8219303d75" volumeName="kubernetes.io/secret/e2ade7e6-cecd-4e98-8f85-ea8219303d75-cluster-olm-operator-serving-cert" seLinuxMountContext=""
Mar 18 08:48:56.189926 master-0 kubenswrapper[7620]: I0318 08:48:56.186165 7620 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fcf89a76-7a94-46d3-853e-68e986563764" volumeName="kubernetes.io/projected/fcf89a76-7a94-46d3-853e-68e986563764-kube-api-access-s8prf" seLinuxMountContext=""
Mar 18 08:48:56.189926 master-0 kubenswrapper[7620]: I0318 08:48:56.186186 7620 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="34f2ec5e-cb68-415b-a9f2-5b7f10fa9bbe" volumeName="kubernetes.io/configmap/34f2ec5e-cb68-415b-a9f2-5b7f10fa9bbe-marketplace-trusted-ca" seLinuxMountContext=""
Mar 18 08:48:56.189926 master-0 kubenswrapper[7620]: I0318 08:48:56.186210 7620 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="573d3a02-e395-4816-963a-cd614ef53f75" volumeName="kubernetes.io/secret/573d3a02-e395-4816-963a-cd614ef53f75-serving-cert" seLinuxMountContext=""
Mar 18 08:48:56.189926 master-0 kubenswrapper[7620]: I0318 08:48:56.186263 7620 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="939efa41-8f40-4f91-bee4-0425aead9760" volumeName="kubernetes.io/projected/939efa41-8f40-4f91-bee4-0425aead9760-kube-api-access-8w58l" seLinuxMountContext=""
Mar 18 08:48:56.189926 master-0 kubenswrapper[7620]: I0318 08:48:56.186285 7620 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b0280499-8277-46f0-bd8c-058a47a99e19" volumeName="kubernetes.io/secret/b0280499-8277-46f0-bd8c-058a47a99e19-serving-cert" seLinuxMountContext=""
Mar 18 08:48:56.189926 master-0 kubenswrapper[7620]: I0318 08:48:56.186307 7620 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b065df33-7911-456e-b3a2-1f8c8d53e053" volumeName="kubernetes.io/projected/b065df33-7911-456e-b3a2-1f8c8d53e053-kube-api-access-pz26d" seLinuxMountContext=""
Mar 18 08:48:56.189926 master-0 kubenswrapper[7620]: I0318 08:48:56.186331 7620 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd8ee9ae-4765-46bc-84d7-b5857fc3fb4a" volumeName="kubernetes.io/configmap/bd8ee9ae-4765-46bc-84d7-b5857fc3fb4a-trusted-ca" seLinuxMountContext=""
Mar 18 08:48:56.189926 master-0 kubenswrapper[7620]: I0318 08:48:56.186384 7620 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="16d633c5-e0aa-4fb6-83e0-a2e976334406" volumeName="kubernetes.io/secret/16d633c5-e0aa-4fb6-83e0-a2e976334406-webhook-cert" seLinuxMountContext=""
Mar 18 08:48:56.189926 master-0 kubenswrapper[7620]: I0318 08:48:56.186401 7620 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="260c8aa5-a288-4ee8-b671-f97e90a2f39c" volumeName="kubernetes.io/projected/260c8aa5-a288-4ee8-b671-f97e90a2f39c-kube-api-access" seLinuxMountContext=""
Mar 18 08:48:56.189926 master-0 kubenswrapper[7620]: I0318 08:48:56.186416 7620 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3d0b7f60-c32e-48a6-b9e9-87c8f018367d" volumeName="kubernetes.io/configmap/3d0b7f60-c32e-48a6-b9e9-87c8f018367d-service-ca" seLinuxMountContext=""
Mar 18 08:48:56.189926 master-0 kubenswrapper[7620]: I0318 08:48:56.186429 7620 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="939efa41-8f40-4f91-bee4-0425aead9760" volumeName="kubernetes.io/secret/939efa41-8f40-4f91-bee4-0425aead9760-serving-cert" seLinuxMountContext=""
Mar 18 08:48:56.189926 master-0 kubenswrapper[7620]: I0318 08:48:56.186444 7620 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="939efa41-8f40-4f91-bee4-0425aead9760" volumeName="kubernetes.io/configmap/939efa41-8f40-4f91-bee4-0425aead9760-etcd-ca" seLinuxMountContext=""
Mar 18 08:48:56.189926 master-0 kubenswrapper[7620]: I0318 08:48:56.186456 7620 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd8ee9ae-4765-46bc-84d7-b5857fc3fb4a" volumeName="kubernetes.io/projected/bd8ee9ae-4765-46bc-84d7-b5857fc3fb4a-kube-api-access-8lsw9" seLinuxMountContext=""
Mar 18 08:48:56.189926 master-0 kubenswrapper[7620]: I0318 08:48:56.186470 7620 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c110b293-2c6b-496b-b015-23aada98cb4b" volumeName="kubernetes.io/configmap/c110b293-2c6b-496b-b015-23aada98cb4b-trusted-ca-bundle" seLinuxMountContext=""
Mar 18 08:48:56.189926 master-0 kubenswrapper[7620]: I0318 08:48:56.186483 7620 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c110b293-2c6b-496b-b015-23aada98cb4b" volumeName="kubernetes.io/projected/c110b293-2c6b-496b-b015-23aada98cb4b-kube-api-access-lw27k" seLinuxMountContext=""
Mar 18 08:48:56.189926 master-0 kubenswrapper[7620]: I0318 08:48:56.186497 7620 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7b72267-fc08-41ed-a92b-9fca7372aba6" volumeName="kubernetes.io/projected/e7b72267-fc08-41ed-a92b-9fca7372aba6-kube-api-access-dwrdc" seLinuxMountContext=""
Mar 18 08:48:56.189926 master-0 kubenswrapper[7620]: I0318 08:48:56.186512 7620 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f9fa104a-4979-4023-8d7e-a965f11bc7db" volumeName="kubernetes.io/configmap/f9fa104a-4979-4023-8d7e-a965f11bc7db-cni-binary-copy" seLinuxMountContext=""
Mar 18 08:48:56.189926 master-0 kubenswrapper[7620]: I0318 08:48:56.186525 7620 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fcf89a76-7a94-46d3-853e-68e986563764" volumeName="kubernetes.io/configmap/fcf89a76-7a94-46d3-853e-68e986563764-config" seLinuxMountContext=""
Mar 18 08:48:56.189926 master-0 kubenswrapper[7620]: I0318 08:48:56.186539 7620 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2207df9e-f21e-4c30-98d5-248ae99c245e" volumeName="kubernetes.io/secret/2207df9e-f21e-4c30-98d5-248ae99c245e-ovn-node-metrics-cert" seLinuxMountContext=""
Mar 18 08:48:56.189926 master-0 kubenswrapper[7620]: I0318 08:48:56.186556 7620 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3d9fe248-ba87-47e3-911a-1b2b112b5683" volumeName="kubernetes.io/projected/3d9fe248-ba87-47e3-911a-1b2b112b5683-kube-api-access-4hn9w" seLinuxMountContext=""
Mar 18 08:48:56.189926 master-0 kubenswrapper[7620]: I0318 08:48:56.186573 7620 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5982111d-f4c6-4335-9b40-3142758fc2bc" volumeName="kubernetes.io/configmap/5982111d-f4c6-4335-9b40-3142758fc2bc-config" seLinuxMountContext=""
Mar 18 08:48:56.189926 master-0 kubenswrapper[7620]: I0318 08:48:56.186587 7620 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5982111d-f4c6-4335-9b40-3142758fc2bc" volumeName="kubernetes.io/projected/5982111d-f4c6-4335-9b40-3142758fc2bc-kube-api-access" seLinuxMountContext=""
Mar 18 08:48:56.189926 master-0 kubenswrapper[7620]: I0318 08:48:56.186603 7620 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b0280499-8277-46f0-bd8c-058a47a99e19" volumeName="kubernetes.io/configmap/b0280499-8277-46f0-bd8c-058a47a99e19-config" seLinuxMountContext=""
Mar 18 08:48:56.189926 master-0 kubenswrapper[7620]: I0318 08:48:56.186620 7620 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7b72267-fc08-41ed-a92b-9fca7372aba6" volumeName="kubernetes.io/configmap/e7b72267-fc08-41ed-a92b-9fca7372aba6-telemetry-config" seLinuxMountContext=""
Mar 18 08:48:56.189926 master-0 kubenswrapper[7620]: I0318 08:48:56.186631 7620 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ec11012b-536a-422f-afc4-d2d0fd4b67fb" volumeName="kubernetes.io/secret/ec11012b-536a-422f-afc4-d2d0fd4b67fb-serving-cert" seLinuxMountContext=""
Mar 18 08:48:56.189926 master-0 kubenswrapper[7620]: I0318 08:48:56.186653 7620 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="939efa41-8f40-4f91-bee4-0425aead9760" volumeName="kubernetes.io/configmap/939efa41-8f40-4f91-bee4-0425aead9760-etcd-service-ca" seLinuxMountContext=""
Mar 18 08:48:56.189926 master-0 kubenswrapper[7620]: I0318 08:48:56.186664 7620 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e2ade7e6-cecd-4e98-8f85-ea8219303d75" volumeName="kubernetes.io/projected/e2ade7e6-cecd-4e98-8f85-ea8219303d75-kube-api-access-vfjgn" seLinuxMountContext=""
Mar 18 08:48:56.189926 master-0 kubenswrapper[7620]: I0318 08:48:56.186674 7620 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fcf89a76-7a94-46d3-853e-68e986563764" volumeName="kubernetes.io/secret/fcf89a76-7a94-46d3-853e-68e986563764-serving-cert" seLinuxMountContext=""
Mar 18 08:48:56.189926 master-0 kubenswrapper[7620]: I0318 08:48:56.186707 7620 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="772bc250-2e57-4ce0-883c-d44281fcb0be" volumeName="kubernetes.io/configmap/772bc250-2e57-4ce0-883c-d44281fcb0be-config" seLinuxMountContext=""
Mar 18 08:48:56.189926 master-0 kubenswrapper[7620]: I0318 08:48:56.186720 7620 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="772bc250-2e57-4ce0-883c-d44281fcb0be" volumeName="kubernetes.io/projected/772bc250-2e57-4ce0-883c-d44281fcb0be-kube-api-access-dfjmx" seLinuxMountContext=""
Mar 18 08:48:56.189926 master-0 kubenswrapper[7620]: I0318 08:48:56.186730 7620 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7962fb40-1170-4c00-b1bf-92966aeae807" volumeName="kubernetes.io/projected/7962fb40-1170-4c00-b1bf-92966aeae807-kube-api-access-47p9x" seLinuxMountContext=""
Mar 18 08:48:56.189926 master-0 kubenswrapper[7620]: I0318 08:48:56.186739 7620 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c110b293-2c6b-496b-b015-23aada98cb4b" volumeName="kubernetes.io/secret/c110b293-2c6b-496b-b015-23aada98cb4b-serving-cert" seLinuxMountContext=""
Mar 18 08:48:56.189926 master-0 kubenswrapper[7620]: I0318 08:48:56.186748 7620 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8a6ab2be-d018-4fd5-bfbb-6b88aec28663" volumeName="kubernetes.io/projected/8a6ab2be-d018-4fd5-bfbb-6b88aec28663-kube-api-access" seLinuxMountContext=""
Mar 18 08:48:56.189926 master-0 kubenswrapper[7620]: I0318 08:48:56.186760 7620 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ec11012b-536a-422f-afc4-d2d0fd4b67fb" volumeName="kubernetes.io/configmap/ec11012b-536a-422f-afc4-d2d0fd4b67fb-config" seLinuxMountContext=""
Mar 18 08:48:56.189926 master-0 kubenswrapper[7620]: I0318 08:48:56.186752 7620 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160
Mar 18 08:48:56.189926 master-0 kubenswrapper[7620]: I0318 08:48:56.186771 7620 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="edc7f629-4288-443b-aa8e-78bc6a09c848" volumeName="kubernetes.io/configmap/edc7f629-4288-443b-aa8e-78bc6a09c848-ovnkube-config" seLinuxMountContext=""
Mar 18 08:48:56.189926 master-0 kubenswrapper[7620]: I0318 08:48:56.187123 7620 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="edc7f629-4288-443b-aa8e-78bc6a09c848" volumeName="kubernetes.io/configmap/edc7f629-4288-443b-aa8e-78bc6a09c848-env-overrides" seLinuxMountContext=""
Mar 18 08:48:56.189926 master-0 kubenswrapper[7620]: I0318 08:48:56.187164 7620 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f9fa104a-4979-4023-8d7e-a965f11bc7db" volumeName="kubernetes.io/configmap/f9fa104a-4979-4023-8d7e-a965f11bc7db-whereabouts-flatfile-configmap" seLinuxMountContext=""
Mar 18 08:48:56.189926 master-0 kubenswrapper[7620]: I0318 08:48:56.187186 7620 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="07a4fd92-0fd1-4688-b2db-de615d75971e" volumeName="kubernetes.io/projected/07a4fd92-0fd1-4688-b2db-de615d75971e-kube-api-access-5ngk7" seLinuxMountContext=""
Mar 18 08:48:56.189926 master-0 kubenswrapper[7620]: I0318 08:48:56.187204 7620 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2207df9e-f21e-4c30-98d5-248ae99c245e" volumeName="kubernetes.io/projected/2207df9e-f21e-4c30-98d5-248ae99c245e-kube-api-access-cj9fr" seLinuxMountContext=""
Mar 18 08:48:56.189926 master-0 kubenswrapper[7620]: I0318 08:48:56.187220 7620 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6fb1f871-9c24-48a1-a15a-a636b5bb687d" volumeName="kubernetes.io/projected/6fb1f871-9c24-48a1-a15a-a636b5bb687d-kube-api-access-wxxcn" seLinuxMountContext=""
Mar 18 08:48:56.189926 master-0 kubenswrapper[7620]: I0318 08:48:56.187236 7620 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="772bc250-2e57-4ce0-883c-d44281fcb0be" volumeName="kubernetes.io/secret/772bc250-2e57-4ce0-883c-d44281fcb0be-serving-cert" seLinuxMountContext=""
Mar 18 08:48:56.189926 master-0 kubenswrapper[7620]: I0318 08:48:56.187250 7620 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="866c259c-7661-4a80-873b-6fd625218665" volumeName="kubernetes.io/configmap/866c259c-7661-4a80-873b-6fd625218665-iptables-alerter-script" seLinuxMountContext=""
Mar 18 08:48:56.189926 master-0 kubenswrapper[7620]: I0318 08:48:56.187268 7620 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="edc7f629-4288-443b-aa8e-78bc6a09c848" volumeName="kubernetes.io/secret/edc7f629-4288-443b-aa8e-78bc6a09c848-ovn-control-plane-metrics-cert" seLinuxMountContext=""
Mar 18 08:48:56.189926 master-0 kubenswrapper[7620]: I0318 08:48:56.187284 7620 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4" volumeName="kubernetes.io/projected/fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4-kube-api-access-hpl2c" seLinuxMountContext=""
Mar 18 08:48:56.189926 master-0 kubenswrapper[7620]: I0318 08:48:56.187300 7620 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="16d633c5-e0aa-4fb6-83e0-a2e976334406" volumeName="kubernetes.io/configmap/16d633c5-e0aa-4fb6-83e0-a2e976334406-ovnkube-identity-cm" seLinuxMountContext=""
Mar 18 08:48:56.189926 master-0 kubenswrapper[7620]: I0318 08:48:56.187315 7620 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="573d3a02-e395-4816-963a-cd614ef53f75" volumeName="kubernetes.io/projected/573d3a02-e395-4816-963a-cd614ef53f75-kube-api-access-n959l" seLinuxMountContext=""
Mar 18 08:48:56.189926 master-0 kubenswrapper[7620]: I0318 08:48:56.187340 7620 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="939efa41-8f40-4f91-bee4-0425aead9760" volumeName="kubernetes.io/secret/939efa41-8f40-4f91-bee4-0425aead9760-etcd-client" seLinuxMountContext=""
Mar 18 08:48:56.189926 master-0 kubenswrapper[7620]: I0318 08:48:56.187357 7620 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f9fa104a-4979-4023-8d7e-a965f11bc7db" volumeName="kubernetes.io/projected/f9fa104a-4979-4023-8d7e-a965f11bc7db-kube-api-access-jlwg9" seLinuxMountContext=""
Mar 18 08:48:56.189926 master-0 kubenswrapper[7620]: I0318 08:48:56.187374 7620 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="07a4fd92-0fd1-4688-b2db-de615d75971e" volumeName="kubernetes.io/secret/07a4fd92-0fd1-4688-b2db-de615d75971e-metrics-tls" seLinuxMountContext=""
Mar 18 08:48:56.189926 master-0 kubenswrapper[7620]: I0318 08:48:56.187390 7620 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="16d633c5-e0aa-4fb6-83e0-a2e976334406" volumeName="kubernetes.io/projected/16d633c5-e0aa-4fb6-83e0-a2e976334406-kube-api-access-x9w7l" seLinuxMountContext=""
Mar 18 08:48:56.189926 master-0 kubenswrapper[7620]: I0318 08:48:56.187405 7620 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2207df9e-f21e-4c30-98d5-248ae99c245e" volumeName="kubernetes.io/configmap/2207df9e-f21e-4c30-98d5-248ae99c245e-env-overrides" seLinuxMountContext=""
Mar 18 08:48:56.189926 master-0 kubenswrapper[7620]: I0318 08:48:56.187420 7620 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="260c8aa5-a288-4ee8-b671-f97e90a2f39c" volumeName="kubernetes.io/configmap/260c8aa5-a288-4ee8-b671-f97e90a2f39c-config" seLinuxMountContext=""
Mar 18 08:48:56.189926 master-0 kubenswrapper[7620]: I0318 08:48:56.187436 7620 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="34f2ec5e-cb68-415b-a9f2-5b7f10fa9bbe" volumeName="kubernetes.io/projected/34f2ec5e-cb68-415b-a9f2-5b7f10fa9bbe-kube-api-access-2msp8" seLinuxMountContext=""
Mar 18 08:48:56.189926 master-0 kubenswrapper[7620]: I0318 08:48:56.187457 7620 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3d0b7f60-c32e-48a6-b9e9-87c8f018367d" volumeName="kubernetes.io/projected/3d0b7f60-c32e-48a6-b9e9-87c8f018367d-kube-api-access" seLinuxMountContext=""
Mar 18 08:48:56.189926 master-0 kubenswrapper[7620]: I0318 08:48:56.187477 7620 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e025d334-20e7-491f-8027-194251398747" volumeName="kubernetes.io/projected/e025d334-20e7-491f-8027-194251398747-kube-api-access-bfzdk" seLinuxMountContext=""
Mar 18 08:48:56.189926 master-0 kubenswrapper[7620]: I0318 08:48:56.187496 7620 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f9fa104a-4979-4023-8d7e-a965f11bc7db" volumeName="kubernetes.io/configmap/f9fa104a-4979-4023-8d7e-a965f11bc7db-cni-sysctl-allowlist" seLinuxMountContext=""
Mar 18 08:48:56.189926 master-0 kubenswrapper[7620]: I0318 08:48:56.187518 7620 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8a6ab2be-d018-4fd5-bfbb-6b88aec28663" volumeName="kubernetes.io/secret/8a6ab2be-d018-4fd5-bfbb-6b88aec28663-serving-cert" seLinuxMountContext=""
Mar 18 08:48:56.189926 master-0 kubenswrapper[7620]: I0318 08:48:56.187534 7620 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="94e9e4fc-bfaa-4ba5-ad6d-fe76b91932e9" volumeName="kubernetes.io/projected/94e9e4fc-bfaa-4ba5-ad6d-fe76b91932e9-bound-sa-token" seLinuxMountContext=""
Mar 18 08:48:56.189926 master-0 kubenswrapper[7620]: I0318 08:48:56.187549 7620 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e2ade7e6-cecd-4e98-8f85-ea8219303d75" volumeName="kubernetes.io/empty-dir/e2ade7e6-cecd-4e98-8f85-ea8219303d75-operand-assets" seLinuxMountContext=""
Mar 18 08:48:56.189926 master-0 kubenswrapper[7620]: I0318 08:48:56.187564 7620 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="16d633c5-e0aa-4fb6-83e0-a2e976334406" volumeName="kubernetes.io/configmap/16d633c5-e0aa-4fb6-83e0-a2e976334406-env-overrides" seLinuxMountContext=""
Mar 18 08:48:56.189926 master-0 kubenswrapper[7620]: I0318 08:48:56.187582 7620 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="573d3a02-e395-4816-963a-cd614ef53f75" volumeName="kubernetes.io/empty-dir/573d3a02-e395-4816-963a-cd614ef53f75-available-featuregates" seLinuxMountContext=""
Mar 18 08:48:56.189926 master-0 kubenswrapper[7620]: I0318 08:48:56.187597 7620 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="94e9e4fc-bfaa-4ba5-ad6d-fe76b91932e9" volumeName="kubernetes.io/configmap/94e9e4fc-bfaa-4ba5-ad6d-fe76b91932e9-trusted-ca" seLinuxMountContext=""
Mar 18 08:48:56.189926 master-0 kubenswrapper[7620]: I0318 08:48:56.187613 7620 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b0280499-8277-46f0-bd8c-058a47a99e19" volumeName="kubernetes.io/projected/b0280499-8277-46f0-bd8c-058a47a99e19-kube-api-access-dxvk7" seLinuxMountContext=""
Mar 18 08:48:56.189926 master-0 kubenswrapper[7620]: I0318 08:48:56.187628 7620 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c110b293-2c6b-496b-b015-23aada98cb4b" volumeName="kubernetes.io/configmap/c110b293-2c6b-496b-b015-23aada98cb4b-service-ca-bundle" seLinuxMountContext=""
Mar 18 08:48:56.189926 master-0 kubenswrapper[7620]: I0318 08:48:56.187643 7620 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ec11012b-536a-422f-afc4-d2d0fd4b67fb" volumeName="kubernetes.io/projected/ec11012b-536a-422f-afc4-d2d0fd4b67fb-kube-api-access-svdhs" seLinuxMountContext=""
Mar 18 08:48:56.189926 master-0 kubenswrapper[7620]: I0318 08:48:56.187662 7620 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5982111d-f4c6-4335-9b40-3142758fc2bc" volumeName="kubernetes.io/secret/5982111d-f4c6-4335-9b40-3142758fc2bc-serving-cert" seLinuxMountContext=""
Mar 18 08:48:56.189926 master-0 kubenswrapper[7620]: I0318 08:48:56.187676 7620 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="59d50dd5-6793-4f96-a769-31e086ecc7e4" volumeName="kubernetes.io/projected/59d50dd5-6793-4f96-a769-31e086ecc7e4-kube-api-access-mlp7w" seLinuxMountContext=""
Mar 18 08:48:56.189926 master-0 kubenswrapper[7620]: I0318 08:48:56.187693 7620 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="866c259c-7661-4a80-873b-6fd625218665" volumeName="kubernetes.io/projected/866c259c-7661-4a80-873b-6fd625218665-kube-api-access-ftdvp" seLinuxMountContext=""
Mar 18 08:48:56.189926 master-0 kubenswrapper[7620]: I0318 08:48:56.187708 7620 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c110b293-2c6b-496b-b015-23aada98cb4b" volumeName="kubernetes.io/configmap/c110b293-2c6b-496b-b015-23aada98cb4b-config" seLinuxMountContext=""
Mar 18 08:48:56.189926 master-0 kubenswrapper[7620]: I0318 08:48:56.187725 7620 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4" volumeName="kubernetes.io/configmap/fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4-cni-binary-copy" seLinuxMountContext=""
Mar 18 08:48:56.189926 master-0 kubenswrapper[7620]: I0318 08:48:56.187777 7620 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="159a26f5-3cfc-4db2-88e9-bff5d8a613fc" volumeName="kubernetes.io/projected/159a26f5-3cfc-4db2-88e9-bff5d8a613fc-kube-api-access-9hxtz" seLinuxMountContext=""
Mar 18 08:48:56.189926 master-0 kubenswrapper[7620]: I0318 08:48:56.187792 7620 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2207df9e-f21e-4c30-98d5-248ae99c245e" volumeName="kubernetes.io/configmap/2207df9e-f21e-4c30-98d5-248ae99c245e-ovnkube-script-lib" seLinuxMountContext=""
Mar 18 08:48:56.189926 master-0 kubenswrapper[7620]: I0318 08:48:56.187810 7620 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7962fb40-1170-4c00-b1bf-92966aeae807" volumeName="kubernetes.io/projected/7962fb40-1170-4c00-b1bf-92966aeae807-bound-sa-token" seLinuxMountContext=""
Mar 18 08:48:56.189926 master-0 kubenswrapper[7620]: I0318 08:48:56.187823 7620 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="94e9e4fc-bfaa-4ba5-ad6d-fe76b91932e9" volumeName="kubernetes.io/projected/94e9e4fc-bfaa-4ba5-ad6d-fe76b91932e9-kube-api-access-tk9jq" seLinuxMountContext=""
Mar 18 08:48:56.189926 master-0 kubenswrapper[7620]: I0318 08:48:56.187840 7620 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="edc7f629-4288-443b-aa8e-78bc6a09c848" volumeName="kubernetes.io/projected/edc7f629-4288-443b-aa8e-78bc6a09c848-kube-api-access-glt6c" seLinuxMountContext=""
Mar 18 08:48:56.189926 master-0 kubenswrapper[7620]: I0318 08:48:56.187876 7620 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="260c8aa5-a288-4ee8-b671-f97e90a2f39c" volumeName="kubernetes.io/secret/260c8aa5-a288-4ee8-b671-f97e90a2f39c-serving-cert" seLinuxMountContext=""
Mar 18 08:48:56.189926 master-0 kubenswrapper[7620]: I0318 08:48:56.187891 7620 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4" volumeName="kubernetes.io/configmap/fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4-multus-daemon-config" seLinuxMountContext=""
Mar 18 08:48:56.189926 master-0 kubenswrapper[7620]: I0318 08:48:56.187906 7620 reconstruct.go:97] "Volume reconstruction finished"
Mar 18 08:48:56.189926 master-0 kubenswrapper[7620]: I0318 08:48:56.187917 7620 reconciler.go:26] "Reconciler: start to sync state"
Mar 18 08:48:56.198010 master-0 kubenswrapper[7620]: I0318 08:48:56.190344 7620 reconstruct.go:205] "DevicePaths of reconstructed volumes updated"
Mar 18 08:48:56.221018 master-0 kubenswrapper[7620]: I0318 08:48:56.220808 7620 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Mar 18 08:48:56.222672 master-0 kubenswrapper[7620]: I0318 08:48:56.222605 7620 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Mar 18 08:48:56.222748 master-0 kubenswrapper[7620]: I0318 08:48:56.222681 7620 status_manager.go:217] "Starting to sync pod status with apiserver"
Mar 18 08:48:56.222949 master-0 kubenswrapper[7620]: I0318 08:48:56.222926 7620 kubelet.go:2335] "Starting kubelet main sync loop"
Mar 18 08:48:56.223001 master-0 kubenswrapper[7620]: E0318 08:48:56.222982 7620 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Mar 18 08:48:56.226689 master-0 kubenswrapper[7620]: I0318 08:48:56.226655 7620 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160
Mar 18 08:48:56.236457 master-0 kubenswrapper[7620]: I0318 08:48:56.236412 7620 generic.go:334] "Generic (PLEG): container finished" podID="f9fa104a-4979-4023-8d7e-a965f11bc7db" containerID="b90404fea2dcee705335febe9902c2cb9057e6f3ac0a9b235a9e5ecb1660d666" exitCode=0
Mar 18 08:48:56.236457 master-0 kubenswrapper[7620]: I0318 08:48:56.236451 7620 generic.go:334] "Generic (PLEG): container finished" podID="f9fa104a-4979-4023-8d7e-a965f11bc7db" containerID="5ff838c2d5ef301a4d391cdf94caa10d8ed9cf1ecae148154167ecb368e38ae1" exitCode=0
Mar 18 08:48:56.236457 master-0 kubenswrapper[7620]: I0318 08:48:56.236459 7620 generic.go:334] "Generic (PLEG): container finished" podID="f9fa104a-4979-4023-8d7e-a965f11bc7db" containerID="2d0a2c2dc41ce3fdaa0eb263dbdcc431c85c8b6b65a032320a020b41e4119800" exitCode=0
Mar 18 08:48:56.236673 master-0 kubenswrapper[7620]: I0318 08:48:56.236470 7620 generic.go:334] "Generic (PLEG): container finished" podID="f9fa104a-4979-4023-8d7e-a965f11bc7db" containerID="4e7c826e1670b530a9fd33f7eb549f98d247eb166d6206beef67f781b2a470af" exitCode=0
Mar 18 08:48:56.236673 master-0 kubenswrapper[7620]: I0318 08:48:56.236479 7620 generic.go:334] "Generic (PLEG): container finished" podID="f9fa104a-4979-4023-8d7e-a965f11bc7db" containerID="087da5f6d44511af7f32a791cdbe22a09cb7c15552db037f0bacb605d9163341" exitCode=0
Mar 18 08:48:56.236673 master-0 kubenswrapper[7620]: I0318 08:48:56.236488 7620 generic.go:334] "Generic (PLEG): container finished" podID="f9fa104a-4979-4023-8d7e-a965f11bc7db" containerID="adde235643fbff8c27e9f475aac6b49079f9d822aa89abb8fde8b8cfe9cfc68c" exitCode=0
Mar 18 08:48:56.238931 master-0 kubenswrapper[7620]: I0318 08:48:56.238840 7620 generic.go:334] "Generic (PLEG): container finished" podID="49fac1b46a11e49501805e891baae4a9" containerID="f2d4d2d49e0c856fff93c30b0d719c8529754ea148952a7ef6bb3db593f16a16" exitCode=0
Mar 18 08:48:56.243336 master-0 kubenswrapper[7620]: I0318 08:48:56.243300 7620 generic.go:334] "Generic (PLEG): container finished" podID="46f265536aba6292ead501bc9b49f327" containerID="cae6edc05ec437bf1216d8818e262c95bff15d2f9aa2f76f2a55bc0b5ab23801" exitCode=1
Mar 18 08:48:56.255235 master-0 kubenswrapper[7620]: I0318 08:48:56.255198 7620 generic.go:334] "Generic (PLEG): container finished" podID="97215428-2d5d-460f-947c-f2a490bc428d" containerID="af45d378024ee7c220ba697e8109094cfb054515091d9efd5c22113a8f02ec12" exitCode=0
Mar 18 08:48:56.260637 master-0 kubenswrapper[7620]: I0318 08:48:56.260605 7620 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_1249822f86f23526277d165c0d5d3c19/kube-rbac-proxy-crio/2.log"
Mar 18 08:48:56.261319 master-0 kubenswrapper[7620]: I0318 08:48:56.261285 7620 generic.go:334] "Generic (PLEG): container finished" podID="1249822f86f23526277d165c0d5d3c19" containerID="65e224202ac926a558f67bd7907be94c9b8d61e87724e521620bd2b30bc9d0dc" exitCode=1
Mar 18 08:48:56.261319 master-0 kubenswrapper[7620]: I0318 08:48:56.261317 7620 generic.go:334] "Generic (PLEG): container finished" podID="1249822f86f23526277d165c0d5d3c19" containerID="60b7a6828ff9115f3e360da4ea3b39ddb71f9d86fc37454c4e2b71253e2b011f" exitCode=0
Mar 18 08:48:56.263108 master-0 kubenswrapper[7620]: I0318 08:48:56.263081 7620 generic.go:334] "Generic (PLEG): container finished" podID="51cee994-bbd7-45f2-9757-c270d47c276a" containerID="51dc55afbcfce4c386c5bd0bc1deafcfc0ec711be4ef96fdaaef56b5f72c67a2" exitCode=0
Mar 18 08:48:56.277729 master-0 kubenswrapper[7620]: I0318 08:48:56.277616 7620 generic.go:334] "Generic (PLEG): container finished" podID="2207df9e-f21e-4c30-98d5-248ae99c245e" containerID="4ab7ce18ff8c455a08cc88d97fdc9cc8dc555138a8a11da35cc907f8c6e70d0d" exitCode=0
Mar 18 08:48:56.320290 master-0 kubenswrapper[7620]: I0318 08:48:56.320245 7620 manager.go:324] Recovery completed
Mar 18 08:48:56.323191 master-0 kubenswrapper[7620]: E0318 08:48:56.323159 7620 kubelet.go:2359] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Mar 18 08:48:56.364492 master-0 kubenswrapper[7620]: I0318 08:48:56.364388 7620 cpu_manager.go:225] "Starting CPU manager" policy="none"
Mar 18 08:48:56.364492 master-0 kubenswrapper[7620]: I0318 08:48:56.364413 7620 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s"
Mar 18 08:48:56.364492 master-0 kubenswrapper[7620]: I0318 08:48:56.364433 7620 state_mem.go:36] "Initialized new in-memory state store"
Mar 18 08:48:56.364701 master-0 kubenswrapper[7620]: I0318 08:48:56.364616 7620 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Mar 18 08:48:56.364701 master-0 kubenswrapper[7620]: I0318 08:48:56.364629 7620 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Mar 18 08:48:56.364701 master-0 kubenswrapper[7620]: I0318 08:48:56.364678 7620 state_checkpoint.go:136] "State checkpoint: restored state from checkpoint"
Mar 18 08:48:56.364701 master-0 kubenswrapper[7620]: I0318 08:48:56.364684 7620 state_checkpoint.go:137] "State checkpoint: defaultCPUSet" defaultCpuSet=""
Mar 18 08:48:56.364701 master-0 kubenswrapper[7620]: I0318 08:48:56.364691 7620 policy_none.go:49] "None policy: Start"
Mar 18 08:48:56.366489 master-0 kubenswrapper[7620]: I0318 08:48:56.366411 7620 memory_manager.go:170] "Starting memorymanager" policy="None"
Mar 18 08:48:56.366552 master-0 kubenswrapper[7620]: I0318 08:48:56.366531 7620 state_mem.go:35] "Initializing new in-memory state store"
Mar 18 08:48:56.367024 master-0 kubenswrapper[7620]: I0318 08:48:56.366992 7620 state_mem.go:75] "Updated machine memory state"
Mar 18 08:48:56.367078 master-0 kubenswrapper[7620]: I0318 08:48:56.367033 7620 state_checkpoint.go:82] "State checkpoint: restored state from checkpoint"
Mar 18 08:48:56.379287 master-0 kubenswrapper[7620]: I0318 08:48:56.379242 7620 manager.go:334] "Starting Device Plugin manager"
Mar 18 08:48:56.379426 master-0 kubenswrapper[7620]: I0318 08:48:56.379318 7620 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Mar 18 08:48:56.379426 master-0 kubenswrapper[7620]: I0318 08:48:56.379340 7620 server.go:79] "Starting device plugin registration server"
Mar 18 08:48:56.379932 master-0 kubenswrapper[7620]: I0318 08:48:56.379908 7620 eviction_manager.go:189] "Eviction manager: starting control loop"
Mar 18 08:48:56.379995 master-0 kubenswrapper[7620]: I0318 08:48:56.379934 7620 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Mar 18 08:48:56.380077 master-0 kubenswrapper[7620]: I0318 08:48:56.380060 7620 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry"
Mar 18 08:48:56.380159 master-0 kubenswrapper[7620]: I0318 08:48:56.380143 7620 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts"
Mar 18 08:48:56.380159 master-0 kubenswrapper[7620]: I0318 08:48:56.380154 7620 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Mar 18 08:48:56.481383 master-0 kubenswrapper[7620]: I0318 08:48:56.481344 7620 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 18 08:48:56.484560 master-0 kubenswrapper[7620]: I0318 08:48:56.483519 7620 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 18 08:48:56.484560 master-0 kubenswrapper[7620]: I0318 08:48:56.483552 7620 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 18 08:48:56.484560 master-0 kubenswrapper[7620]: I0318 08:48:56.483563 7620 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 18 08:48:56.484560 master-0 kubenswrapper[7620]: I0318 08:48:56.483613 7620 kubelet_node_status.go:76] "Attempting to register node" node="master-0"
Mar 18 08:48:56.493447 master-0 kubenswrapper[7620]: I0318 08:48:56.492603 7620 kubelet_node_status.go:115] "Node was previously registered" node="master-0"
Mar 18 08:48:56.493447 master-0 kubenswrapper[7620]: I0318 08:48:56.492900 7620 kubelet_node_status.go:79] "Successfully registered node" node="master-0"
Mar 18 08:48:56.523864 master-0 kubenswrapper[7620]: I0318 08:48:56.523749 7620 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-etcd/etcd-master-0-master-0","openshift-kube-apiserver/bootstrap-kube-apiserver-master-0","kube-system/bootstrap-kube-controller-manager-master-0","kube-system/bootstrap-kube-scheduler-master-0","openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"]
Mar 18 08:48:56.524761 master-0 kubenswrapper[7620]: I0318 08:48:56.524657 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" event={"ID":"49fac1b46a11e49501805e891baae4a9","Type":"ContainerStarted","Data":"b0564925d47f5840821e3c795a9cfcae45b42d4975ada3f3aedc3639ab59cfb5"}
Mar 18 08:48:56.524761 master-0 kubenswrapper[7620]: I0318 08:48:56.524752 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
event={"ID":"49fac1b46a11e49501805e891baae4a9","Type":"ContainerStarted","Data":"5ec3e7108eee8c08ca66f6f618d1955dea098f10f4832f7e925bd7f46bce001f"} Mar 18 08:48:56.524838 master-0 kubenswrapper[7620]: I0318 08:48:56.524792 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" event={"ID":"49fac1b46a11e49501805e891baae4a9","Type":"ContainerDied","Data":"f2d4d2d49e0c856fff93c30b0d719c8529754ea148952a7ef6bb3db593f16a16"} Mar 18 08:48:56.524838 master-0 kubenswrapper[7620]: I0318 08:48:56.524807 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" event={"ID":"49fac1b46a11e49501805e891baae4a9","Type":"ContainerStarted","Data":"bd1fd64f6f95cdc3189bd097dac24d4300572f6ab92c972496e95007ac8e621a"} Mar 18 08:48:56.524838 master-0 kubenswrapper[7620]: I0318 08:48:56.524819 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"46f265536aba6292ead501bc9b49f327","Type":"ContainerStarted","Data":"6be6b0de4a5d0386d8a94651962cc0001d3124e6eb513e3b68435d030ea24841"} Mar 18 08:48:56.524838 master-0 kubenswrapper[7620]: I0318 08:48:56.524831 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"46f265536aba6292ead501bc9b49f327","Type":"ContainerStarted","Data":"a7d9a0f0d0d5483aab47d6b6dab06e78206dae99e396716861bbffb1ded6479d"} Mar 18 08:48:56.524838 master-0 kubenswrapper[7620]: I0318 08:48:56.524842 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"46f265536aba6292ead501bc9b49f327","Type":"ContainerDied","Data":"cae6edc05ec437bf1216d8818e262c95bff15d2f9aa2f76f2a55bc0b5ab23801"} Mar 18 08:48:56.525008 master-0 kubenswrapper[7620]: I0318 08:48:56.524881 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"46f265536aba6292ead501bc9b49f327","Type":"ContainerStarted","Data":"4d17f4a7fe14a2a472c626baa31e2712ee04373a3644e0529ddf244e8afaa854"} Mar 18 08:48:56.525008 master-0 kubenswrapper[7620]: I0318 08:48:56.524911 7620 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c86f0daa1af8b571957ffb1df5a750b21d97fe93761c60692060e0a17515fcbd" Mar 18 08:48:56.525008 master-0 kubenswrapper[7620]: I0318 08:48:56.524926 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"1249822f86f23526277d165c0d5d3c19","Type":"ContainerStarted","Data":"128a5d65976993628d981fee7385d5588c74fc7f9ab0a6e9bb3f72584d42ed3d"} Mar 18 08:48:56.525008 master-0 kubenswrapper[7620]: I0318 08:48:56.524937 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"1249822f86f23526277d165c0d5d3c19","Type":"ContainerDied","Data":"65e224202ac926a558f67bd7907be94c9b8d61e87724e521620bd2b30bc9d0dc"} Mar 18 08:48:56.525008 master-0 kubenswrapper[7620]: I0318 08:48:56.524972 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"1249822f86f23526277d165c0d5d3c19","Type":"ContainerDied","Data":"60b7a6828ff9115f3e360da4ea3b39ddb71f9d86fc37454c4e2b71253e2b011f"} Mar 18 08:48:56.525008 master-0 kubenswrapper[7620]: I0318 08:48:56.524982 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"1249822f86f23526277d165c0d5d3c19","Type":"ContainerStarted","Data":"65a818ad31dbd4fa7bc3752867fcfb68d605bd15a5390e756d551630b2da7bfb"} Mar 18 08:48:56.525008 master-0 kubenswrapper[7620]: I0318 08:48:56.524997 7620 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="e405be03e85526b1d05a9e6638d9433f5fcf432c4e04e5890d5bc45664d267c7" Mar 18 08:48:56.525008 master-0 kubenswrapper[7620]: I0318 08:48:56.525008 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0-master-0" event={"ID":"d664a6d0d2a24360dee10612610f1b59","Type":"ContainerStarted","Data":"9800e6635085398983100da46b5c98be777ae33c91aaadd0c04fcadcfe49593f"} Mar 18 08:48:56.525376 master-0 kubenswrapper[7620]: I0318 08:48:56.525075 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0-master-0" event={"ID":"d664a6d0d2a24360dee10612610f1b59","Type":"ContainerStarted","Data":"a59e8ee01c3a8fb148407d497fd43107751c8a2b3e30b228b085568e5f8dd0de"} Mar 18 08:48:56.525376 master-0 kubenswrapper[7620]: I0318 08:48:56.525091 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0-master-0" event={"ID":"d664a6d0d2a24360dee10612610f1b59","Type":"ContainerStarted","Data":"c10d1b81b0a7054da8fb12459aa720b7916f5484be5a832bdacdc31fad36d2cc"} Mar 18 08:48:56.525376 master-0 kubenswrapper[7620]: I0318 08:48:56.525163 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-scheduler-master-0" event={"ID":"c83737980b9ee109184b1d78e942cf36","Type":"ContainerStarted","Data":"56c1813fc6a99c6be68188fda55c9aa95683f9493caa43861ba04693d0ba89d2"} Mar 18 08:48:56.525376 master-0 kubenswrapper[7620]: I0318 08:48:56.525181 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-scheduler-master-0" event={"ID":"c83737980b9ee109184b1d78e942cf36","Type":"ContainerStarted","Data":"dda73eca8049d85d927941d52bde4240cdb56ba2b8f10407c2247ac72190f9f1"} Mar 18 08:48:56.525376 master-0 kubenswrapper[7620]: I0318 08:48:56.525196 7620 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6f7e091cc6956264f5530fa4606adc44124201440fb69d366bae9e4dd97d842f" Mar 18 08:48:56.536198 master-0 kubenswrapper[7620]: W0318 
08:48:56.536154 7620 warnings.go:70] would violate PodSecurity "restricted:latest": host namespaces (hostNetwork=true), hostPort (container "etcd" uses hostPorts 2379, 2380), privileged (containers "etcdctl", "etcd" must not set securityContext.privileged=true), allowPrivilegeEscalation != false (containers "etcdctl", "etcd" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (containers "etcdctl", "etcd" must set securityContext.capabilities.drop=["ALL"]), restricted volume types (volumes "certs", "data-dir" use restricted volume type "hostPath"), runAsNonRoot != true (pod or containers "etcdctl", "etcd" must set securityContext.runAsNonRoot=true), seccompProfile (pod or containers "etcdctl", "etcd" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost") Mar 18 08:48:56.536343 master-0 kubenswrapper[7620]: E0318 08:48:56.536250 7620 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"etcd-master-0-master-0\" already exists" pod="openshift-etcd/etcd-master-0-master-0" Mar 18 08:48:56.539309 master-0 kubenswrapper[7620]: E0318 08:48:56.539280 7620 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"bootstrap-kube-controller-manager-master-0\" already exists" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 18 08:48:56.539500 master-0 kubenswrapper[7620]: E0318 08:48:56.539463 7620 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"bootstrap-kube-apiserver-master-0\" already exists" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 18 08:48:56.539637 master-0 kubenswrapper[7620]: E0318 08:48:56.539595 7620 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-rbac-proxy-crio-master-0\" already exists" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Mar 18 08:48:56.539927 master-0 kubenswrapper[7620]: E0318 08:48:56.539901 7620 kubelet.go:1929] "Failed creating a mirror pod for" err="pods 
\"bootstrap-kube-scheduler-master-0\" already exists" pod="kube-system/bootstrap-kube-scheduler-master-0" Mar 18 08:48:56.592580 master-0 kubenswrapper[7620]: I0318 08:48:56.592523 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/host-path/d664a6d0d2a24360dee10612610f1b59-certs\") pod \"etcd-master-0-master-0\" (UID: \"d664a6d0d2a24360dee10612610f1b59\") " pod="openshift-etcd/etcd-master-0-master-0" Mar 18 08:48:56.592580 master-0 kubenswrapper[7620]: I0318 08:48:56.592569 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-secrets\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 18 08:48:56.592786 master-0 kubenswrapper[7620]: I0318 08:48:56.592604 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-config\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 18 08:48:56.592786 master-0 kubenswrapper[7620]: I0318 08:48:56.592703 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-ssl-certs-host\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"46f265536aba6292ead501bc9b49f327\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 18 08:48:56.592786 master-0 kubenswrapper[7620]: I0318 08:48:56.592766 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-logs\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"46f265536aba6292ead501bc9b49f327\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 18 08:48:56.592900 master-0 kubenswrapper[7620]: I0318 08:48:56.592811 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/c83737980b9ee109184b1d78e942cf36-secrets\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"c83737980b9ee109184b1d78e942cf36\") " pod="kube-system/bootstrap-kube-scheduler-master-0" Mar 18 08:48:56.592900 master-0 kubenswrapper[7620]: I0318 08:48:56.592838 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/1249822f86f23526277d165c0d5d3c19-etc-kube\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"1249822f86f23526277d165c0d5d3c19\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Mar 18 08:48:56.592900 master-0 kubenswrapper[7620]: I0318 08:48:56.592874 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-ssl-certs-host\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 18 08:48:56.593001 master-0 kubenswrapper[7620]: I0318 08:48:56.592904 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-secrets\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"46f265536aba6292ead501bc9b49f327\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 18 08:48:56.593001 master-0 kubenswrapper[7620]: I0318 
08:48:56.592926 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-etc-kubernetes-cloud\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"46f265536aba6292ead501bc9b49f327\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 18 08:48:56.593001 master-0 kubenswrapper[7620]: I0318 08:48:56.592943 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-config\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"46f265536aba6292ead501bc9b49f327\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 18 08:48:56.593001 master-0 kubenswrapper[7620]: I0318 08:48:56.592961 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/d664a6d0d2a24360dee10612610f1b59-data-dir\") pod \"etcd-master-0-master-0\" (UID: \"d664a6d0d2a24360dee10612610f1b59\") " pod="openshift-etcd/etcd-master-0-master-0" Mar 18 08:48:56.593001 master-0 kubenswrapper[7620]: I0318 08:48:56.592982 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-etc-kubernetes-cloud\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 18 08:48:56.593150 master-0 kubenswrapper[7620]: I0318 08:48:56.593009 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-logs\") pod \"bootstrap-kube-apiserver-master-0\" (UID: 
\"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 18 08:48:56.593150 master-0 kubenswrapper[7620]: I0318 08:48:56.593026 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/c83737980b9ee109184b1d78e942cf36-logs\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"c83737980b9ee109184b1d78e942cf36\") " pod="kube-system/bootstrap-kube-scheduler-master-0" Mar 18 08:48:56.593150 master-0 kubenswrapper[7620]: I0318 08:48:56.593066 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/1249822f86f23526277d165c0d5d3c19-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"1249822f86f23526277d165c0d5d3c19\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Mar 18 08:48:56.593150 master-0 kubenswrapper[7620]: I0318 08:48:56.593085 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-audit-dir\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 18 08:48:56.694201 master-0 kubenswrapper[7620]: I0318 08:48:56.694044 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/d664a6d0d2a24360dee10612610f1b59-data-dir\") pod \"etcd-master-0-master-0\" (UID: \"d664a6d0d2a24360dee10612610f1b59\") " pod="openshift-etcd/etcd-master-0-master-0" Mar 18 08:48:56.694201 master-0 kubenswrapper[7620]: I0318 08:48:56.694112 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: 
\"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-etc-kubernetes-cloud\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 18 08:48:56.694201 master-0 kubenswrapper[7620]: I0318 08:48:56.694135 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-logs\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 18 08:48:56.694605 master-0 kubenswrapper[7620]: I0318 08:48:56.694338 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/d664a6d0d2a24360dee10612610f1b59-data-dir\") pod \"etcd-master-0-master-0\" (UID: \"d664a6d0d2a24360dee10612610f1b59\") " pod="openshift-etcd/etcd-master-0-master-0" Mar 18 08:48:56.694693 master-0 kubenswrapper[7620]: I0318 08:48:56.694573 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-logs\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 18 08:48:56.694789 master-0 kubenswrapper[7620]: I0318 08:48:56.694709 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-etc-kubernetes-cloud\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 18 08:48:56.694789 master-0 kubenswrapper[7620]: I0318 08:48:56.694760 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/host-path/c83737980b9ee109184b1d78e942cf36-logs\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"c83737980b9ee109184b1d78e942cf36\") " pod="kube-system/bootstrap-kube-scheduler-master-0" Mar 18 08:48:56.694996 master-0 kubenswrapper[7620]: I0318 08:48:56.694820 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/c83737980b9ee109184b1d78e942cf36-logs\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"c83737980b9ee109184b1d78e942cf36\") " pod="kube-system/bootstrap-kube-scheduler-master-0" Mar 18 08:48:56.694996 master-0 kubenswrapper[7620]: I0318 08:48:56.694903 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/1249822f86f23526277d165c0d5d3c19-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"1249822f86f23526277d165c0d5d3c19\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Mar 18 08:48:56.695122 master-0 kubenswrapper[7620]: I0318 08:48:56.695028 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/1249822f86f23526277d165c0d5d3c19-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"1249822f86f23526277d165c0d5d3c19\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Mar 18 08:48:56.695122 master-0 kubenswrapper[7620]: I0318 08:48:56.695102 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-audit-dir\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 18 08:48:56.695260 master-0 kubenswrapper[7620]: I0318 08:48:56.695134 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-ssl-certs-host\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"46f265536aba6292ead501bc9b49f327\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 18 08:48:56.695260 master-0 kubenswrapper[7620]: I0318 08:48:56.695152 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-logs\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"46f265536aba6292ead501bc9b49f327\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 18 08:48:56.695260 master-0 kubenswrapper[7620]: I0318 08:48:56.695170 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/host-path/d664a6d0d2a24360dee10612610f1b59-certs\") pod \"etcd-master-0-master-0\" (UID: \"d664a6d0d2a24360dee10612610f1b59\") " pod="openshift-etcd/etcd-master-0-master-0" Mar 18 08:48:56.695260 master-0 kubenswrapper[7620]: I0318 08:48:56.695190 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-secrets\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 18 08:48:56.695578 master-0 kubenswrapper[7620]: I0318 08:48:56.695255 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-ssl-certs-host\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"46f265536aba6292ead501bc9b49f327\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 18 08:48:56.695578 master-0 kubenswrapper[7620]: I0318 08:48:56.695341 7620 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-config\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 18 08:48:56.695578 master-0 kubenswrapper[7620]: I0318 08:48:56.695411 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-secrets\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"46f265536aba6292ead501bc9b49f327\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 18 08:48:56.695578 master-0 kubenswrapper[7620]: I0318 08:48:56.695460 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-etc-kubernetes-cloud\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"46f265536aba6292ead501bc9b49f327\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 18 08:48:56.695578 master-0 kubenswrapper[7620]: I0318 08:48:56.695483 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-secrets\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 18 08:48:56.695578 master-0 kubenswrapper[7620]: I0318 08:48:56.695571 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-etc-kubernetes-cloud\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"46f265536aba6292ead501bc9b49f327\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 18 08:48:56.696208 master-0 kubenswrapper[7620]: I0318 
08:48:56.695634 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-config\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"46f265536aba6292ead501bc9b49f327\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 18 08:48:56.696208 master-0 kubenswrapper[7620]: I0318 08:48:56.695686 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/c83737980b9ee109184b1d78e942cf36-secrets\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"c83737980b9ee109184b1d78e942cf36\") " pod="kube-system/bootstrap-kube-scheduler-master-0" Mar 18 08:48:56.696208 master-0 kubenswrapper[7620]: I0318 08:48:56.695682 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-audit-dir\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 18 08:48:56.696208 master-0 kubenswrapper[7620]: I0318 08:48:56.695778 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-config\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"46f265536aba6292ead501bc9b49f327\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 18 08:48:56.696208 master-0 kubenswrapper[7620]: I0318 08:48:56.695798 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/c83737980b9ee109184b1d78e942cf36-secrets\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"c83737980b9ee109184b1d78e942cf36\") " pod="kube-system/bootstrap-kube-scheduler-master-0" Mar 18 08:48:56.696208 master-0 kubenswrapper[7620]: I0318 08:48:56.695910 
7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/host-path/d664a6d0d2a24360dee10612610f1b59-certs\") pod \"etcd-master-0-master-0\" (UID: \"d664a6d0d2a24360dee10612610f1b59\") " pod="openshift-etcd/etcd-master-0-master-0" Mar 18 08:48:56.696208 master-0 kubenswrapper[7620]: I0318 08:48:56.695963 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-config\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 18 08:48:56.696208 master-0 kubenswrapper[7620]: I0318 08:48:56.696015 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-secrets\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"46f265536aba6292ead501bc9b49f327\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 18 08:48:56.696208 master-0 kubenswrapper[7620]: I0318 08:48:56.696073 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/1249822f86f23526277d165c0d5d3c19-etc-kube\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"1249822f86f23526277d165c0d5d3c19\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Mar 18 08:48:56.696208 master-0 kubenswrapper[7620]: I0318 08:48:56.696090 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-logs\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"46f265536aba6292ead501bc9b49f327\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 18 08:48:56.696208 master-0 kubenswrapper[7620]: I0318 08:48:56.696126 7620 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-ssl-certs-host\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 18 08:48:56.696208 master-0 kubenswrapper[7620]: I0318 08:48:56.696131 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/1249822f86f23526277d165c0d5d3c19-etc-kube\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"1249822f86f23526277d165c0d5d3c19\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Mar 18 08:48:56.696208 master-0 kubenswrapper[7620]: I0318 08:48:56.696224 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-ssl-certs-host\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 18 08:48:57.149876 master-0 kubenswrapper[7620]: I0318 08:48:57.149804 7620 apiserver.go:52] "Watching apiserver" Mar 18 08:48:57.166094 master-0 kubenswrapper[7620]: I0318 08:48:57.165997 7620 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Mar 18 08:48:57.167654 master-0 kubenswrapper[7620]: I0318 08:48:57.167578 7620 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-9p4bb","openshift-monitoring/cluster-monitoring-operator-58845fbb57-nc7hf","openshift-ovn-kubernetes/ovnkube-node-cxws9","openshift-service-ca-operator/service-ca-operator-b865698dc-g2lc8","openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-w4t7x","openshift-cluster-storage-operator/csi-snapshot-controller-operator-5f5d689c6b-j8kgj","openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-jshg7","openshift-network-operator/iptables-alerter-9mkgd","openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-swdsh","openshift-operator-lifecycle-manager/olm-operator-5c9796789-sl5kr","openshift-cluster-version/cluster-version-operator-56d8475767-2xjqg","openshift-image-registry/cluster-image-registry-operator-5549dc66cb-vxsth","openshift-multus/multus-admission-controller-5dbbb8b86f-2cf64","openshift-network-operator/network-operator-7bd846bfc4-5r5r4","kube-system/bootstrap-kube-scheduler-master-0","openshift-marketplace/marketplace-operator-89ccd998f-bcwsv","openshift-network-node-identity/network-node-identity-n5vqx","openshift-ingress-operator/ingress-operator-66b84d69b-7h94d","openshift-multus/network-metrics-daemon-6x85n","openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-tj9b9","openshift-config-operator/openshift-config-operator-95bf4f4d-7kfrh","openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-zqxv2","openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-r758j","openshift-etcd/etcd-master-0-master-0","openshift-multus/multus-bpf5c","openshift-network-diagnostics/network-check-target-8b7l7","assisted-installer/assisted-installer-controller-zq2ds","kube-system/bootstrap-kube-controller-manager-master-0","openshift-authentication-operator/authentication-operator-5885bfd7f4-5g8tz","openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-q8ff6","openshift-ovn-kuber
netes/ovnkube-control-plane-57f769d897-bwqt7","openshift-etcd-operator/etcd-operator-8544cbcf9c-f4jvq","openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-s7szg","openshift-multus/multus-additional-cni-plugins-xpzrz","openshift-machine-config-operator/kube-rbac-proxy-crio-master-0","openshift-dns-operator/dns-operator-9c5679d8f-b9pn7","openshift-kube-apiserver/bootstrap-kube-apiserver-master-0","openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-fxn82"] Mar 18 08:48:57.167953 master-0 kubenswrapper[7620]: I0318 08:48:57.167913 7620 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="assisted-installer/assisted-installer-controller-zq2ds" Mar 18 08:48:57.168048 master-0 kubenswrapper[7620]: I0318 08:48:57.167983 7620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-vxsth" Mar 18 08:48:57.168048 master-0 kubenswrapper[7620]: I0318 08:48:57.168047 7620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-nc7hf" Mar 18 08:48:57.169932 master-0 kubenswrapper[7620]: I0318 08:48:57.168961 7620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-sl5kr" Mar 18 08:48:57.169932 master-0 kubenswrapper[7620]: I0318 08:48:57.169016 7620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-66b84d69b-7h94d" Mar 18 08:48:57.169932 master-0 kubenswrapper[7620]: I0318 08:48:57.169806 7620 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-tj9b9" Mar 18 08:48:57.169932 master-0 kubenswrapper[7620]: I0318 08:48:57.173631 7620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-5dbbb8b86f-2cf64" Mar 18 08:48:57.169932 master-0 kubenswrapper[7620]: I0318 08:48:57.173940 7620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-9c5679d8f-b9pn7" Mar 18 08:48:57.169932 master-0 kubenswrapper[7620]: I0318 08:48:57.173930 7620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-q8ff6" Mar 18 08:48:57.181926 master-0 kubenswrapper[7620]: I0318 08:48:57.176248 7620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Mar 18 08:48:57.181926 master-0 kubenswrapper[7620]: I0318 08:48:57.176379 7620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Mar 18 08:48:57.181926 master-0 kubenswrapper[7620]: I0318 08:48:57.176437 7620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Mar 18 08:48:57.181926 master-0 kubenswrapper[7620]: I0318 08:48:57.176449 7620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Mar 18 08:48:57.181926 master-0 kubenswrapper[7620]: I0318 08:48:57.178057 7620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-swdsh" Mar 18 08:48:57.181926 master-0 kubenswrapper[7620]: I0318 08:48:57.178196 7620 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-6x85n" Mar 18 08:48:57.181926 master-0 kubenswrapper[7620]: I0318 08:48:57.178473 7620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-8b7l7" Mar 18 08:48:57.181926 master-0 kubenswrapper[7620]: I0318 08:48:57.178481 7620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Mar 18 08:48:57.181926 master-0 kubenswrapper[7620]: I0318 08:48:57.178679 7620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Mar 18 08:48:57.181926 master-0 kubenswrapper[7620]: I0318 08:48:57.179166 7620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-89ccd998f-bcwsv" Mar 18 08:48:57.181926 master-0 kubenswrapper[7620]: I0318 08:48:57.179221 7620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Mar 18 08:48:57.181926 master-0 kubenswrapper[7620]: I0318 08:48:57.179309 7620 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-56d8475767-2xjqg" Mar 18 08:48:57.181926 master-0 kubenswrapper[7620]: I0318 08:48:57.179569 7620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Mar 18 08:48:57.181926 master-0 kubenswrapper[7620]: I0318 08:48:57.179833 7620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Mar 18 08:48:57.181926 master-0 kubenswrapper[7620]: I0318 08:48:57.179879 7620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"openshift-service-ca.crt" Mar 18 08:48:57.181926 master-0 kubenswrapper[7620]: I0318 08:48:57.180034 7620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"cluster-monitoring-operator-tls" Mar 18 08:48:57.181926 master-0 kubenswrapper[7620]: I0318 08:48:57.180332 7620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Mar 18 08:48:57.181926 master-0 kubenswrapper[7620]: I0318 08:48:57.181503 7620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Mar 18 08:48:57.181926 master-0 kubenswrapper[7620]: I0318 08:48:57.181744 7620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Mar 18 08:48:57.181926 master-0 kubenswrapper[7620]: I0318 08:48:57.181743 7620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Mar 18 08:48:57.181926 master-0 kubenswrapper[7620]: I0318 08:48:57.181751 7620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Mar 18 08:48:57.181926 master-0 kubenswrapper[7620]: I0318 
08:48:57.181935 7620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Mar 18 08:48:57.181926 master-0 kubenswrapper[7620]: I0318 08:48:57.181950 7620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-root-ca.crt" Mar 18 08:48:57.183752 master-0 kubenswrapper[7620]: I0318 08:48:57.182209 7620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Mar 18 08:48:57.183752 master-0 kubenswrapper[7620]: I0318 08:48:57.183385 7620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Mar 18 08:48:57.185291 master-0 kubenswrapper[7620]: I0318 08:48:57.185228 7620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"openshift-service-ca.crt" Mar 18 08:48:57.192728 master-0 kubenswrapper[7620]: I0318 08:48:57.188666 7620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-node-tuning-operator"/"node-tuning-operator-tls" Mar 18 08:48:57.192728 master-0 kubenswrapper[7620]: I0318 08:48:57.189340 7620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemetry-config" Mar 18 08:48:57.192728 master-0 kubenswrapper[7620]: I0318 08:48:57.189368 7620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Mar 18 08:48:57.192728 master-0 kubenswrapper[7620]: I0318 08:48:57.189459 7620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Mar 18 08:48:57.204075 master-0 kubenswrapper[7620]: I0318 08:48:57.194423 7620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Mar 18 08:48:57.204075 master-0 kubenswrapper[7620]: I0318 
08:48:57.198551 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/2207df9e-f21e-4c30-98d5-248ae99c245e-log-socket\") pod \"ovnkube-node-cxws9\" (UID: \"2207df9e-f21e-4c30-98d5-248ae99c245e\") " pod="openshift-ovn-kubernetes/ovnkube-node-cxws9" Mar 18 08:48:57.204075 master-0 kubenswrapper[7620]: I0318 08:48:57.198611 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/2207df9e-f21e-4c30-98d5-248ae99c245e-ovn-node-metrics-cert\") pod \"ovnkube-node-cxws9\" (UID: \"2207df9e-f21e-4c30-98d5-248ae99c245e\") " pod="openshift-ovn-kubernetes/ovnkube-node-cxws9" Mar 18 08:48:57.204075 master-0 kubenswrapper[7620]: I0318 08:48:57.198640 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cj9fr\" (UniqueName: \"kubernetes.io/projected/2207df9e-f21e-4c30-98d5-248ae99c245e-kube-api-access-cj9fr\") pod \"ovnkube-node-cxws9\" (UID: \"2207df9e-f21e-4c30-98d5-248ae99c245e\") " pod="openshift-ovn-kubernetes/ovnkube-node-cxws9" Mar 18 08:48:57.204075 master-0 kubenswrapper[7620]: I0318 08:48:57.198667 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dfjmx\" (UniqueName: \"kubernetes.io/projected/772bc250-2e57-4ce0-883c-d44281fcb0be-kube-api-access-dfjmx\") pod \"openshift-controller-manager-operator-8c94f4649-r758j\" (UID: \"772bc250-2e57-4ce0-883c-d44281fcb0be\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-r758j" Mar 18 08:48:57.204075 master-0 kubenswrapper[7620]: I0318 08:48:57.198709 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l7lrl\" (UniqueName: \"kubernetes.io/projected/fc289a83-9a2e-404b-b148-605639362703-kube-api-access-l7lrl\") pod \"network-check-target-8b7l7\" 
(UID: \"fc289a83-9a2e-404b-b148-605639362703\") " pod="openshift-network-diagnostics/network-check-target-8b7l7" Mar 18 08:48:57.204075 master-0 kubenswrapper[7620]: I0318 08:48:57.198756 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b0280499-8277-46f0-bd8c-058a47a99e19-serving-cert\") pod \"service-ca-operator-b865698dc-g2lc8\" (UID: \"b0280499-8277-46f0-bd8c-058a47a99e19\") " pod="openshift-service-ca-operator/service-ca-operator-b865698dc-g2lc8" Mar 18 08:48:57.204075 master-0 kubenswrapper[7620]: I0318 08:48:57.198787 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/939efa41-8f40-4f91-bee4-0425aead9760-serving-cert\") pod \"etcd-operator-8544cbcf9c-f4jvq\" (UID: \"939efa41-8f40-4f91-bee4-0425aead9760\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-f4jvq" Mar 18 08:48:57.204075 master-0 kubenswrapper[7620]: I0318 08:48:57.198819 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2207df9e-f21e-4c30-98d5-248ae99c245e-var-lib-openvswitch\") pod \"ovnkube-node-cxws9\" (UID: \"2207df9e-f21e-4c30-98d5-248ae99c245e\") " pod="openshift-ovn-kubernetes/ovnkube-node-cxws9" Mar 18 08:48:57.204075 master-0 kubenswrapper[7620]: I0318 08:48:57.198879 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/939efa41-8f40-4f91-bee4-0425aead9760-config\") pod \"etcd-operator-8544cbcf9c-f4jvq\" (UID: \"939efa41-8f40-4f91-bee4-0425aead9760\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-f4jvq" Mar 18 08:48:57.204075 master-0 kubenswrapper[7620]: I0318 08:48:57.198914 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/c110b293-2c6b-496b-b015-23aada98cb4b-serving-cert\") pod \"authentication-operator-5885bfd7f4-5g8tz\" (UID: \"c110b293-2c6b-496b-b015-23aada98cb4b\") " pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-5g8tz" Mar 18 08:48:57.204075 master-0 kubenswrapper[7620]: I0318 08:48:57.198943 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4-system-cni-dir\") pod \"multus-bpf5c\" (UID: \"fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4\") " pod="openshift-multus/multus-bpf5c" Mar 18 08:48:57.204075 master-0 kubenswrapper[7620]: I0318 08:48:57.198969 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4-multus-conf-dir\") pod \"multus-bpf5c\" (UID: \"fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4\") " pod="openshift-multus/multus-bpf5c" Mar 18 08:48:57.204075 master-0 kubenswrapper[7620]: I0318 08:48:57.199357 7620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Mar 18 08:48:57.204075 master-0 kubenswrapper[7620]: I0318 08:48:57.199595 7620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Mar 18 08:48:57.204075 master-0 kubenswrapper[7620]: I0318 08:48:57.199892 7620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Mar 18 08:48:57.204075 master-0 kubenswrapper[7620]: I0318 08:48:57.200451 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vfjgn\" (UniqueName: \"kubernetes.io/projected/e2ade7e6-cecd-4e98-8f85-ea8219303d75-kube-api-access-vfjgn\") pod 
\"cluster-olm-operator-67dcd4998-zqxv2\" (UID: \"e2ade7e6-cecd-4e98-8f85-ea8219303d75\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-zqxv2" Mar 18 08:48:57.204075 master-0 kubenswrapper[7620]: I0318 08:48:57.200522 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s8prf\" (UniqueName: \"kubernetes.io/projected/fcf89a76-7a94-46d3-853e-68e986563764-kube-api-access-s8prf\") pod \"openshift-apiserver-operator-d65958b8-w4t7x\" (UID: \"fcf89a76-7a94-46d3-853e-68e986563764\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-w4t7x" Mar 18 08:48:57.204075 master-0 kubenswrapper[7620]: I0318 08:48:57.200634 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fcf89a76-7a94-46d3-853e-68e986563764-serving-cert\") pod \"openshift-apiserver-operator-d65958b8-w4t7x\" (UID: \"fcf89a76-7a94-46d3-853e-68e986563764\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-w4t7x" Mar 18 08:48:57.204075 master-0 kubenswrapper[7620]: I0318 08:48:57.200774 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/260c8aa5-a288-4ee8-b671-f97e90a2f39c-serving-cert\") pod \"kube-controller-manager-operator-ff989d6cc-fxn82\" (UID: \"260c8aa5-a288-4ee8-b671-f97e90a2f39c\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-fxn82" Mar 18 08:48:57.204075 master-0 kubenswrapper[7620]: I0318 08:48:57.200824 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pz26d\" (UniqueName: \"kubernetes.io/projected/b065df33-7911-456e-b3a2-1f8c8d53e053-kube-api-access-pz26d\") pod \"catalog-operator-68f85b4d6c-swdsh\" (UID: \"b065df33-7911-456e-b3a2-1f8c8d53e053\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-swdsh" Mar 18 
08:48:57.204075 master-0 kubenswrapper[7620]: I0318 08:48:57.200889 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d858bfd6-e69b-4c93-a6d7-95cc0fc3ca29-metrics-certs\") pod \"network-metrics-daemon-6x85n\" (UID: \"d858bfd6-e69b-4c93-a6d7-95cc0fc3ca29\") " pod="openshift-multus/network-metrics-daemon-6x85n" Mar 18 08:48:57.204075 master-0 kubenswrapper[7620]: I0318 08:48:57.200915 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2207df9e-f21e-4c30-98d5-248ae99c245e-etc-openvswitch\") pod \"ovnkube-node-cxws9\" (UID: \"2207df9e-f21e-4c30-98d5-248ae99c245e\") " pod="openshift-ovn-kubernetes/ovnkube-node-cxws9" Mar 18 08:48:57.204075 master-0 kubenswrapper[7620]: I0318 08:48:57.200961 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2207df9e-f21e-4c30-98d5-248ae99c245e-host-run-ovn-kubernetes\") pod \"ovnkube-node-cxws9\" (UID: \"2207df9e-f21e-4c30-98d5-248ae99c245e\") " pod="openshift-ovn-kubernetes/ovnkube-node-cxws9" Mar 18 08:48:57.204075 master-0 kubenswrapper[7620]: I0318 08:48:57.201003 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/260c8aa5-a288-4ee8-b671-f97e90a2f39c-serving-cert\") pod \"kube-controller-manager-operator-ff989d6cc-fxn82\" (UID: \"260c8aa5-a288-4ee8-b671-f97e90a2f39c\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-fxn82" Mar 18 08:48:57.204075 master-0 kubenswrapper[7620]: I0318 08:48:57.201060 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8a6ab2be-d018-4fd5-bfbb-6b88aec28663-serving-cert\") pod 
\"openshift-kube-scheduler-operator-dddff6458-9p4bb\" (UID: \"8a6ab2be-d018-4fd5-bfbb-6b88aec28663\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-9p4bb" Mar 18 08:48:57.204075 master-0 kubenswrapper[7620]: I0318 08:48:57.201085 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ec11012b-536a-422f-afc4-d2d0fd4b67fb-serving-cert\") pod \"kube-storage-version-migrator-operator-6bb5bfb6fd-s7szg\" (UID: \"ec11012b-536a-422f-afc4-d2d0fd4b67fb\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-s7szg" Mar 18 08:48:57.204075 master-0 kubenswrapper[7620]: I0318 08:48:57.201099 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c110b293-2c6b-496b-b015-23aada98cb4b-serving-cert\") pod \"authentication-operator-5885bfd7f4-5g8tz\" (UID: \"c110b293-2c6b-496b-b015-23aada98cb4b\") " pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-5g8tz" Mar 18 08:48:57.204075 master-0 kubenswrapper[7620]: I0318 08:48:57.201125 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/edc7f629-4288-443b-aa8e-78bc6a09c848-env-overrides\") pod \"ovnkube-control-plane-57f769d897-bwqt7\" (UID: \"edc7f629-4288-443b-aa8e-78bc6a09c848\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-bwqt7" Mar 18 08:48:57.204075 master-0 kubenswrapper[7620]: I0318 08:48:57.201186 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/34f2ec5e-cb68-415b-a9f2-5b7f10fa9bbe-marketplace-operator-metrics\") pod \"marketplace-operator-89ccd998f-bcwsv\" (UID: \"34f2ec5e-cb68-415b-a9f2-5b7f10fa9bbe\") " 
pod="openshift-marketplace/marketplace-operator-89ccd998f-bcwsv" Mar 18 08:48:57.204075 master-0 kubenswrapper[7620]: I0318 08:48:57.201225 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/f9fa104a-4979-4023-8d7e-a965f11bc7db-cni-binary-copy\") pod \"multus-additional-cni-plugins-xpzrz\" (UID: \"f9fa104a-4979-4023-8d7e-a965f11bc7db\") " pod="openshift-multus/multus-additional-cni-plugins-xpzrz" Mar 18 08:48:57.204075 master-0 kubenswrapper[7620]: I0318 08:48:57.201295 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/2207df9e-f21e-4c30-98d5-248ae99c245e-host-cni-bin\") pod \"ovnkube-node-cxws9\" (UID: \"2207df9e-f21e-4c30-98d5-248ae99c245e\") " pod="openshift-ovn-kubernetes/ovnkube-node-cxws9" Mar 18 08:48:57.204075 master-0 kubenswrapper[7620]: I0318 08:48:57.201322 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/7962fb40-1170-4c00-b1bf-92966aeae807-image-registry-operator-tls\") pod \"cluster-image-registry-operator-5549dc66cb-vxsth\" (UID: \"7962fb40-1170-4c00-b1bf-92966aeae807\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-vxsth" Mar 18 08:48:57.204075 master-0 kubenswrapper[7620]: I0318 08:48:57.201371 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2msp8\" (UniqueName: \"kubernetes.io/projected/34f2ec5e-cb68-415b-a9f2-5b7f10fa9bbe-kube-api-access-2msp8\") pod \"marketplace-operator-89ccd998f-bcwsv\" (UID: \"34f2ec5e-cb68-415b-a9f2-5b7f10fa9bbe\") " pod="openshift-marketplace/marketplace-operator-89ccd998f-bcwsv" Mar 18 08:48:57.204075 master-0 kubenswrapper[7620]: I0318 08:48:57.201415 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/939efa41-8f40-4f91-bee4-0425aead9760-etcd-service-ca\") pod \"etcd-operator-8544cbcf9c-f4jvq\" (UID: \"939efa41-8f40-4f91-bee4-0425aead9760\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-f4jvq" Mar 18 08:48:57.204075 master-0 kubenswrapper[7620]: I0318 08:48:57.201447 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/260c8aa5-a288-4ee8-b671-f97e90a2f39c-config\") pod \"kube-controller-manager-operator-ff989d6cc-fxn82\" (UID: \"260c8aa5-a288-4ee8-b671-f97e90a2f39c\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-fxn82" Mar 18 08:48:57.204075 master-0 kubenswrapper[7620]: I0318 08:48:57.201445 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8a6ab2be-d018-4fd5-bfbb-6b88aec28663-serving-cert\") pod \"openshift-kube-scheduler-operator-dddff6458-9p4bb\" (UID: \"8a6ab2be-d018-4fd5-bfbb-6b88aec28663\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-9p4bb" Mar 18 08:48:57.204075 master-0 kubenswrapper[7620]: I0318 08:48:57.201538 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9hxtz\" (UniqueName: \"kubernetes.io/projected/159a26f5-3cfc-4db2-88e9-bff5d8a613fc-kube-api-access-9hxtz\") pod \"multus-admission-controller-5dbbb8b86f-2cf64\" (UID: \"159a26f5-3cfc-4db2-88e9-bff5d8a613fc\") " pod="openshift-multus/multus-admission-controller-5dbbb8b86f-2cf64" Mar 18 08:48:57.204075 master-0 kubenswrapper[7620]: I0318 08:48:57.201609 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec11012b-536a-422f-afc4-d2d0fd4b67fb-config\") pod \"kube-storage-version-migrator-operator-6bb5bfb6fd-s7szg\" (UID: \"ec11012b-536a-422f-afc4-d2d0fd4b67fb\") " 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-s7szg" Mar 18 08:48:57.204075 master-0 kubenswrapper[7620]: I0318 08:48:57.201644 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x9w7l\" (UniqueName: \"kubernetes.io/projected/16d633c5-e0aa-4fb6-83e0-a2e976334406-kube-api-access-x9w7l\") pod \"network-node-identity-n5vqx\" (UID: \"16d633c5-e0aa-4fb6-83e0-a2e976334406\") " pod="openshift-network-node-identity/network-node-identity-n5vqx" Mar 18 08:48:57.204075 master-0 kubenswrapper[7620]: I0318 08:48:57.201678 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2207df9e-f21e-4c30-98d5-248ae99c245e-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-cxws9\" (UID: \"2207df9e-f21e-4c30-98d5-248ae99c245e\") " pod="openshift-ovn-kubernetes/ovnkube-node-cxws9" Mar 18 08:48:57.204075 master-0 kubenswrapper[7620]: I0318 08:48:57.201713 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4-multus-cni-dir\") pod \"multus-bpf5c\" (UID: \"fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4\") " pod="openshift-multus/multus-bpf5c" Mar 18 08:48:57.204075 master-0 kubenswrapper[7620]: I0318 08:48:57.201744 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8a6ab2be-d018-4fd5-bfbb-6b88aec28663-config\") pod \"openshift-kube-scheduler-operator-dddff6458-9p4bb\" (UID: \"8a6ab2be-d018-4fd5-bfbb-6b88aec28663\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-9p4bb" Mar 18 08:48:57.204075 master-0 kubenswrapper[7620]: I0318 08:48:57.201777 7620 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-svdhs\" (UniqueName: \"kubernetes.io/projected/ec11012b-536a-422f-afc4-d2d0fd4b67fb-kube-api-access-svdhs\") pod \"kube-storage-version-migrator-operator-6bb5bfb6fd-s7szg\" (UID: \"ec11012b-536a-422f-afc4-d2d0fd4b67fb\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-s7szg"
Mar 18 08:48:57.204075 master-0 kubenswrapper[7620]: I0318 08:48:57.201813 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/260c8aa5-a288-4ee8-b671-f97e90a2f39c-config\") pod \"kube-controller-manager-operator-ff989d6cc-fxn82\" (UID: \"260c8aa5-a288-4ee8-b671-f97e90a2f39c\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-fxn82"
Mar 18 08:48:57.204075 master-0 kubenswrapper[7620]: I0318 08:48:57.201815 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/34f2ec5e-cb68-415b-a9f2-5b7f10fa9bbe-marketplace-trusted-ca\") pod \"marketplace-operator-89ccd998f-bcwsv\" (UID: \"34f2ec5e-cb68-415b-a9f2-5b7f10fa9bbe\") " pod="openshift-marketplace/marketplace-operator-89ccd998f-bcwsv"
Mar 18 08:48:57.204075 master-0 kubenswrapper[7620]: I0318 08:48:57.201931 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4-cnibin\") pod \"multus-bpf5c\" (UID: \"fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4\") " pod="openshift-multus/multus-bpf5c"
Mar 18 08:48:57.204075 master-0 kubenswrapper[7620]: I0318 08:48:57.202067 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/573d3a02-e395-4816-963a-cd614ef53f75-available-featuregates\") pod \"openshift-config-operator-95bf4f4d-7kfrh\" (UID: \"573d3a02-e395-4816-963a-cd614ef53f75\") " pod="openshift-config-operator/openshift-config-operator-95bf4f4d-7kfrh"
Mar 18 08:48:57.204075 master-0 kubenswrapper[7620]: I0318 08:48:57.202094 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/2207df9e-f21e-4c30-98d5-248ae99c245e-node-log\") pod \"ovnkube-node-cxws9\" (UID: \"2207df9e-f21e-4c30-98d5-248ae99c245e\") " pod="openshift-ovn-kubernetes/ovnkube-node-cxws9"
Mar 18 08:48:57.204075 master-0 kubenswrapper[7620]: I0318 08:48:57.202157 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8a6ab2be-d018-4fd5-bfbb-6b88aec28663-config\") pod \"openshift-kube-scheduler-operator-dddff6458-9p4bb\" (UID: \"8a6ab2be-d018-4fd5-bfbb-6b88aec28663\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-9p4bb"
Mar 18 08:48:57.204075 master-0 kubenswrapper[7620]: I0318 08:48:57.202182 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/573d3a02-e395-4816-963a-cd614ef53f75-available-featuregates\") pod \"openshift-config-operator-95bf4f4d-7kfrh\" (UID: \"573d3a02-e395-4816-963a-cd614ef53f75\") " pod="openshift-config-operator/openshift-config-operator-95bf4f4d-7kfrh"
Mar 18 08:48:57.204075 master-0 kubenswrapper[7620]: I0318 08:48:57.202243 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x6zq8\" (UniqueName: \"kubernetes.io/projected/d858bfd6-e69b-4c93-a6d7-95cc0fc3ca29-kube-api-access-x6zq8\") pod \"network-metrics-daemon-6x85n\" (UID: \"d858bfd6-e69b-4c93-a6d7-95cc0fc3ca29\") " pod="openshift-multus/network-metrics-daemon-6x85n"
Mar 18 08:48:57.204075 master-0 kubenswrapper[7620]: I0318 08:48:57.202289 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4-cni-binary-copy\") pod \"multus-bpf5c\" (UID: \"fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4\") " pod="openshift-multus/multus-bpf5c"
Mar 18 08:48:57.204075 master-0 kubenswrapper[7620]: I0318 08:48:57.202473 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/772bc250-2e57-4ce0-883c-d44281fcb0be-serving-cert\") pod \"openshift-controller-manager-operator-8c94f4649-r758j\" (UID: \"772bc250-2e57-4ce0-883c-d44281fcb0be\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-r758j"
Mar 18 08:48:57.204075 master-0 kubenswrapper[7620]: I0318 08:48:57.202505 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/772bc250-2e57-4ce0-883c-d44281fcb0be-config\") pod \"openshift-controller-manager-operator-8c94f4649-r758j\" (UID: \"772bc250-2e57-4ce0-883c-d44281fcb0be\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-r758j"
Mar 18 08:48:57.204075 master-0 kubenswrapper[7620]: I0318 08:48:57.202531 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/260c8aa5-a288-4ee8-b671-f97e90a2f39c-kube-api-access\") pod \"kube-controller-manager-operator-ff989d6cc-fxn82\" (UID: \"260c8aa5-a288-4ee8-b671-f97e90a2f39c\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-fxn82"
Mar 18 08:48:57.204075 master-0 kubenswrapper[7620]: I0318 08:48:57.202559 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/2207df9e-f21e-4c30-98d5-248ae99c245e-host-run-netns\") pod \"ovnkube-node-cxws9\" (UID: \"2207df9e-f21e-4c30-98d5-248ae99c245e\") " pod="openshift-ovn-kubernetes/ovnkube-node-cxws9"
Mar 18 08:48:57.204075 master-0 kubenswrapper[7620]: I0318 08:48:57.202588 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4-host-run-k8s-cni-cncf-io\") pod \"multus-bpf5c\" (UID: \"fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4\") " pod="openshift-multus/multus-bpf5c"
Mar 18 08:48:57.204075 master-0 kubenswrapper[7620]: I0318 08:48:57.202617 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7962fb40-1170-4c00-b1bf-92966aeae807-bound-sa-token\") pod \"cluster-image-registry-operator-5549dc66cb-vxsth\" (UID: \"7962fb40-1170-4c00-b1bf-92966aeae807\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-vxsth"
Mar 18 08:48:57.204075 master-0 kubenswrapper[7620]: I0318 08:48:57.202643 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/3d9fe248-ba87-47e3-911a-1b2b112b5683-srv-cert\") pod \"olm-operator-5c9796789-sl5kr\" (UID: \"3d9fe248-ba87-47e3-911a-1b2b112b5683\") " pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-sl5kr"
Mar 18 08:48:57.204075 master-0 kubenswrapper[7620]: I0318 08:48:57.203090 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/07a4fd92-0fd1-4688-b2db-de615d75971e-host-etc-kube\") pod \"network-operator-7bd846bfc4-5r5r4\" (UID: \"07a4fd92-0fd1-4688-b2db-de615d75971e\") " pod="openshift-network-operator/network-operator-7bd846bfc4-5r5r4"
Mar 18 08:48:57.204075 master-0 kubenswrapper[7620]: I0318 08:48:57.203124 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b0280499-8277-46f0-bd8c-058a47a99e19-config\") pod \"service-ca-operator-b865698dc-g2lc8\" (UID: \"b0280499-8277-46f0-bd8c-058a47a99e19\") " pod="openshift-service-ca-operator/service-ca-operator-b865698dc-g2lc8"
Mar 18 08:48:57.204075 master-0 kubenswrapper[7620]: I0318 08:48:57.203154 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jlwg9\" (UniqueName: \"kubernetes.io/projected/f9fa104a-4979-4023-8d7e-a965f11bc7db-kube-api-access-jlwg9\") pod \"multus-additional-cni-plugins-xpzrz\" (UID: \"f9fa104a-4979-4023-8d7e-a965f11bc7db\") " pod="openshift-multus/multus-additional-cni-plugins-xpzrz"
Mar 18 08:48:57.204075 master-0 kubenswrapper[7620]: I0318 08:48:57.203350 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c110b293-2c6b-496b-b015-23aada98cb4b-service-ca-bundle\") pod \"authentication-operator-5885bfd7f4-5g8tz\" (UID: \"c110b293-2c6b-496b-b015-23aada98cb4b\") " pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-5g8tz"
Mar 18 08:48:57.204075 master-0 kubenswrapper[7620]: I0318 08:48:57.203394 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/2207df9e-f21e-4c30-98d5-248ae99c245e-host-kubelet\") pod \"ovnkube-node-cxws9\" (UID: \"2207df9e-f21e-4c30-98d5-248ae99c245e\") " pod="openshift-ovn-kubernetes/ovnkube-node-cxws9"
Mar 18 08:48:57.204075 master-0 kubenswrapper[7620]: I0318 08:48:57.203421 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4-os-release\") pod \"multus-bpf5c\" (UID: \"fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4\") " pod="openshift-multus/multus-bpf5c"
Mar 18 08:48:57.204075 master-0 kubenswrapper[7620]: I0318 08:48:57.203436 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b0280499-8277-46f0-bd8c-058a47a99e19-config\") pod \"service-ca-operator-b865698dc-g2lc8\" (UID: \"b0280499-8277-46f0-bd8c-058a47a99e19\") " pod="openshift-service-ca-operator/service-ca-operator-b865698dc-g2lc8"
Mar 18 08:48:57.204075 master-0 kubenswrapper[7620]: I0318 08:48:57.203444 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/e025d334-20e7-491f-8027-194251398747-metrics-tls\") pod \"dns-operator-9c5679d8f-b9pn7\" (UID: \"e025d334-20e7-491f-8027-194251398747\") " pod="openshift-dns-operator/dns-operator-9c5679d8f-b9pn7"
Mar 18 08:48:57.204075 master-0 kubenswrapper[7620]: I0318 08:48:57.203502 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3d0b7f60-c32e-48a6-b9e9-87c8f018367d-serving-cert\") pod \"cluster-version-operator-56d8475767-2xjqg\" (UID: \"3d0b7f60-c32e-48a6-b9e9-87c8f018367d\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-2xjqg"
Mar 18 08:48:57.204075 master-0 kubenswrapper[7620]: I0318 08:48:57.203533 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/e7b72267-fc08-41ed-a92b-9fca7372aba6-telemetry-config\") pod \"cluster-monitoring-operator-58845fbb57-nc7hf\" (UID: \"e7b72267-fc08-41ed-a92b-9fca7372aba6\") " pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-nc7hf"
Mar 18 08:48:57.204075 master-0 kubenswrapper[7620]: I0318 08:48:57.203583 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wxxcn\" (UniqueName: \"kubernetes.io/projected/6fb1f871-9c24-48a1-a15a-a636b5bb687d-kube-api-access-wxxcn\") pod \"csi-snapshot-controller-operator-5f5d689c6b-j8kgj\" (UID: \"6fb1f871-9c24-48a1-a15a-a636b5bb687d\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-5f5d689c6b-j8kgj"
Mar 18 08:48:57.204075 master-0 kubenswrapper[7620]: I0318 08:48:57.203681 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c110b293-2c6b-496b-b015-23aada98cb4b-service-ca-bundle\") pod \"authentication-operator-5885bfd7f4-5g8tz\" (UID: \"c110b293-2c6b-496b-b015-23aada98cb4b\") " pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-5g8tz"
Mar 18 08:48:57.204075 master-0 kubenswrapper[7620]: I0318 08:48:57.203754 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/3d0b7f60-c32e-48a6-b9e9-87c8f018367d-etc-ssl-certs\") pod \"cluster-version-operator-56d8475767-2xjqg\" (UID: \"3d0b7f60-c32e-48a6-b9e9-87c8f018367d\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-2xjqg"
Mar 18 08:48:57.204075 master-0 kubenswrapper[7620]: I0318 08:48:57.203814 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/edc7f629-4288-443b-aa8e-78bc6a09c848-ovnkube-config\") pod \"ovnkube-control-plane-57f769d897-bwqt7\" (UID: \"edc7f629-4288-443b-aa8e-78bc6a09c848\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-bwqt7"
Mar 18 08:48:57.204075 master-0 kubenswrapper[7620]: I0318 08:48:57.203918 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/f9fa104a-4979-4023-8d7e-a965f11bc7db-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-xpzrz\" (UID: \"f9fa104a-4979-4023-8d7e-a965f11bc7db\") " pod="openshift-multus/multus-additional-cni-plugins-xpzrz"
Mar 18 08:48:57.204075 master-0 kubenswrapper[7620]: I0318 08:48:57.203996 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hpl2c\" (UniqueName: \"kubernetes.io/projected/fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4-kube-api-access-hpl2c\") pod \"multus-bpf5c\" (UID: \"fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4\") " pod="openshift-multus/multus-bpf5c"
Mar 18 08:48:57.204075 master-0 kubenswrapper[7620]: I0318 08:48:57.204092 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/59d50dd5-6793-4f96-a769-31e086ecc7e4-package-server-manager-serving-cert\") pod \"package-server-manager-7b95f86987-q8ff6\" (UID: \"59d50dd5-6793-4f96-a769-31e086ecc7e4\") " pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-q8ff6"
Mar 18 08:48:57.204075 master-0 kubenswrapper[7620]: I0318 08:48:57.204135 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/94e9e4fc-bfaa-4ba5-ad6d-fe76b91932e9-bound-sa-token\") pod \"ingress-operator-66b84d69b-7h94d\" (UID: \"94e9e4fc-bfaa-4ba5-ad6d-fe76b91932e9\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-7h94d"
Mar 18 08:48:57.204075 master-0 kubenswrapper[7620]: I0318 08:48:57.204164 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dwrdc\" (UniqueName: \"kubernetes.io/projected/e7b72267-fc08-41ed-a92b-9fca7372aba6-kube-api-access-dwrdc\") pod \"cluster-monitoring-operator-58845fbb57-nc7hf\" (UID: \"e7b72267-fc08-41ed-a92b-9fca7372aba6\") " pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-nc7hf"
Mar 18 08:48:57.204075 master-0 kubenswrapper[7620]: I0318 08:48:57.204191 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/16d633c5-e0aa-4fb6-83e0-a2e976334406-webhook-cert\") pod \"network-node-identity-n5vqx\" (UID: \"16d633c5-e0aa-4fb6-83e0-a2e976334406\") " pod="openshift-network-node-identity/network-node-identity-n5vqx"
Mar 18 08:48:57.204075 master-0 kubenswrapper[7620]: I0318 08:48:57.204223 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/2207df9e-f21e-4c30-98d5-248ae99c245e-ovnkube-config\") pod \"ovnkube-node-cxws9\" (UID: \"2207df9e-f21e-4c30-98d5-248ae99c245e\") " pod="openshift-ovn-kubernetes/ovnkube-node-cxws9"
Mar 18 08:48:57.214408 master-0 kubenswrapper[7620]: I0318 08:48:57.204776 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/e7b72267-fc08-41ed-a92b-9fca7372aba6-telemetry-config\") pod \"cluster-monitoring-operator-58845fbb57-nc7hf\" (UID: \"e7b72267-fc08-41ed-a92b-9fca7372aba6\") " pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-nc7hf"
Mar 18 08:48:57.214408 master-0 kubenswrapper[7620]: I0318 08:48:57.205166 7620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert"
Mar 18 08:48:57.214408 master-0 kubenswrapper[7620]: I0318 08:48:57.205619 7620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert"
Mar 18 08:48:57.214408 master-0 kubenswrapper[7620]: I0318 08:48:57.205776 7620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt"
Mar 18 08:48:57.214408 master-0 kubenswrapper[7620]: I0318 08:48:57.205913 7620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config"
Mar 18 08:48:57.214408 master-0 kubenswrapper[7620]: I0318 08:48:57.205921 7620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-olm-operator"/"openshift-service-ca.crt"
Mar 18 08:48:57.214408 master-0 kubenswrapper[7620]: I0318 08:48:57.206176 7620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt"
Mar 18 08:48:57.214408 master-0 kubenswrapper[7620]: I0318 08:48:57.206871 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4-multus-socket-dir-parent\") pod \"multus-bpf5c\" (UID: \"fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4\") " pod="openshift-multus/multus-bpf5c"
Mar 18 08:48:57.214408 master-0 kubenswrapper[7620]: I0318 08:48:57.206901 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4-host-var-lib-kubelet\") pod \"multus-bpf5c\" (UID: \"fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4\") " pod="openshift-multus/multus-bpf5c"
Mar 18 08:48:57.214408 master-0 kubenswrapper[7620]: I0318 08:48:57.206927 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fcf89a76-7a94-46d3-853e-68e986563764-config\") pod \"openshift-apiserver-operator-d65958b8-w4t7x\" (UID: \"fcf89a76-7a94-46d3-853e-68e986563764\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-w4t7x"
Mar 18 08:48:57.214408 master-0 kubenswrapper[7620]: I0318 08:48:57.206949 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/573d3a02-e395-4816-963a-cd614ef53f75-serving-cert\") pod \"openshift-config-operator-95bf4f4d-7kfrh\" (UID: \"573d3a02-e395-4816-963a-cd614ef53f75\") " pod="openshift-config-operator/openshift-config-operator-95bf4f4d-7kfrh"
Mar 18 08:48:57.214408 master-0 kubenswrapper[7620]: I0318 08:48:57.207027 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/2207df9e-f21e-4c30-98d5-248ae99c245e-host-slash\") pod \"ovnkube-node-cxws9\" (UID: \"2207df9e-f21e-4c30-98d5-248ae99c245e\") " pod="openshift-ovn-kubernetes/ovnkube-node-cxws9"
Mar 18 08:48:57.214408 master-0 kubenswrapper[7620]: I0318 08:48:57.207053 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mlp7w\" (UniqueName: \"kubernetes.io/projected/59d50dd5-6793-4f96-a769-31e086ecc7e4-kube-api-access-mlp7w\") pod \"package-server-manager-7b95f86987-q8ff6\" (UID: \"59d50dd5-6793-4f96-a769-31e086ecc7e4\") " pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-q8ff6"
Mar 18 08:48:57.214408 master-0 kubenswrapper[7620]: I0318 08:48:57.207075 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-olm-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/e2ade7e6-cecd-4e98-8f85-ea8219303d75-cluster-olm-operator-serving-cert\") pod \"cluster-olm-operator-67dcd4998-zqxv2\" (UID: \"e2ade7e6-cecd-4e98-8f85-ea8219303d75\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-zqxv2"
Mar 18 08:48:57.214408 master-0 kubenswrapper[7620]: I0318 08:48:57.207175 7620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt"
Mar 18 08:48:57.214408 master-0 kubenswrapper[7620]: I0318 08:48:57.207292 7620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources"
Mar 18 08:48:57.214408 master-0 kubenswrapper[7620]: I0318 08:48:57.207360 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fcf89a76-7a94-46d3-853e-68e986563764-config\") pod \"openshift-apiserver-operator-d65958b8-w4t7x\" (UID: \"fcf89a76-7a94-46d3-853e-68e986563764\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-w4t7x"
Mar 18 08:48:57.214408 master-0 kubenswrapper[7620]: I0318 08:48:57.207366 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c110b293-2c6b-496b-b015-23aada98cb4b-config\") pod \"authentication-operator-5885bfd7f4-5g8tz\" (UID: \"c110b293-2c6b-496b-b015-23aada98cb4b\") " pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-5g8tz"
Mar 18 08:48:57.214408 master-0 kubenswrapper[7620]: I0318 08:48:57.207426 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/16d633c5-e0aa-4fb6-83e0-a2e976334406-env-overrides\") pod \"network-node-identity-n5vqx\" (UID: \"16d633c5-e0aa-4fb6-83e0-a2e976334406\") " pod="openshift-network-node-identity/network-node-identity-n5vqx"
Mar 18 08:48:57.214408 master-0 kubenswrapper[7620]: I0318 08:48:57.207433 7620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config"
Mar 18 08:48:57.214408 master-0 kubenswrapper[7620]: I0318 08:48:57.207662 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c110b293-2c6b-496b-b015-23aada98cb4b-config\") pod \"authentication-operator-5885bfd7f4-5g8tz\" (UID: \"c110b293-2c6b-496b-b015-23aada98cb4b\") " pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-5g8tz"
Mar 18 08:48:57.214408 master-0 kubenswrapper[7620]: I0318 08:48:57.207720 7620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-node-tuning-operator"/"performance-addon-operator-webhook-cert"
Mar 18 08:48:57.214408 master-0 kubenswrapper[7620]: I0318 08:48:57.207724 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/94e9e4fc-bfaa-4ba5-ad6d-fe76b91932e9-trusted-ca\") pod \"ingress-operator-66b84d69b-7h94d\" (UID: \"94e9e4fc-bfaa-4ba5-ad6d-fe76b91932e9\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-7h94d"
Mar 18 08:48:57.214408 master-0 kubenswrapper[7620]: I0318 08:48:57.207804 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dxvk7\" (UniqueName: \"kubernetes.io/projected/b0280499-8277-46f0-bd8c-058a47a99e19-kube-api-access-dxvk7\") pod \"service-ca-operator-b865698dc-g2lc8\" (UID: \"b0280499-8277-46f0-bd8c-058a47a99e19\") " pod="openshift-service-ca-operator/service-ca-operator-b865698dc-g2lc8"
Mar 18 08:48:57.214408 master-0 kubenswrapper[7620]: I0318 08:48:57.207842 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/bd8ee9ae-4765-46bc-84d7-b5857fc3fb4a-apiservice-cert\") pod \"cluster-node-tuning-operator-598fbc5f8f-tj9b9\" (UID: \"bd8ee9ae-4765-46bc-84d7-b5857fc3fb4a\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-tj9b9"
Mar 18 08:48:57.214408 master-0 kubenswrapper[7620]: I0318 08:48:57.207895 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/16d633c5-e0aa-4fb6-83e0-a2e976334406-ovnkube-identity-cm\") pod \"network-node-identity-n5vqx\" (UID: \"16d633c5-e0aa-4fb6-83e0-a2e976334406\") " pod="openshift-network-node-identity/network-node-identity-n5vqx"
Mar 18 08:48:57.214408 master-0 kubenswrapper[7620]: I0318 08:48:57.207905 7620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert"
Mar 18 08:48:57.214408 master-0 kubenswrapper[7620]: I0318 08:48:57.207926 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bd8ee9ae-4765-46bc-84d7-b5857fc3fb4a-trusted-ca\") pod \"cluster-node-tuning-operator-598fbc5f8f-tj9b9\" (UID: \"bd8ee9ae-4765-46bc-84d7-b5857fc3fb4a\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-tj9b9"
Mar 18 08:48:57.214408 master-0 kubenswrapper[7620]: I0318 08:48:57.207979 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/2207df9e-f21e-4c30-98d5-248ae99c245e-ovnkube-script-lib\") pod \"ovnkube-node-cxws9\" (UID: \"2207df9e-f21e-4c30-98d5-248ae99c245e\") " pod="openshift-ovn-kubernetes/ovnkube-node-cxws9"
Mar 18 08:48:57.214408 master-0 kubenswrapper[7620]: I0318 08:48:57.208007 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4-multus-daemon-config\") pod \"multus-bpf5c\" (UID: \"fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4\") " pod="openshift-multus/multus-bpf5c"
Mar 18 08:48:57.214408 master-0 kubenswrapper[7620]: I0318 08:48:57.208034 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operand-assets\" (UniqueName: \"kubernetes.io/empty-dir/e2ade7e6-cecd-4e98-8f85-ea8219303d75-operand-assets\") pod \"cluster-olm-operator-67dcd4998-zqxv2\" (UID: \"e2ade7e6-cecd-4e98-8f85-ea8219303d75\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-zqxv2"
Mar 18 08:48:57.214408 master-0 kubenswrapper[7620]: I0318 08:48:57.208143 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/94e9e4fc-bfaa-4ba5-ad6d-fe76b91932e9-metrics-tls\") pod \"ingress-operator-66b84d69b-7h94d\" (UID: \"94e9e4fc-bfaa-4ba5-ad6d-fe76b91932e9\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-7h94d"
Mar 18 08:48:57.214408 master-0 kubenswrapper[7620]: I0318 08:48:57.208184 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/939efa41-8f40-4f91-bee4-0425aead9760-etcd-ca\") pod \"etcd-operator-8544cbcf9c-f4jvq\" (UID: \"939efa41-8f40-4f91-bee4-0425aead9760\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-f4jvq"
Mar 18 08:48:57.214408 master-0 kubenswrapper[7620]: I0318 08:48:57.208205 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operand-assets\" (UniqueName: \"kubernetes.io/empty-dir/e2ade7e6-cecd-4e98-8f85-ea8219303d75-operand-assets\") pod \"cluster-olm-operator-67dcd4998-zqxv2\" (UID: \"e2ade7e6-cecd-4e98-8f85-ea8219303d75\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-zqxv2"
Mar 18 08:48:57.214408 master-0 kubenswrapper[7620]: I0318 08:48:57.208216 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/e7b72267-fc08-41ed-a92b-9fca7372aba6-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-58845fbb57-nc7hf\" (UID: \"e7b72267-fc08-41ed-a92b-9fca7372aba6\") " pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-nc7hf"
Mar 18 08:48:57.214408 master-0 kubenswrapper[7620]: I0318 08:48:57.208244 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/159a26f5-3cfc-4db2-88e9-bff5d8a613fc-webhook-certs\") pod \"multus-admission-controller-5dbbb8b86f-2cf64\" (UID: \"159a26f5-3cfc-4db2-88e9-bff5d8a613fc\") " pod="openshift-multus/multus-admission-controller-5dbbb8b86f-2cf64"
Mar 18 08:48:57.214408 master-0 kubenswrapper[7620]: I0318 08:48:57.208271 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4-host-run-netns\") pod \"multus-bpf5c\" (UID: \"fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4\") " pod="openshift-multus/multus-bpf5c"
Mar 18 08:48:57.214408 master-0 kubenswrapper[7620]: I0318 08:48:57.208297 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4-host-run-multus-certs\") pod \"multus-bpf5c\" (UID: \"fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4\") " pod="openshift-multus/multus-bpf5c"
Mar 18 08:48:57.214408 master-0 kubenswrapper[7620]: I0318 08:48:57.208323 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8a6ab2be-d018-4fd5-bfbb-6b88aec28663-kube-api-access\") pod \"openshift-kube-scheduler-operator-dddff6458-9p4bb\" (UID: \"8a6ab2be-d018-4fd5-bfbb-6b88aec28663\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-9p4bb"
Mar 18 08:48:57.214408 master-0 kubenswrapper[7620]: I0318 08:48:57.208346 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/3d0b7f60-c32e-48a6-b9e9-87c8f018367d-service-ca\") pod \"cluster-version-operator-56d8475767-2xjqg\" (UID: \"3d0b7f60-c32e-48a6-b9e9-87c8f018367d\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-2xjqg"
Mar 18 08:48:57.214408 master-0 kubenswrapper[7620]: I0318 08:48:57.208371 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-glt6c\" (UniqueName: \"kubernetes.io/projected/edc7f629-4288-443b-aa8e-78bc6a09c848-kube-api-access-glt6c\") pod \"ovnkube-control-plane-57f769d897-bwqt7\" (UID: \"edc7f629-4288-443b-aa8e-78bc6a09c848\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-bwqt7"
Mar 18 08:48:57.214408 master-0 kubenswrapper[7620]: I0318 08:48:57.208396 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4-host-var-lib-cni-multus\") pod \"multus-bpf5c\" (UID: \"fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4\") " pod="openshift-multus/multus-bpf5c"
Mar 18 08:48:57.214408 master-0 kubenswrapper[7620]: I0318 08:48:57.208419 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/07a4fd92-0fd1-4688-b2db-de615d75971e-metrics-tls\") pod \"network-operator-7bd846bfc4-5r5r4\" (UID: \"07a4fd92-0fd1-4688-b2db-de615d75971e\") " pod="openshift-network-operator/network-operator-7bd846bfc4-5r5r4"
Mar 18 08:48:57.214408 master-0 kubenswrapper[7620]: I0318 08:48:57.208444 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/2207df9e-f21e-4c30-98d5-248ae99c245e-run-ovn\") pod \"ovnkube-node-cxws9\" (UID: \"2207df9e-f21e-4c30-98d5-248ae99c245e\") " pod="openshift-ovn-kubernetes/ovnkube-node-cxws9"
Mar 18 08:48:57.214408 master-0 kubenswrapper[7620]: I0318 08:48:57.208465 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/f9fa104a-4979-4023-8d7e-a965f11bc7db-system-cni-dir\") pod \"multus-additional-cni-plugins-xpzrz\" (UID: \"f9fa104a-4979-4023-8d7e-a965f11bc7db\") " pod="openshift-multus/multus-additional-cni-plugins-xpzrz"
Mar 18 08:48:57.214408 master-0 kubenswrapper[7620]: I0318 08:48:57.208490 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8w58l\" (UniqueName: \"kubernetes.io/projected/939efa41-8f40-4f91-bee4-0425aead9760-kube-api-access-8w58l\") pod \"etcd-operator-8544cbcf9c-f4jvq\" (UID: \"939efa41-8f40-4f91-bee4-0425aead9760\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-f4jvq"
Mar 18 08:48:57.214408 master-0 kubenswrapper[7620]: I0318 08:48:57.208513 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n959l\" (UniqueName: \"kubernetes.io/projected/573d3a02-e395-4816-963a-cd614ef53f75-kube-api-access-n959l\") pod \"openshift-config-operator-95bf4f4d-7kfrh\" (UID: \"573d3a02-e395-4816-963a-cd614ef53f75\") " pod="openshift-config-operator/openshift-config-operator-95bf4f4d-7kfrh"
Mar 18 08:48:57.214408 master-0 kubenswrapper[7620]: I0318 08:48:57.208536 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c110b293-2c6b-496b-b015-23aada98cb4b-trusted-ca-bundle\") pod \"authentication-operator-5885bfd7f4-5g8tz\" (UID: \"c110b293-2c6b-496b-b015-23aada98cb4b\") " pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-5g8tz"
Mar 18 08:48:57.214408 master-0 kubenswrapper[7620]: I0318 08:48:57.208563 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-47p9x\" (UniqueName: \"kubernetes.io/projected/7962fb40-1170-4c00-b1bf-92966aeae807-kube-api-access-47p9x\") pod \"cluster-image-registry-operator-5549dc66cb-vxsth\" (UID: \"7962fb40-1170-4c00-b1bf-92966aeae807\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-vxsth"
Mar 18 08:48:57.214408 master-0 kubenswrapper[7620]: I0318 08:48:57.208585 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2207df9e-f21e-4c30-98d5-248ae99c245e-run-openvswitch\") pod \"ovnkube-node-cxws9\" (UID: \"2207df9e-f21e-4c30-98d5-248ae99c245e\") " pod="openshift-ovn-kubernetes/ovnkube-node-cxws9"
Mar 18 08:48:57.214408 master-0 kubenswrapper[7620]: I0318 08:48:57.208611 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/f9fa104a-4979-4023-8d7e-a965f11bc7db-tuning-conf-dir\") pod \"multus-additional-cni-plugins-xpzrz\" (UID: \"f9fa104a-4979-4023-8d7e-a965f11bc7db\") " pod="openshift-multus/multus-additional-cni-plugins-xpzrz"
Mar 18 08:48:57.214408 master-0 kubenswrapper[7620]: I0318 08:48:57.208635 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/2207df9e-f21e-4c30-98d5-248ae99c245e-env-overrides\") pod \"ovnkube-node-cxws9\" (UID: \"2207df9e-f21e-4c30-98d5-248ae99c245e\") " pod="openshift-ovn-kubernetes/ovnkube-node-cxws9"
Mar 18 08:48:57.214408 master-0 kubenswrapper[7620]: I0318 08:48:57.208661 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4-etc-kubernetes\") pod \"multus-bpf5c\" (UID: \"fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4\") " pod="openshift-multus/multus-bpf5c"
Mar 18 08:48:57.214408 master-0 kubenswrapper[7620]: I0318 08:48:57.208687 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8lsw9\" (UniqueName: \"kubernetes.io/projected/bd8ee9ae-4765-46bc-84d7-b5857fc3fb4a-kube-api-access-8lsw9\") pod \"cluster-node-tuning-operator-598fbc5f8f-tj9b9\" (UID: \"bd8ee9ae-4765-46bc-84d7-b5857fc3fb4a\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-tj9b9"
Mar 18 08:48:57.214408 master-0 kubenswrapper[7620]: I0318 08:48:57.208717 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b065df33-7911-456e-b3a2-1f8c8d53e053-srv-cert\") pod \"catalog-operator-68f85b4d6c-swdsh\" (UID: \"b065df33-7911-456e-b3a2-1f8c8d53e053\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-swdsh"
Mar 18 08:48:57.214408 master-0 kubenswrapper[7620]: I0318 08:48:57.208741 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4-hostroot\") pod \"multus-bpf5c\" (UID: \"fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4\") " pod="openshift-multus/multus-bpf5c"
Mar 18 08:48:57.214408 master-0 kubenswrapper[7620]: I0318 08:48:57.208765 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tk9jq\" (UniqueName: \"kubernetes.io/projected/94e9e4fc-bfaa-4ba5-ad6d-fe76b91932e9-kube-api-access-tk9jq\") pod \"ingress-operator-66b84d69b-7h94d\" (UID: \"94e9e4fc-bfaa-4ba5-ad6d-fe76b91932e9\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-7h94d"
Mar 18 08:48:57.214408 master-0 kubenswrapper[7620]: I0318 08:48:57.208790 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/bd8ee9ae-4765-46bc-84d7-b5857fc3fb4a-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-598fbc5f8f-tj9b9\" (UID: \"bd8ee9ae-4765-46bc-84d7-b5857fc3fb4a\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-tj9b9"
Mar 18 08:48:57.214408 master-0 kubenswrapper[7620]: I0318 08:48:57.209203 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/07a4fd92-0fd1-4688-b2db-de615d75971e-metrics-tls\") pod \"network-operator-7bd846bfc4-5r5r4\" (UID: \"07a4fd92-0fd1-4688-b2db-de615d75971e\") " pod="openshift-network-operator/network-operator-7bd846bfc4-5r5r4"
Mar 18 08:48:57.214408 master-0 kubenswrapper[7620]: I0318 08:48:57.209300 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/f9fa104a-4979-4023-8d7e-a965f11bc7db-cnibin\") pod \"multus-additional-cni-plugins-xpzrz\" (UID: \"f9fa104a-4979-4023-8d7e-a965f11bc7db\") " pod="openshift-multus/multus-additional-cni-plugins-xpzrz"
Mar 18 08:48:57.214408 master-0 kubenswrapper[7620]: I0318 08:48:57.209617 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/3d0b7f60-c32e-48a6-b9e9-87c8f018367d-service-ca\") pod \"cluster-version-operator-56d8475767-2xjqg\" (UID: \"3d0b7f60-c32e-48a6-b9e9-87c8f018367d\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-2xjqg"
Mar 18 08:48:57.214408 master-0 kubenswrapper[7620]: I0318 08:48:57.209971 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4-host-var-lib-cni-bin\") pod \"multus-bpf5c\" (UID: \"fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4\") " pod="openshift-multus/multus-bpf5c"
Mar 18 08:48:57.214408 master-0 kubenswrapper[7620]: I0318 08:48:57.208061 7620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt"
Mar 18 08:48:57.214408 master-0 kubenswrapper[7620]: I0318 08:48:57.209998 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7962fb40-1170-4c00-b1bf-92966aeae807-trusted-ca\") pod \"cluster-image-registry-operator-5549dc66cb-vxsth\" (UID: \"7962fb40-1170-4c00-b1bf-92966aeae807\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-vxsth"
Mar 18 08:48:57.214408 master-0 kubenswrapper[7620]: I0318 08:48:57.210018 7620 reconciler_common.go:218]
"operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/939efa41-8f40-4f91-bee4-0425aead9760-etcd-client\") pod \"etcd-operator-8544cbcf9c-f4jvq\" (UID: \"939efa41-8f40-4f91-bee4-0425aead9760\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-f4jvq" Mar 18 08:48:57.214408 master-0 kubenswrapper[7620]: I0318 08:48:57.210045 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/f9fa104a-4979-4023-8d7e-a965f11bc7db-os-release\") pod \"multus-additional-cni-plugins-xpzrz\" (UID: \"f9fa104a-4979-4023-8d7e-a965f11bc7db\") " pod="openshift-multus/multus-additional-cni-plugins-xpzrz" Mar 18 08:48:57.214408 master-0 kubenswrapper[7620]: I0318 08:48:57.210069 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/f9fa104a-4979-4023-8d7e-a965f11bc7db-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-xpzrz\" (UID: \"f9fa104a-4979-4023-8d7e-a965f11bc7db\") " pod="openshift-multus/multus-additional-cni-plugins-xpzrz" Mar 18 08:48:57.214408 master-0 kubenswrapper[7620]: I0318 08:48:57.210096 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/2207df9e-f21e-4c30-98d5-248ae99c245e-run-systemd\") pod \"ovnkube-node-cxws9\" (UID: \"2207df9e-f21e-4c30-98d5-248ae99c245e\") " pod="openshift-ovn-kubernetes/ovnkube-node-cxws9" Mar 18 08:48:57.214408 master-0 kubenswrapper[7620]: I0318 08:48:57.210112 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2207df9e-f21e-4c30-98d5-248ae99c245e-host-cni-netd\") pod \"ovnkube-node-cxws9\" (UID: \"2207df9e-f21e-4c30-98d5-248ae99c245e\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-cxws9" Mar 18 08:48:57.214408 master-0 kubenswrapper[7620]: I0318 08:48:57.210136 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5ngk7\" (UniqueName: \"kubernetes.io/projected/07a4fd92-0fd1-4688-b2db-de615d75971e-kube-api-access-5ngk7\") pod \"network-operator-7bd846bfc4-5r5r4\" (UID: \"07a4fd92-0fd1-4688-b2db-de615d75971e\") " pod="openshift-network-operator/network-operator-7bd846bfc4-5r5r4" Mar 18 08:48:57.214408 master-0 kubenswrapper[7620]: I0318 08:48:57.210144 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/939efa41-8f40-4f91-bee4-0425aead9760-config\") pod \"etcd-operator-8544cbcf9c-f4jvq\" (UID: \"939efa41-8f40-4f91-bee4-0425aead9760\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-f4jvq" Mar 18 08:48:57.214408 master-0 kubenswrapper[7620]: I0318 08:48:57.210158 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5982111d-f4c6-4335-9b40-3142758fc2bc-serving-cert\") pod \"kube-apiserver-operator-8b68b9d9b-jshg7\" (UID: \"5982111d-f4c6-4335-9b40-3142758fc2bc\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-jshg7" Mar 18 08:48:57.214408 master-0 kubenswrapper[7620]: I0318 08:48:57.210286 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5982111d-f4c6-4335-9b40-3142758fc2bc-config\") pod \"kube-apiserver-operator-8b68b9d9b-jshg7\" (UID: \"5982111d-f4c6-4335-9b40-3142758fc2bc\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-jshg7" Mar 18 08:48:57.214408 master-0 kubenswrapper[7620]: I0318 08:48:57.210314 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: 
\"kubernetes.io/host-path/2207df9e-f21e-4c30-98d5-248ae99c245e-systemd-units\") pod \"ovnkube-node-cxws9\" (UID: \"2207df9e-f21e-4c30-98d5-248ae99c245e\") " pod="openshift-ovn-kubernetes/ovnkube-node-cxws9" Mar 18 08:48:57.214408 master-0 kubenswrapper[7620]: I0318 08:48:57.210366 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5982111d-f4c6-4335-9b40-3142758fc2bc-kube-api-access\") pod \"kube-apiserver-operator-8b68b9d9b-jshg7\" (UID: \"5982111d-f4c6-4335-9b40-3142758fc2bc\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-jshg7" Mar 18 08:48:57.214408 master-0 kubenswrapper[7620]: I0318 08:48:57.210400 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lw27k\" (UniqueName: \"kubernetes.io/projected/c110b293-2c6b-496b-b015-23aada98cb4b-kube-api-access-lw27k\") pod \"authentication-operator-5885bfd7f4-5g8tz\" (UID: \"c110b293-2c6b-496b-b015-23aada98cb4b\") " pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-5g8tz" Mar 18 08:48:57.214408 master-0 kubenswrapper[7620]: I0318 08:48:57.210454 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4hn9w\" (UniqueName: \"kubernetes.io/projected/3d9fe248-ba87-47e3-911a-1b2b112b5683-kube-api-access-4hn9w\") pod \"olm-operator-5c9796789-sl5kr\" (UID: \"3d9fe248-ba87-47e3-911a-1b2b112b5683\") " pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-sl5kr" Mar 18 08:48:57.214408 master-0 kubenswrapper[7620]: I0318 08:48:57.210485 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bfzdk\" (UniqueName: \"kubernetes.io/projected/e025d334-20e7-491f-8027-194251398747-kube-api-access-bfzdk\") pod \"dns-operator-9c5679d8f-b9pn7\" (UID: \"e025d334-20e7-491f-8027-194251398747\") " pod="openshift-dns-operator/dns-operator-9c5679d8f-b9pn7" Mar 18 
08:48:57.214408 master-0 kubenswrapper[7620]: I0318 08:48:57.210543 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/3d0b7f60-c32e-48a6-b9e9-87c8f018367d-etc-cvo-updatepayloads\") pod \"cluster-version-operator-56d8475767-2xjqg\" (UID: \"3d0b7f60-c32e-48a6-b9e9-87c8f018367d\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-2xjqg" Mar 18 08:48:57.214408 master-0 kubenswrapper[7620]: I0318 08:48:57.210584 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5982111d-f4c6-4335-9b40-3142758fc2bc-serving-cert\") pod \"kube-apiserver-operator-8b68b9d9b-jshg7\" (UID: \"5982111d-f4c6-4335-9b40-3142758fc2bc\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-jshg7" Mar 18 08:48:57.214408 master-0 kubenswrapper[7620]: I0318 08:48:57.210588 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3d0b7f60-c32e-48a6-b9e9-87c8f018367d-kube-api-access\") pod \"cluster-version-operator-56d8475767-2xjqg\" (UID: \"3d0b7f60-c32e-48a6-b9e9-87c8f018367d\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-2xjqg" Mar 18 08:48:57.214408 master-0 kubenswrapper[7620]: I0318 08:48:57.208353 7620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Mar 18 08:48:57.214408 master-0 kubenswrapper[7620]: I0318 08:48:57.210665 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/edc7f629-4288-443b-aa8e-78bc6a09c848-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-57f769d897-bwqt7\" (UID: \"edc7f629-4288-443b-aa8e-78bc6a09c848\") " 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-bwqt7" Mar 18 08:48:57.214408 master-0 kubenswrapper[7620]: I0318 08:48:57.208394 7620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-storage-operator"/"openshift-service-ca.crt" Mar 18 08:48:57.214408 master-0 kubenswrapper[7620]: I0318 08:48:57.210939 7620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"whereabouts-flatfile-config" Mar 18 08:48:57.214408 master-0 kubenswrapper[7620]: I0318 08:48:57.208568 7620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Mar 18 08:48:57.214408 master-0 kubenswrapper[7620]: I0318 08:48:57.208621 7620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Mar 18 08:48:57.214408 master-0 kubenswrapper[7620]: I0318 08:48:57.208706 7620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"kube-root-ca.crt" Mar 18 08:48:57.214408 master-0 kubenswrapper[7620]: I0318 08:48:57.209046 7620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Mar 18 08:48:57.214408 master-0 kubenswrapper[7620]: I0318 08:48:57.209186 7620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Mar 18 08:48:57.214408 master-0 kubenswrapper[7620]: I0318 08:48:57.209244 7620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Mar 18 08:48:57.214408 master-0 kubenswrapper[7620]: I0318 08:48:57.209352 7620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Mar 18 08:48:57.214408 master-0 kubenswrapper[7620]: I0318 08:48:57.209572 7620 reflector.go:368] 
Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Mar 18 08:48:57.214408 master-0 kubenswrapper[7620]: I0318 08:48:57.209598 7620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Mar 18 08:48:57.214408 master-0 kubenswrapper[7620]: I0318 08:48:57.210159 7620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Mar 18 08:48:57.214408 master-0 kubenswrapper[7620]: I0318 08:48:57.210234 7620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Mar 18 08:48:57.218322 master-0 kubenswrapper[7620]: I0318 08:48:57.212258 7620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Mar 18 08:48:57.218322 master-0 kubenswrapper[7620]: I0318 08:48:57.212891 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5982111d-f4c6-4335-9b40-3142758fc2bc-config\") pod \"kube-apiserver-operator-8b68b9d9b-jshg7\" (UID: \"5982111d-f4c6-4335-9b40-3142758fc2bc\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-jshg7" Mar 18 08:48:57.218322 master-0 kubenswrapper[7620]: I0318 08:48:57.213029 7620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Mar 18 08:48:57.218322 master-0 kubenswrapper[7620]: I0318 08:48:57.213285 7620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Mar 18 08:48:57.218322 master-0 kubenswrapper[7620]: I0318 08:48:57.213293 7620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Mar 18 08:48:57.218322 master-0 kubenswrapper[7620]: I0318 08:48:57.213413 7620 reflector.go:368] Caches populated for *v1.ConfigMap 
from object-"openshift-config-operator"/"openshift-service-ca.crt" Mar 18 08:48:57.218322 master-0 kubenswrapper[7620]: I0318 08:48:57.210449 7620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Mar 18 08:48:57.218322 master-0 kubenswrapper[7620]: I0318 08:48:57.210497 7620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Mar 18 08:48:57.218322 master-0 kubenswrapper[7620]: I0318 08:48:57.213620 7620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Mar 18 08:48:57.218322 master-0 kubenswrapper[7620]: I0318 08:48:57.210598 7620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Mar 18 08:48:57.218322 master-0 kubenswrapper[7620]: I0318 08:48:57.210620 7620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Mar 18 08:48:57.218322 master-0 kubenswrapper[7620]: I0318 08:48:57.213294 7620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Mar 18 08:48:57.218322 master-0 kubenswrapper[7620]: I0318 08:48:57.210457 7620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Mar 18 08:48:57.218322 master-0 kubenswrapper[7620]: I0318 08:48:57.211795 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ec11012b-536a-422f-afc4-d2d0fd4b67fb-serving-cert\") pod \"kube-storage-version-migrator-operator-6bb5bfb6fd-s7szg\" (UID: \"ec11012b-536a-422f-afc4-d2d0fd4b67fb\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-s7szg" Mar 18 08:48:57.218322 master-0 
kubenswrapper[7620]: I0318 08:48:57.215965 7620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Mar 18 08:48:57.218322 master-0 kubenswrapper[7620]: I0318 08:48:57.216222 7620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Mar 18 08:48:57.218322 master-0 kubenswrapper[7620]: I0318 08:48:57.216414 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/772bc250-2e57-4ce0-883c-d44281fcb0be-config\") pod \"openshift-controller-manager-operator-8c94f4649-r758j\" (UID: \"772bc250-2e57-4ce0-883c-d44281fcb0be\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-r758j" Mar 18 08:48:57.218322 master-0 kubenswrapper[7620]: I0318 08:48:57.216635 7620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Mar 18 08:48:57.218322 master-0 kubenswrapper[7620]: I0318 08:48:57.216814 7620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Mar 18 08:48:57.218322 master-0 kubenswrapper[7620]: I0318 08:48:57.216954 7620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-storage-operator"/"kube-root-ca.crt" Mar 18 08:48:57.218322 master-0 kubenswrapper[7620]: I0318 08:48:57.217104 7620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-olm-operator"/"cluster-olm-operator-serving-cert" Mar 18 08:48:57.218322 master-0 kubenswrapper[7620]: I0318 08:48:57.217461 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/f9fa104a-4979-4023-8d7e-a965f11bc7db-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-xpzrz\" (UID: 
\"f9fa104a-4979-4023-8d7e-a965f11bc7db\") " pod="openshift-multus/multus-additional-cni-plugins-xpzrz" Mar 18 08:48:57.218322 master-0 kubenswrapper[7620]: I0318 08:48:57.217738 7620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Mar 18 08:48:57.218322 master-0 kubenswrapper[7620]: I0318 08:48:57.217993 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4-cni-binary-copy\") pod \"multus-bpf5c\" (UID: \"fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4\") " pod="openshift-multus/multus-bpf5c" Mar 18 08:48:57.218322 master-0 kubenswrapper[7620]: I0318 08:48:57.218118 7620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Mar 18 08:48:57.219204 master-0 kubenswrapper[7620]: I0318 08:48:57.218748 7620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Mar 18 08:48:57.219204 master-0 kubenswrapper[7620]: I0318 08:48:57.218910 7620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Mar 18 08:48:57.219280 master-0 kubenswrapper[7620]: I0318 08:48:57.219233 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4-multus-daemon-config\") pod \"multus-bpf5c\" (UID: \"fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4\") " pod="openshift-multus/multus-bpf5c" Mar 18 08:48:57.219737 master-0 kubenswrapper[7620]: I0318 08:48:57.219676 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fcf89a76-7a94-46d3-853e-68e986563764-serving-cert\") pod \"openshift-apiserver-operator-d65958b8-w4t7x\" (UID: 
\"fcf89a76-7a94-46d3-853e-68e986563764\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-w4t7x" Mar 18 08:48:57.219911 master-0 kubenswrapper[7620]: I0318 08:48:57.219793 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/939efa41-8f40-4f91-bee4-0425aead9760-serving-cert\") pod \"etcd-operator-8544cbcf9c-f4jvq\" (UID: \"939efa41-8f40-4f91-bee4-0425aead9760\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-f4jvq" Mar 18 08:48:57.220008 master-0 kubenswrapper[7620]: I0318 08:48:57.219952 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b0280499-8277-46f0-bd8c-058a47a99e19-serving-cert\") pod \"service-ca-operator-b865698dc-g2lc8\" (UID: \"b0280499-8277-46f0-bd8c-058a47a99e19\") " pod="openshift-service-ca-operator/service-ca-operator-b865698dc-g2lc8" Mar 18 08:48:57.220206 master-0 kubenswrapper[7620]: I0318 08:48:57.220157 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec11012b-536a-422f-afc4-d2d0fd4b67fb-config\") pod \"kube-storage-version-migrator-operator-6bb5bfb6fd-s7szg\" (UID: \"ec11012b-536a-422f-afc4-d2d0fd4b67fb\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-s7szg" Mar 18 08:48:57.220411 master-0 kubenswrapper[7620]: I0318 08:48:57.220374 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/939efa41-8f40-4f91-bee4-0425aead9760-etcd-service-ca\") pod \"etcd-operator-8544cbcf9c-f4jvq\" (UID: \"939efa41-8f40-4f91-bee4-0425aead9760\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-f4jvq" Mar 18 08:48:57.220710 master-0 kubenswrapper[7620]: I0318 08:48:57.220674 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"cluster-olm-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/e2ade7e6-cecd-4e98-8f85-ea8219303d75-cluster-olm-operator-serving-cert\") pod \"cluster-olm-operator-67dcd4998-zqxv2\" (UID: \"e2ade7e6-cecd-4e98-8f85-ea8219303d75\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-zqxv2" Mar 18 08:48:57.220809 master-0 kubenswrapper[7620]: I0318 08:48:57.220772 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/573d3a02-e395-4816-963a-cd614ef53f75-serving-cert\") pod \"openshift-config-operator-95bf4f4d-7kfrh\" (UID: \"573d3a02-e395-4816-963a-cd614ef53f75\") " pod="openshift-config-operator/openshift-config-operator-95bf4f4d-7kfrh" Mar 18 08:48:57.221148 master-0 kubenswrapper[7620]: I0318 08:48:57.220680 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/f9fa104a-4979-4023-8d7e-a965f11bc7db-cni-binary-copy\") pod \"multus-additional-cni-plugins-xpzrz\" (UID: \"f9fa104a-4979-4023-8d7e-a965f11bc7db\") " pod="openshift-multus/multus-additional-cni-plugins-xpzrz" Mar 18 08:48:57.221372 master-0 kubenswrapper[7620]: I0318 08:48:57.221337 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/939efa41-8f40-4f91-bee4-0425aead9760-etcd-client\") pod \"etcd-operator-8544cbcf9c-f4jvq\" (UID: \"939efa41-8f40-4f91-bee4-0425aead9760\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-f4jvq" Mar 18 08:48:57.222149 master-0 kubenswrapper[7620]: I0318 08:48:57.222048 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/16d633c5-e0aa-4fb6-83e0-a2e976334406-env-overrides\") pod \"network-node-identity-n5vqx\" (UID: \"16d633c5-e0aa-4fb6-83e0-a2e976334406\") " pod="openshift-network-node-identity/network-node-identity-n5vqx" Mar 18 08:48:57.222536 
master-0 kubenswrapper[7620]: I0318 08:48:57.222491 7620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Mar 18 08:48:57.222694 master-0 kubenswrapper[7620]: I0318 08:48:57.222575 7620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Mar 18 08:48:57.222834 master-0 kubenswrapper[7620]: I0318 08:48:57.222793 7620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Mar 18 08:48:57.222967 master-0 kubenswrapper[7620]: I0318 08:48:57.222800 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7962fb40-1170-4c00-b1bf-92966aeae807-trusted-ca\") pod \"cluster-image-registry-operator-5549dc66cb-vxsth\" (UID: \"7962fb40-1170-4c00-b1bf-92966aeae807\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-vxsth" Mar 18 08:48:57.223247 master-0 kubenswrapper[7620]: I0318 08:48:57.223216 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/939efa41-8f40-4f91-bee4-0425aead9760-etcd-ca\") pod \"etcd-operator-8544cbcf9c-f4jvq\" (UID: \"939efa41-8f40-4f91-bee4-0425aead9760\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-f4jvq" Mar 18 08:48:57.223342 master-0 kubenswrapper[7620]: I0318 08:48:57.223315 7620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Mar 18 08:48:57.223427 master-0 kubenswrapper[7620]: I0318 08:48:57.223395 7620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Mar 18 08:48:57.223474 master-0 kubenswrapper[7620]: I0318 08:48:57.223440 7620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-olm-operator"/"kube-root-ca.crt" Mar 18 
08:48:57.223581 master-0 kubenswrapper[7620]: I0318 08:48:57.223557 7620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Mar 18 08:48:57.223656 master-0 kubenswrapper[7620]: I0318 08:48:57.223635 7620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Mar 18 08:48:57.223741 master-0 kubenswrapper[7620]: I0318 08:48:57.223725 7620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Mar 18 08:48:57.223943 master-0 kubenswrapper[7620]: I0318 08:48:57.223888 7620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Mar 18 08:48:57.223995 master-0 kubenswrapper[7620]: I0318 08:48:57.223977 7620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Mar 18 08:48:57.224169 master-0 kubenswrapper[7620]: I0318 08:48:57.224145 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/f9fa104a-4979-4023-8d7e-a965f11bc7db-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-xpzrz\" (UID: \"f9fa104a-4979-4023-8d7e-a965f11bc7db\") " pod="openshift-multus/multus-additional-cni-plugins-xpzrz" Mar 18 08:48:57.224387 master-0 kubenswrapper[7620]: I0318 08:48:57.224336 7620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Mar 18 08:48:57.224387 master-0 kubenswrapper[7620]: I0318 08:48:57.224371 7620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Mar 18 08:48:57.224842 master-0 kubenswrapper[7620]: I0318 08:48:57.224802 7620 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-service-ca-operator"/"kube-root-ca.crt"
Mar 18 08:48:57.224919 master-0 kubenswrapper[7620]: I0318 08:48:57.224832 7620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt"
Mar 18 08:48:57.225229 master-0 kubenswrapper[7620]: I0318 08:48:57.225107 7620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert"
Mar 18 08:48:57.225229 master-0 kubenswrapper[7620]: I0318 08:48:57.225210 7620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config"
Mar 18 08:48:57.225616 master-0 kubenswrapper[7620]: I0318 08:48:57.225581 7620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides"
Mar 18 08:48:57.226330 master-0 kubenswrapper[7620]: I0318 08:48:57.226270 7620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt"
Mar 18 08:48:57.226548 master-0 kubenswrapper[7620]: I0318 08:48:57.226473 7620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt"
Mar 18 08:48:57.227363 master-0 kubenswrapper[7620]: I0318 08:48:57.227014 7620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm"
Mar 18 08:48:57.227363 master-0 kubenswrapper[7620]: I0318 08:48:57.227341 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/772bc250-2e57-4ce0-883c-d44281fcb0be-serving-cert\") pod \"openshift-controller-manager-operator-8c94f4649-r758j\" (UID: \"772bc250-2e57-4ce0-883c-d44281fcb0be\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-r758j"
Mar 18 08:48:57.228395 master-0 kubenswrapper[7620]: I0318 08:48:57.227644 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/2207df9e-f21e-4c30-98d5-248ae99c245e-ovnkube-config\") pod \"ovnkube-node-cxws9\" (UID: \"2207df9e-f21e-4c30-98d5-248ae99c245e\") " pod="openshift-ovn-kubernetes/ovnkube-node-cxws9"
Mar 18 08:48:57.228395 master-0 kubenswrapper[7620]: I0318 08:48:57.228104 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/34f2ec5e-cb68-415b-a9f2-5b7f10fa9bbe-marketplace-trusted-ca\") pod \"marketplace-operator-89ccd998f-bcwsv\" (UID: \"34f2ec5e-cb68-415b-a9f2-5b7f10fa9bbe\") " pod="openshift-marketplace/marketplace-operator-89ccd998f-bcwsv"
Mar 18 08:48:57.228934 master-0 kubenswrapper[7620]: I0318 08:48:57.228897 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/16d633c5-e0aa-4fb6-83e0-a2e976334406-webhook-cert\") pod \"network-node-identity-n5vqx\" (UID: \"16d633c5-e0aa-4fb6-83e0-a2e976334406\") " pod="openshift-network-node-identity/network-node-identity-n5vqx"
Mar 18 08:48:57.229067 master-0 kubenswrapper[7620]: I0318 08:48:57.228911 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/16d633c5-e0aa-4fb6-83e0-a2e976334406-ovnkube-identity-cm\") pod \"network-node-identity-n5vqx\" (UID: \"16d633c5-e0aa-4fb6-83e0-a2e976334406\") " pod="openshift-network-node-identity/network-node-identity-n5vqx"
Mar 18 08:48:57.234159 master-0 kubenswrapper[7620]: I0318 08:48:57.233724 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/edc7f629-4288-443b-aa8e-78bc6a09c848-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-57f769d897-bwqt7\" (UID: \"edc7f629-4288-443b-aa8e-78bc6a09c848\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-bwqt7"
Mar 18 08:48:57.234159 master-0 kubenswrapper[7620]: I0318 08:48:57.233777 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/edc7f629-4288-443b-aa8e-78bc6a09c848-env-overrides\") pod \"ovnkube-control-plane-57f769d897-bwqt7\" (UID: \"edc7f629-4288-443b-aa8e-78bc6a09c848\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-bwqt7"
Mar 18 08:48:57.234159 master-0 kubenswrapper[7620]: I0318 08:48:57.233866 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/2207df9e-f21e-4c30-98d5-248ae99c245e-env-overrides\") pod \"ovnkube-node-cxws9\" (UID: \"2207df9e-f21e-4c30-98d5-248ae99c245e\") " pod="openshift-ovn-kubernetes/ovnkube-node-cxws9"
Mar 18 08:48:57.234466 master-0 kubenswrapper[7620]: I0318 08:48:57.234276 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/edc7f629-4288-443b-aa8e-78bc6a09c848-ovnkube-config\") pod \"ovnkube-control-plane-57f769d897-bwqt7\" (UID: \"edc7f629-4288-443b-aa8e-78bc6a09c848\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-bwqt7"
Mar 18 08:48:57.238874 master-0 kubenswrapper[7620]: I0318 08:48:57.238635 7620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle"
Mar 18 08:48:57.239739 master-0 kubenswrapper[7620]: I0318 08:48:57.239686 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c110b293-2c6b-496b-b015-23aada98cb4b-trusted-ca-bundle\") pod \"authentication-operator-5885bfd7f4-5g8tz\" (UID: \"c110b293-2c6b-496b-b015-23aada98cb4b\") " pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-5g8tz"
Mar 18 08:48:57.240934 master-0 kubenswrapper[7620]: I0318 08:48:57.240783 7620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script"
Mar 18 08:48:57.242307 master-0 kubenswrapper[7620]: I0318 08:48:57.242267 7620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"trusted-ca"
Mar 18 08:48:57.243941 master-0 kubenswrapper[7620]: I0318 08:48:57.243900 7620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca"
Mar 18 08:48:57.248745 master-0 kubenswrapper[7620]: I0318 08:48:57.248686 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/94e9e4fc-bfaa-4ba5-ad6d-fe76b91932e9-trusted-ca\") pod \"ingress-operator-66b84d69b-7h94d\" (UID: \"94e9e4fc-bfaa-4ba5-ad6d-fe76b91932e9\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-7h94d"
Mar 18 08:48:57.249012 master-0 kubenswrapper[7620]: I0318 08:48:57.248956 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bd8ee9ae-4765-46bc-84d7-b5857fc3fb4a-trusted-ca\") pod \"cluster-node-tuning-operator-598fbc5f8f-tj9b9\" (UID: \"bd8ee9ae-4765-46bc-84d7-b5857fc3fb4a\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-tj9b9"
Mar 18 08:48:57.251790 master-0 kubenswrapper[7620]: I0318 08:48:57.251751 7620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert"
Mar 18 08:48:57.261160 master-0 kubenswrapper[7620]: I0318 08:48:57.261129 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/2207df9e-f21e-4c30-98d5-248ae99c245e-ovn-node-metrics-cert\") pod \"ovnkube-node-cxws9\" (UID: \"2207df9e-f21e-4c30-98d5-248ae99c245e\") " pod="openshift-ovn-kubernetes/ovnkube-node-cxws9"
Mar 18 08:48:57.271714 master-0 kubenswrapper[7620]: I0318 08:48:57.271672 7620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib"
Mar 18 08:48:57.279005 master-0 kubenswrapper[7620]: I0318 08:48:57.278963 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/2207df9e-f21e-4c30-98d5-248ae99c245e-ovnkube-script-lib\") pod \"ovnkube-node-cxws9\" (UID: \"2207df9e-f21e-4c30-98d5-248ae99c245e\") " pod="openshift-ovn-kubernetes/ovnkube-node-cxws9"
Mar 18 08:48:57.280481 master-0 kubenswrapper[7620]: I0318 08:48:57.280427 7620 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world"
Mar 18 08:48:57.311726 master-0 kubenswrapper[7620]: I0318 08:48:57.311678 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/2207df9e-f21e-4c30-98d5-248ae99c245e-log-socket\") pod \"ovnkube-node-cxws9\" (UID: \"2207df9e-f21e-4c30-98d5-248ae99c245e\") " pod="openshift-ovn-kubernetes/ovnkube-node-cxws9"
Mar 18 08:48:57.311821 master-0 kubenswrapper[7620]: I0318 08:48:57.311732 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l7lrl\" (UniqueName: \"kubernetes.io/projected/fc289a83-9a2e-404b-b148-605639362703-kube-api-access-l7lrl\") pod \"network-check-target-8b7l7\" (UID: \"fc289a83-9a2e-404b-b148-605639362703\") " pod="openshift-network-diagnostics/network-check-target-8b7l7"
Mar 18 08:48:57.311821 master-0 kubenswrapper[7620]: I0318 08:48:57.311768 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2207df9e-f21e-4c30-98d5-248ae99c245e-var-lib-openvswitch\") pod \"ovnkube-node-cxws9\" (UID: \"2207df9e-f21e-4c30-98d5-248ae99c245e\") " pod="openshift-ovn-kubernetes/ovnkube-node-cxws9"
Mar 18 08:48:57.311821 master-0 kubenswrapper[7620]: I0318 08:48:57.311809 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ftdvp\" (UniqueName: \"kubernetes.io/projected/866c259c-7661-4a80-873b-6fd625218665-kube-api-access-ftdvp\") pod \"iptables-alerter-9mkgd\" (UID: \"866c259c-7661-4a80-873b-6fd625218665\") " pod="openshift-network-operator/iptables-alerter-9mkgd"
Mar 18 08:48:57.312014 master-0 kubenswrapper[7620]: I0318 08:48:57.311964 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/2207df9e-f21e-4c30-98d5-248ae99c245e-log-socket\") pod \"ovnkube-node-cxws9\" (UID: \"2207df9e-f21e-4c30-98d5-248ae99c245e\") " pod="openshift-ovn-kubernetes/ovnkube-node-cxws9"
Mar 18 08:48:57.312258 master-0 kubenswrapper[7620]: I0318 08:48:57.312207 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2207df9e-f21e-4c30-98d5-248ae99c245e-var-lib-openvswitch\") pod \"ovnkube-node-cxws9\" (UID: \"2207df9e-f21e-4c30-98d5-248ae99c245e\") " pod="openshift-ovn-kubernetes/ovnkube-node-cxws9"
Mar 18 08:48:57.312355 master-0 kubenswrapper[7620]: I0318 08:48:57.312324 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4-system-cni-dir\") pod \"multus-bpf5c\" (UID: \"fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4\") " pod="openshift-multus/multus-bpf5c"
Mar 18 08:48:57.312405 master-0 kubenswrapper[7620]: I0318 08:48:57.312371 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4-multus-conf-dir\") pod \"multus-bpf5c\" (UID: \"fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4\") " pod="openshift-multus/multus-bpf5c"
Mar 18 08:48:57.312485 master-0 kubenswrapper[7620]: I0318 08:48:57.312447 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4-multus-conf-dir\") pod \"multus-bpf5c\" (UID: \"fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4\") " pod="openshift-multus/multus-bpf5c"
Mar 18 08:48:57.312627 master-0 kubenswrapper[7620]: I0318 08:48:57.312581 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/866c259c-7661-4a80-873b-6fd625218665-host-slash\") pod \"iptables-alerter-9mkgd\" (UID: \"866c259c-7661-4a80-873b-6fd625218665\") " pod="openshift-network-operator/iptables-alerter-9mkgd"
Mar 18 08:48:57.312712 master-0 kubenswrapper[7620]: I0318 08:48:57.312655 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4-system-cni-dir\") pod \"multus-bpf5c\" (UID: \"fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4\") " pod="openshift-multus/multus-bpf5c"
Mar 18 08:48:57.312712 master-0 kubenswrapper[7620]: I0318 08:48:57.312654 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d858bfd6-e69b-4c93-a6d7-95cc0fc3ca29-metrics-certs\") pod \"network-metrics-daemon-6x85n\" (UID: \"d858bfd6-e69b-4c93-a6d7-95cc0fc3ca29\") " pod="openshift-multus/network-metrics-daemon-6x85n"
Mar 18 08:48:57.312800 master-0 kubenswrapper[7620]: I0318 08:48:57.312711 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2207df9e-f21e-4c30-98d5-248ae99c245e-etc-openvswitch\") pod \"ovnkube-node-cxws9\" (UID: \"2207df9e-f21e-4c30-98d5-248ae99c245e\") " pod="openshift-ovn-kubernetes/ovnkube-node-cxws9"
Mar 18 08:48:57.312800 master-0 kubenswrapper[7620]: I0318 08:48:57.312763 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2207df9e-f21e-4c30-98d5-248ae99c245e-host-run-ovn-kubernetes\") pod \"ovnkube-node-cxws9\" (UID: \"2207df9e-f21e-4c30-98d5-248ae99c245e\") " pod="openshift-ovn-kubernetes/ovnkube-node-cxws9"
Mar 18 08:48:57.312925 master-0 kubenswrapper[7620]: I0318 08:48:57.312823 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2207df9e-f21e-4c30-98d5-248ae99c245e-host-run-ovn-kubernetes\") pod \"ovnkube-node-cxws9\" (UID: \"2207df9e-f21e-4c30-98d5-248ae99c245e\") " pod="openshift-ovn-kubernetes/ovnkube-node-cxws9"
Mar 18 08:48:57.312925 master-0 kubenswrapper[7620]: I0318 08:48:57.312830 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2207df9e-f21e-4c30-98d5-248ae99c245e-etc-openvswitch\") pod \"ovnkube-node-cxws9\" (UID: \"2207df9e-f21e-4c30-98d5-248ae99c245e\") " pod="openshift-ovn-kubernetes/ovnkube-node-cxws9"
Mar 18 08:48:57.312925 master-0 kubenswrapper[7620]: I0318 08:48:57.312836 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/34f2ec5e-cb68-415b-a9f2-5b7f10fa9bbe-marketplace-operator-metrics\") pod \"marketplace-operator-89ccd998f-bcwsv\" (UID: \"34f2ec5e-cb68-415b-a9f2-5b7f10fa9bbe\") " pod="openshift-marketplace/marketplace-operator-89ccd998f-bcwsv"
Mar 18 08:48:57.312925 master-0 kubenswrapper[7620]: E0318 08:48:57.312820 7620 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: secret "metrics-daemon-secret" not found
Mar 18 08:48:57.313072 master-0 kubenswrapper[7620]: I0318 08:48:57.312929 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/2207df9e-f21e-4c30-98d5-248ae99c245e-host-cni-bin\") pod \"ovnkube-node-cxws9\" (UID: \"2207df9e-f21e-4c30-98d5-248ae99c245e\") " pod="openshift-ovn-kubernetes/ovnkube-node-cxws9"
Mar 18 08:48:57.313072 master-0 kubenswrapper[7620]: E0318 08:48:57.312953 7620 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found
Mar 18 08:48:57.313072 master-0 kubenswrapper[7620]: I0318 08:48:57.312957 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/7962fb40-1170-4c00-b1bf-92966aeae807-image-registry-operator-tls\") pod \"cluster-image-registry-operator-5549dc66cb-vxsth\" (UID: \"7962fb40-1170-4c00-b1bf-92966aeae807\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-vxsth"
Mar 18 08:48:57.313072 master-0 kubenswrapper[7620]: E0318 08:48:57.313018 7620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d858bfd6-e69b-4c93-a6d7-95cc0fc3ca29-metrics-certs podName:d858bfd6-e69b-4c93-a6d7-95cc0fc3ca29 nodeName:}" failed. No retries permitted until 2026-03-18 08:48:57.812979937 +0000 UTC m=+1.807761869 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/d858bfd6-e69b-4c93-a6d7-95cc0fc3ca29-metrics-certs") pod "network-metrics-daemon-6x85n" (UID: "d858bfd6-e69b-4c93-a6d7-95cc0fc3ca29") : secret "metrics-daemon-secret" not found
Mar 18 08:48:57.313072 master-0 kubenswrapper[7620]: I0318 08:48:57.313029 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/2207df9e-f21e-4c30-98d5-248ae99c245e-host-cni-bin\") pod \"ovnkube-node-cxws9\" (UID: \"2207df9e-f21e-4c30-98d5-248ae99c245e\") " pod="openshift-ovn-kubernetes/ovnkube-node-cxws9"
Mar 18 08:48:57.314158 master-0 kubenswrapper[7620]: E0318 08:48:57.313100 7620 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found
Mar 18 08:48:57.314158 master-0 kubenswrapper[7620]: I0318 08:48:57.313092 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2207df9e-f21e-4c30-98d5-248ae99c245e-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-cxws9\" (UID: \"2207df9e-f21e-4c30-98d5-248ae99c245e\") " pod="openshift-ovn-kubernetes/ovnkube-node-cxws9"
Mar 18 08:48:57.314158 master-0 kubenswrapper[7620]: E0318 08:48:57.313156 7620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/34f2ec5e-cb68-415b-a9f2-5b7f10fa9bbe-marketplace-operator-metrics podName:34f2ec5e-cb68-415b-a9f2-5b7f10fa9bbe nodeName:}" failed. No retries permitted until 2026-03-18 08:48:57.813129962 +0000 UTC m=+1.807911714 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/34f2ec5e-cb68-415b-a9f2-5b7f10fa9bbe-marketplace-operator-metrics") pod "marketplace-operator-89ccd998f-bcwsv" (UID: "34f2ec5e-cb68-415b-a9f2-5b7f10fa9bbe") : secret "marketplace-operator-metrics" not found
Mar 18 08:48:57.314158 master-0 kubenswrapper[7620]: I0318 08:48:57.313185 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2207df9e-f21e-4c30-98d5-248ae99c245e-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-cxws9\" (UID: \"2207df9e-f21e-4c30-98d5-248ae99c245e\") " pod="openshift-ovn-kubernetes/ovnkube-node-cxws9"
Mar 18 08:48:57.314158 master-0 kubenswrapper[7620]: E0318 08:48:57.313203 7620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7962fb40-1170-4c00-b1bf-92966aeae807-image-registry-operator-tls podName:7962fb40-1170-4c00-b1bf-92966aeae807 nodeName:}" failed. No retries permitted until 2026-03-18 08:48:57.813197184 +0000 UTC m=+1.807978936 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/7962fb40-1170-4c00-b1bf-92966aeae807-image-registry-operator-tls") pod "cluster-image-registry-operator-5549dc66cb-vxsth" (UID: "7962fb40-1170-4c00-b1bf-92966aeae807") : secret "image-registry-operator-tls" not found
Mar 18 08:48:57.314158 master-0 kubenswrapper[7620]: I0318 08:48:57.313270 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4-multus-cni-dir\") pod \"multus-bpf5c\" (UID: \"fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4\") " pod="openshift-multus/multus-bpf5c"
Mar 18 08:48:57.314158 master-0 kubenswrapper[7620]: I0318 08:48:57.313358 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/2207df9e-f21e-4c30-98d5-248ae99c245e-node-log\") pod \"ovnkube-node-cxws9\" (UID: \"2207df9e-f21e-4c30-98d5-248ae99c245e\") " pod="openshift-ovn-kubernetes/ovnkube-node-cxws9"
Mar 18 08:48:57.314158 master-0 kubenswrapper[7620]: I0318 08:48:57.313410 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4-cnibin\") pod \"multus-bpf5c\" (UID: \"fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4\") " pod="openshift-multus/multus-bpf5c"
Mar 18 08:48:57.314158 master-0 kubenswrapper[7620]: I0318 08:48:57.313458 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/2207df9e-f21e-4c30-98d5-248ae99c245e-node-log\") pod \"ovnkube-node-cxws9\" (UID: \"2207df9e-f21e-4c30-98d5-248ae99c245e\") " pod="openshift-ovn-kubernetes/ovnkube-node-cxws9"
Mar 18 08:48:57.314158 master-0 kubenswrapper[7620]: I0318 08:48:57.313457 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/866c259c-7661-4a80-873b-6fd625218665-iptables-alerter-script\") pod \"iptables-alerter-9mkgd\" (UID: \"866c259c-7661-4a80-873b-6fd625218665\") " pod="openshift-network-operator/iptables-alerter-9mkgd"
Mar 18 08:48:57.314158 master-0 kubenswrapper[7620]: I0318 08:48:57.313370 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4-multus-cni-dir\") pod \"multus-bpf5c\" (UID: \"fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4\") " pod="openshift-multus/multus-bpf5c"
Mar 18 08:48:57.314158 master-0 kubenswrapper[7620]: I0318 08:48:57.313532 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/2207df9e-f21e-4c30-98d5-248ae99c245e-host-run-netns\") pod \"ovnkube-node-cxws9\" (UID: \"2207df9e-f21e-4c30-98d5-248ae99c245e\") " pod="openshift-ovn-kubernetes/ovnkube-node-cxws9"
Mar 18 08:48:57.314158 master-0 kubenswrapper[7620]: I0318 08:48:57.313566 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/2207df9e-f21e-4c30-98d5-248ae99c245e-host-run-netns\") pod \"ovnkube-node-cxws9\" (UID: \"2207df9e-f21e-4c30-98d5-248ae99c245e\") " pod="openshift-ovn-kubernetes/ovnkube-node-cxws9"
Mar 18 08:48:57.314158 master-0 kubenswrapper[7620]: I0318 08:48:57.313606 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4-cnibin\") pod \"multus-bpf5c\" (UID: \"fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4\") " pod="openshift-multus/multus-bpf5c"
Mar 18 08:48:57.314158 master-0 kubenswrapper[7620]: I0318 08:48:57.313664 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4-host-run-k8s-cni-cncf-io\") pod \"multus-bpf5c\" (UID: \"fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4\") " pod="openshift-multus/multus-bpf5c"
Mar 18 08:48:57.314158 master-0 kubenswrapper[7620]: I0318 08:48:57.313702 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/3d9fe248-ba87-47e3-911a-1b2b112b5683-srv-cert\") pod \"olm-operator-5c9796789-sl5kr\" (UID: \"3d9fe248-ba87-47e3-911a-1b2b112b5683\") " pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-sl5kr"
Mar 18 08:48:57.314158 master-0 kubenswrapper[7620]: I0318 08:48:57.313725 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/07a4fd92-0fd1-4688-b2db-de615d75971e-host-etc-kube\") pod \"network-operator-7bd846bfc4-5r5r4\" (UID: \"07a4fd92-0fd1-4688-b2db-de615d75971e\") " pod="openshift-network-operator/network-operator-7bd846bfc4-5r5r4"
Mar 18 08:48:57.314158 master-0 kubenswrapper[7620]: I0318 08:48:57.313817 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/866c259c-7661-4a80-873b-6fd625218665-iptables-alerter-script\") pod \"iptables-alerter-9mkgd\" (UID: \"866c259c-7661-4a80-873b-6fd625218665\") " pod="openshift-network-operator/iptables-alerter-9mkgd"
Mar 18 08:48:57.314158 master-0 kubenswrapper[7620]: I0318 08:48:57.314003 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4-host-run-k8s-cni-cncf-io\") pod \"multus-bpf5c\" (UID: \"fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4\") " pod="openshift-multus/multus-bpf5c"
Mar 18 08:48:57.314158 master-0 kubenswrapper[7620]: I0318 08:48:57.314087 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/e025d334-20e7-491f-8027-194251398747-metrics-tls\") pod \"dns-operator-9c5679d8f-b9pn7\" (UID: \"e025d334-20e7-491f-8027-194251398747\") " pod="openshift-dns-operator/dns-operator-9c5679d8f-b9pn7"
Mar 18 08:48:57.314158 master-0 kubenswrapper[7620]: I0318 08:48:57.314100 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/07a4fd92-0fd1-4688-b2db-de615d75971e-host-etc-kube\") pod \"network-operator-7bd846bfc4-5r5r4\" (UID: \"07a4fd92-0fd1-4688-b2db-de615d75971e\") " pod="openshift-network-operator/network-operator-7bd846bfc4-5r5r4"
Mar 18 08:48:57.314158 master-0 kubenswrapper[7620]: I0318 08:48:57.314124 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3d0b7f60-c32e-48a6-b9e9-87c8f018367d-serving-cert\") pod \"cluster-version-operator-56d8475767-2xjqg\" (UID: \"3d0b7f60-c32e-48a6-b9e9-87c8f018367d\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-2xjqg"
Mar 18 08:48:57.314158 master-0 kubenswrapper[7620]: E0318 08:48:57.314189 7620 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found
Mar 18 08:48:57.315156 master-0 kubenswrapper[7620]: I0318 08:48:57.314210 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/2207df9e-f21e-4c30-98d5-248ae99c245e-host-kubelet\") pod \"ovnkube-node-cxws9\" (UID: \"2207df9e-f21e-4c30-98d5-248ae99c245e\") " pod="openshift-ovn-kubernetes/ovnkube-node-cxws9"
Mar 18 08:48:57.315156 master-0 kubenswrapper[7620]: E0318 08:48:57.314220 7620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3d0b7f60-c32e-48a6-b9e9-87c8f018367d-serving-cert podName:3d0b7f60-c32e-48a6-b9e9-87c8f018367d nodeName:}" failed. No retries permitted until 2026-03-18 08:48:57.814210334 +0000 UTC m=+1.808992086 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/3d0b7f60-c32e-48a6-b9e9-87c8f018367d-serving-cert") pod "cluster-version-operator-56d8475767-2xjqg" (UID: "3d0b7f60-c32e-48a6-b9e9-87c8f018367d") : secret "cluster-version-operator-serving-cert" not found
Mar 18 08:48:57.315156 master-0 kubenswrapper[7620]: E0318 08:48:57.314263 7620 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found
Mar 18 08:48:57.315156 master-0 kubenswrapper[7620]: E0318 08:48:57.314287 7620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e025d334-20e7-491f-8027-194251398747-metrics-tls podName:e025d334-20e7-491f-8027-194251398747 nodeName:}" failed. No retries permitted until 2026-03-18 08:48:57.814280546 +0000 UTC m=+1.809062298 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/e025d334-20e7-491f-8027-194251398747-metrics-tls") pod "dns-operator-9c5679d8f-b9pn7" (UID: "e025d334-20e7-491f-8027-194251398747") : secret "metrics-tls" not found
Mar 18 08:48:57.315156 master-0 kubenswrapper[7620]: I0318 08:48:57.314279 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4-os-release\") pod \"multus-bpf5c\" (UID: \"fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4\") " pod="openshift-multus/multus-bpf5c"
Mar 18 08:48:57.315156 master-0 kubenswrapper[7620]: I0318 08:48:57.314335 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/2207df9e-f21e-4c30-98d5-248ae99c245e-host-kubelet\") pod \"ovnkube-node-cxws9\" (UID: \"2207df9e-f21e-4c30-98d5-248ae99c245e\") " pod="openshift-ovn-kubernetes/ovnkube-node-cxws9"
Mar 18 08:48:57.315156 master-0 kubenswrapper[7620]: I0318 08:48:57.314369 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/59d50dd5-6793-4f96-a769-31e086ecc7e4-package-server-manager-serving-cert\") pod \"package-server-manager-7b95f86987-q8ff6\" (UID: \"59d50dd5-6793-4f96-a769-31e086ecc7e4\") " pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-q8ff6"
Mar 18 08:48:57.315156 master-0 kubenswrapper[7620]: I0318 08:48:57.314374 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4-os-release\") pod \"multus-bpf5c\" (UID: \"fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4\") " pod="openshift-multus/multus-bpf5c"
Mar 18 08:48:57.315156 master-0 kubenswrapper[7620]: I0318 08:48:57.314397 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/3d0b7f60-c32e-48a6-b9e9-87c8f018367d-etc-ssl-certs\") pod \"cluster-version-operator-56d8475767-2xjqg\" (UID: \"3d0b7f60-c32e-48a6-b9e9-87c8f018367d\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-2xjqg"
Mar 18 08:48:57.315156 master-0 kubenswrapper[7620]: I0318 08:48:57.314425 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4-multus-socket-dir-parent\") pod \"multus-bpf5c\" (UID: \"fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4\") " pod="openshift-multus/multus-bpf5c"
Mar 18 08:48:57.315156 master-0 kubenswrapper[7620]: I0318 08:48:57.314451 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4-host-var-lib-kubelet\") pod \"multus-bpf5c\" (UID: \"fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4\") " pod="openshift-multus/multus-bpf5c"
Mar 18 08:48:57.315156 master-0 kubenswrapper[7620]: I0318 08:48:57.314493 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/2207df9e-f21e-4c30-98d5-248ae99c245e-host-slash\") pod \"ovnkube-node-cxws9\" (UID: \"2207df9e-f21e-4c30-98d5-248ae99c245e\") " pod="openshift-ovn-kubernetes/ovnkube-node-cxws9"
Mar 18 08:48:57.315156 master-0 kubenswrapper[7620]: E0318 08:48:57.314509 7620 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found
Mar 18 08:48:57.315156 master-0 kubenswrapper[7620]: E0318 08:48:57.314508 7620 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found
Mar 18 08:48:57.315156 master-0 kubenswrapper[7620]: I0318 08:48:57.314563 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/bd8ee9ae-4765-46bc-84d7-b5857fc3fb4a-apiservice-cert\") pod \"cluster-node-tuning-operator-598fbc5f8f-tj9b9\" (UID: \"bd8ee9ae-4765-46bc-84d7-b5857fc3fb4a\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-tj9b9"
Mar 18 08:48:57.315156 master-0 kubenswrapper[7620]: E0318 08:48:57.314632 7620 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found
Mar 18 08:48:57.315156 master-0 kubenswrapper[7620]: E0318 08:48:57.314640 7620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/59d50dd5-6793-4f96-a769-31e086ecc7e4-package-server-manager-serving-cert podName:59d50dd5-6793-4f96-a769-31e086ecc7e4 nodeName:}" failed. No retries permitted until 2026-03-18 08:48:57.814573745 +0000 UTC m=+1.809355537 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/59d50dd5-6793-4f96-a769-31e086ecc7e4-package-server-manager-serving-cert") pod "package-server-manager-7b95f86987-q8ff6" (UID: "59d50dd5-6793-4f96-a769-31e086ecc7e4") : secret "package-server-manager-serving-cert" not found
Mar 18 08:48:57.315156 master-0 kubenswrapper[7620]: E0318 08:48:57.314692 7620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bd8ee9ae-4765-46bc-84d7-b5857fc3fb4a-apiservice-cert podName:bd8ee9ae-4765-46bc-84d7-b5857fc3fb4a nodeName:}" failed. No retries permitted until 2026-03-18 08:48:57.814683658 +0000 UTC m=+1.809465620 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/bd8ee9ae-4765-46bc-84d7-b5857fc3fb4a-apiservice-cert") pod "cluster-node-tuning-operator-598fbc5f8f-tj9b9" (UID: "bd8ee9ae-4765-46bc-84d7-b5857fc3fb4a") : secret "performance-addon-operator-webhook-cert" not found
Mar 18 08:48:57.315156 master-0 kubenswrapper[7620]: I0318 08:48:57.314693 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4-multus-socket-dir-parent\") pod \"multus-bpf5c\" (UID: \"fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4\") " pod="openshift-multus/multus-bpf5c"
Mar 18 08:48:57.315156 master-0 kubenswrapper[7620]: E0318 08:48:57.314731 7620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3d9fe248-ba87-47e3-911a-1b2b112b5683-srv-cert podName:3d9fe248-ba87-47e3-911a-1b2b112b5683 nodeName:}" failed. No retries permitted until 2026-03-18 08:48:57.814709049 +0000 UTC m=+1.809491081 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/3d9fe248-ba87-47e3-911a-1b2b112b5683-srv-cert") pod "olm-operator-5c9796789-sl5kr" (UID: "3d9fe248-ba87-47e3-911a-1b2b112b5683") : secret "olm-operator-serving-cert" not found
Mar 18 08:48:57.315156 master-0 kubenswrapper[7620]: I0318 08:48:57.314740 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4-host-var-lib-kubelet\") pod \"multus-bpf5c\" (UID: \"fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4\") " pod="openshift-multus/multus-bpf5c"
Mar 18 08:48:57.315156 master-0 kubenswrapper[7620]: I0318 08:48:57.314737 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/2207df9e-f21e-4c30-98d5-248ae99c245e-host-slash\") pod \"ovnkube-node-cxws9\" (UID: \"2207df9e-f21e-4c30-98d5-248ae99c245e\") " pod="openshift-ovn-kubernetes/ovnkube-node-cxws9"
Mar 18 08:48:57.315156 master-0 kubenswrapper[7620]: I0318 08:48:57.314785 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/3d0b7f60-c32e-48a6-b9e9-87c8f018367d-etc-ssl-certs\") pod \"cluster-version-operator-56d8475767-2xjqg\" (UID: \"3d0b7f60-c32e-48a6-b9e9-87c8f018367d\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-2xjqg"
Mar 18 08:48:57.315156 master-0 kubenswrapper[7620]: I0318 08:48:57.314694 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/94e9e4fc-bfaa-4ba5-ad6d-fe76b91932e9-metrics-tls\") pod \"ingress-operator-66b84d69b-7h94d\" (UID: \"94e9e4fc-bfaa-4ba5-ad6d-fe76b91932e9\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-7h94d"
Mar 18 08:48:57.315156 master-0 kubenswrapper[7620]: I0318 08:48:57.314875 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/e7b72267-fc08-41ed-a92b-9fca7372aba6-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-58845fbb57-nc7hf\" (UID: \"e7b72267-fc08-41ed-a92b-9fca7372aba6\") " pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-nc7hf"
Mar 18 08:48:57.315156 master-0 kubenswrapper[7620]: I0318 08:48:57.314906 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4-host-run-multus-certs\") pod \"multus-bpf5c\" (UID: \"fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4\") " pod="openshift-multus/multus-bpf5c"
Mar 18 08:48:57.315156 master-0 kubenswrapper[7620]: E0318 08:48:57.314919 7620 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found
Mar 18 08:48:57.315156 master-0 kubenswrapper[7620]: I0318 08:48:57.314962 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4-host-run-multus-certs\") pod \"multus-bpf5c\" (UID: \"fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4\") " pod="openshift-multus/multus-bpf5c"
Mar 18 08:48:57.315156 master-0 kubenswrapper[7620]: E0318 08:48:57.314974 7620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/94e9e4fc-bfaa-4ba5-ad6d-fe76b91932e9-metrics-tls podName:94e9e4fc-bfaa-4ba5-ad6d-fe76b91932e9 nodeName:}" failed. No retries permitted until 2026-03-18 08:48:57.814950666 +0000 UTC m=+1.809732428 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/94e9e4fc-bfaa-4ba5-ad6d-fe76b91932e9-metrics-tls") pod "ingress-operator-66b84d69b-7h94d" (UID: "94e9e4fc-bfaa-4ba5-ad6d-fe76b91932e9") : secret "metrics-tls" not found
Mar 18 08:48:57.315156 master-0 kubenswrapper[7620]: I0318 08:48:57.314997 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/159a26f5-3cfc-4db2-88e9-bff5d8a613fc-webhook-certs\") pod \"multus-admission-controller-5dbbb8b86f-2cf64\" (UID: \"159a26f5-3cfc-4db2-88e9-bff5d8a613fc\") " pod="openshift-multus/multus-admission-controller-5dbbb8b86f-2cf64"
Mar 18 08:48:57.315156 master-0 kubenswrapper[7620]: E0318 08:48:57.315005 7620 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found
Mar 18 08:48:57.315156 master-0 kubenswrapper[7620]: E0318 08:48:57.315060 7620 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found
Mar 18 08:48:57.315156 master-0 kubenswrapper[7620]: E0318 08:48:57.315066 7620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e7b72267-fc08-41ed-a92b-9fca7372aba6-cluster-monitoring-operator-tls podName:e7b72267-fc08-41ed-a92b-9fca7372aba6 nodeName:}" failed. No retries permitted until 2026-03-18 08:48:57.815045199 +0000 UTC m=+1.809827181 (durationBeforeRetry 500ms).
Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/e7b72267-fc08-41ed-a92b-9fca7372aba6-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-58845fbb57-nc7hf" (UID: "e7b72267-fc08-41ed-a92b-9fca7372aba6") : secret "cluster-monitoring-operator-tls" not found Mar 18 08:48:57.315156 master-0 kubenswrapper[7620]: E0318 08:48:57.315090 7620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/159a26f5-3cfc-4db2-88e9-bff5d8a613fc-webhook-certs podName:159a26f5-3cfc-4db2-88e9-bff5d8a613fc nodeName:}" failed. No retries permitted until 2026-03-18 08:48:57.81508191 +0000 UTC m=+1.809863672 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/159a26f5-3cfc-4db2-88e9-bff5d8a613fc-webhook-certs") pod "multus-admission-controller-5dbbb8b86f-2cf64" (UID: "159a26f5-3cfc-4db2-88e9-bff5d8a613fc") : secret "multus-admission-controller-secret" not found Mar 18 08:48:57.315156 master-0 kubenswrapper[7620]: I0318 08:48:57.315117 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4-host-run-netns\") pod \"multus-bpf5c\" (UID: \"fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4\") " pod="openshift-multus/multus-bpf5c" Mar 18 08:48:57.315156 master-0 kubenswrapper[7620]: I0318 08:48:57.315147 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4-host-var-lib-cni-multus\") pod \"multus-bpf5c\" (UID: \"fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4\") " pod="openshift-multus/multus-bpf5c" Mar 18 08:48:57.315156 master-0 kubenswrapper[7620]: I0318 08:48:57.315172 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: 
\"kubernetes.io/host-path/2207df9e-f21e-4c30-98d5-248ae99c245e-run-ovn\") pod \"ovnkube-node-cxws9\" (UID: \"2207df9e-f21e-4c30-98d5-248ae99c245e\") " pod="openshift-ovn-kubernetes/ovnkube-node-cxws9" Mar 18 08:48:57.316569 master-0 kubenswrapper[7620]: I0318 08:48:57.315222 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4-host-var-lib-cni-multus\") pod \"multus-bpf5c\" (UID: \"fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4\") " pod="openshift-multus/multus-bpf5c" Mar 18 08:48:57.316569 master-0 kubenswrapper[7620]: I0318 08:48:57.315244 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4-host-run-netns\") pod \"multus-bpf5c\" (UID: \"fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4\") " pod="openshift-multus/multus-bpf5c" Mar 18 08:48:57.316569 master-0 kubenswrapper[7620]: I0318 08:48:57.315279 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/f9fa104a-4979-4023-8d7e-a965f11bc7db-system-cni-dir\") pod \"multus-additional-cni-plugins-xpzrz\" (UID: \"f9fa104a-4979-4023-8d7e-a965f11bc7db\") " pod="openshift-multus/multus-additional-cni-plugins-xpzrz" Mar 18 08:48:57.316569 master-0 kubenswrapper[7620]: I0318 08:48:57.315261 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/2207df9e-f21e-4c30-98d5-248ae99c245e-run-ovn\") pod \"ovnkube-node-cxws9\" (UID: \"2207df9e-f21e-4c30-98d5-248ae99c245e\") " pod="openshift-ovn-kubernetes/ovnkube-node-cxws9" Mar 18 08:48:57.316569 master-0 kubenswrapper[7620]: I0318 08:48:57.315323 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: 
\"kubernetes.io/host-path/f9fa104a-4979-4023-8d7e-a965f11bc7db-system-cni-dir\") pod \"multus-additional-cni-plugins-xpzrz\" (UID: \"f9fa104a-4979-4023-8d7e-a965f11bc7db\") " pod="openshift-multus/multus-additional-cni-plugins-xpzrz" Mar 18 08:48:57.316569 master-0 kubenswrapper[7620]: I0318 08:48:57.315358 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2207df9e-f21e-4c30-98d5-248ae99c245e-run-openvswitch\") pod \"ovnkube-node-cxws9\" (UID: \"2207df9e-f21e-4c30-98d5-248ae99c245e\") " pod="openshift-ovn-kubernetes/ovnkube-node-cxws9" Mar 18 08:48:57.316569 master-0 kubenswrapper[7620]: I0318 08:48:57.315374 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/f9fa104a-4979-4023-8d7e-a965f11bc7db-tuning-conf-dir\") pod \"multus-additional-cni-plugins-xpzrz\" (UID: \"f9fa104a-4979-4023-8d7e-a965f11bc7db\") " pod="openshift-multus/multus-additional-cni-plugins-xpzrz" Mar 18 08:48:57.316569 master-0 kubenswrapper[7620]: I0318 08:48:57.315391 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4-etc-kubernetes\") pod \"multus-bpf5c\" (UID: \"fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4\") " pod="openshift-multus/multus-bpf5c" Mar 18 08:48:57.316569 master-0 kubenswrapper[7620]: I0318 08:48:57.315414 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b065df33-7911-456e-b3a2-1f8c8d53e053-srv-cert\") pod \"catalog-operator-68f85b4d6c-swdsh\" (UID: \"b065df33-7911-456e-b3a2-1f8c8d53e053\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-swdsh" Mar 18 08:48:57.316569 master-0 kubenswrapper[7620]: I0318 08:48:57.315455 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2207df9e-f21e-4c30-98d5-248ae99c245e-run-openvswitch\") pod \"ovnkube-node-cxws9\" (UID: \"2207df9e-f21e-4c30-98d5-248ae99c245e\") " pod="openshift-ovn-kubernetes/ovnkube-node-cxws9" Mar 18 08:48:57.316569 master-0 kubenswrapper[7620]: I0318 08:48:57.315487 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/bd8ee9ae-4765-46bc-84d7-b5857fc3fb4a-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-598fbc5f8f-tj9b9\" (UID: \"bd8ee9ae-4765-46bc-84d7-b5857fc3fb4a\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-tj9b9" Mar 18 08:48:57.316569 master-0 kubenswrapper[7620]: I0318 08:48:57.315508 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/f9fa104a-4979-4023-8d7e-a965f11bc7db-cnibin\") pod \"multus-additional-cni-plugins-xpzrz\" (UID: \"f9fa104a-4979-4023-8d7e-a965f11bc7db\") " pod="openshift-multus/multus-additional-cni-plugins-xpzrz" Mar 18 08:48:57.316569 master-0 kubenswrapper[7620]: I0318 08:48:57.315514 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/f9fa104a-4979-4023-8d7e-a965f11bc7db-tuning-conf-dir\") pod \"multus-additional-cni-plugins-xpzrz\" (UID: \"f9fa104a-4979-4023-8d7e-a965f11bc7db\") " pod="openshift-multus/multus-additional-cni-plugins-xpzrz" Mar 18 08:48:57.316569 master-0 kubenswrapper[7620]: I0318 08:48:57.315528 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4-hostroot\") pod \"multus-bpf5c\" (UID: \"fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4\") " pod="openshift-multus/multus-bpf5c" Mar 18 08:48:57.316569 master-0 kubenswrapper[7620]: I0318 08:48:57.315594 7620 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/f9fa104a-4979-4023-8d7e-a965f11bc7db-os-release\") pod \"multus-additional-cni-plugins-xpzrz\" (UID: \"f9fa104a-4979-4023-8d7e-a965f11bc7db\") " pod="openshift-multus/multus-additional-cni-plugins-xpzrz" Mar 18 08:48:57.316569 master-0 kubenswrapper[7620]: I0318 08:48:57.315603 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4-etc-kubernetes\") pod \"multus-bpf5c\" (UID: \"fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4\") " pod="openshift-multus/multus-bpf5c" Mar 18 08:48:57.316569 master-0 kubenswrapper[7620]: I0318 08:48:57.315612 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4-host-var-lib-cni-bin\") pod \"multus-bpf5c\" (UID: \"fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4\") " pod="openshift-multus/multus-bpf5c" Mar 18 08:48:57.316569 master-0 kubenswrapper[7620]: I0318 08:48:57.315634 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4-host-var-lib-cni-bin\") pod \"multus-bpf5c\" (UID: \"fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4\") " pod="openshift-multus/multus-bpf5c" Mar 18 08:48:57.316569 master-0 kubenswrapper[7620]: I0318 08:48:57.315692 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/f9fa104a-4979-4023-8d7e-a965f11bc7db-os-release\") pod \"multus-additional-cni-plugins-xpzrz\" (UID: \"f9fa104a-4979-4023-8d7e-a965f11bc7db\") " pod="openshift-multus/multus-additional-cni-plugins-xpzrz" Mar 18 08:48:57.316569 master-0 kubenswrapper[7620]: I0318 08:48:57.315698 7620 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/2207df9e-f21e-4c30-98d5-248ae99c245e-systemd-units\") pod \"ovnkube-node-cxws9\" (UID: \"2207df9e-f21e-4c30-98d5-248ae99c245e\") " pod="openshift-ovn-kubernetes/ovnkube-node-cxws9" Mar 18 08:48:57.316569 master-0 kubenswrapper[7620]: I0318 08:48:57.315715 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4-hostroot\") pod \"multus-bpf5c\" (UID: \"fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4\") " pod="openshift-multus/multus-bpf5c" Mar 18 08:48:57.316569 master-0 kubenswrapper[7620]: E0318 08:48:57.315770 7620 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: secret "catalog-operator-serving-cert" not found Mar 18 08:48:57.316569 master-0 kubenswrapper[7620]: I0318 08:48:57.315791 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/f9fa104a-4979-4023-8d7e-a965f11bc7db-cnibin\") pod \"multus-additional-cni-plugins-xpzrz\" (UID: \"f9fa104a-4979-4023-8d7e-a965f11bc7db\") " pod="openshift-multus/multus-additional-cni-plugins-xpzrz" Mar 18 08:48:57.316569 master-0 kubenswrapper[7620]: E0318 08:48:57.315805 7620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b065df33-7911-456e-b3a2-1f8c8d53e053-srv-cert podName:b065df33-7911-456e-b3a2-1f8c8d53e053 nodeName:}" failed. No retries permitted until 2026-03-18 08:48:57.815788061 +0000 UTC m=+1.810569813 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/b065df33-7911-456e-b3a2-1f8c8d53e053-srv-cert") pod "catalog-operator-68f85b4d6c-swdsh" (UID: "b065df33-7911-456e-b3a2-1f8c8d53e053") : secret "catalog-operator-serving-cert" not found Mar 18 08:48:57.316569 master-0 kubenswrapper[7620]: E0318 08:48:57.315836 7620 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found Mar 18 08:48:57.316569 master-0 kubenswrapper[7620]: E0318 08:48:57.315878 7620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bd8ee9ae-4765-46bc-84d7-b5857fc3fb4a-node-tuning-operator-tls podName:bd8ee9ae-4765-46bc-84d7-b5857fc3fb4a nodeName:}" failed. No retries permitted until 2026-03-18 08:48:57.815869734 +0000 UTC m=+1.810651726 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/bd8ee9ae-4765-46bc-84d7-b5857fc3fb4a-node-tuning-operator-tls") pod "cluster-node-tuning-operator-598fbc5f8f-tj9b9" (UID: "bd8ee9ae-4765-46bc-84d7-b5857fc3fb4a") : secret "node-tuning-operator-tls" not found Mar 18 08:48:57.316569 master-0 kubenswrapper[7620]: I0318 08:48:57.315908 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/2207df9e-f21e-4c30-98d5-248ae99c245e-systemd-units\") pod \"ovnkube-node-cxws9\" (UID: \"2207df9e-f21e-4c30-98d5-248ae99c245e\") " pod="openshift-ovn-kubernetes/ovnkube-node-cxws9" Mar 18 08:48:57.316569 master-0 kubenswrapper[7620]: I0318 08:48:57.315915 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/2207df9e-f21e-4c30-98d5-248ae99c245e-run-systemd\") pod \"ovnkube-node-cxws9\" (UID: \"2207df9e-f21e-4c30-98d5-248ae99c245e\") " pod="openshift-ovn-kubernetes/ovnkube-node-cxws9" Mar 18 
08:48:57.316569 master-0 kubenswrapper[7620]: I0318 08:48:57.315947 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/2207df9e-f21e-4c30-98d5-248ae99c245e-run-systemd\") pod \"ovnkube-node-cxws9\" (UID: \"2207df9e-f21e-4c30-98d5-248ae99c245e\") " pod="openshift-ovn-kubernetes/ovnkube-node-cxws9" Mar 18 08:48:57.316569 master-0 kubenswrapper[7620]: I0318 08:48:57.315971 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2207df9e-f21e-4c30-98d5-248ae99c245e-host-cni-netd\") pod \"ovnkube-node-cxws9\" (UID: \"2207df9e-f21e-4c30-98d5-248ae99c245e\") " pod="openshift-ovn-kubernetes/ovnkube-node-cxws9" Mar 18 08:48:57.316569 master-0 kubenswrapper[7620]: I0318 08:48:57.316048 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/3d0b7f60-c32e-48a6-b9e9-87c8f018367d-etc-cvo-updatepayloads\") pod \"cluster-version-operator-56d8475767-2xjqg\" (UID: \"3d0b7f60-c32e-48a6-b9e9-87c8f018367d\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-2xjqg" Mar 18 08:48:57.316569 master-0 kubenswrapper[7620]: I0318 08:48:57.316084 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/3d0b7f60-c32e-48a6-b9e9-87c8f018367d-etc-cvo-updatepayloads\") pod \"cluster-version-operator-56d8475767-2xjqg\" (UID: \"3d0b7f60-c32e-48a6-b9e9-87c8f018367d\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-2xjqg" Mar 18 08:48:57.316569 master-0 kubenswrapper[7620]: I0318 08:48:57.316062 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2207df9e-f21e-4c30-98d5-248ae99c245e-host-cni-netd\") pod \"ovnkube-node-cxws9\" (UID: 
\"2207df9e-f21e-4c30-98d5-248ae99c245e\") " pod="openshift-ovn-kubernetes/ovnkube-node-cxws9" Mar 18 08:48:57.323706 master-0 kubenswrapper[7620]: I0318 08:48:57.323667 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dfjmx\" (UniqueName: \"kubernetes.io/projected/772bc250-2e57-4ce0-883c-d44281fcb0be-kube-api-access-dfjmx\") pod \"openshift-controller-manager-operator-8c94f4649-r758j\" (UID: \"772bc250-2e57-4ce0-883c-d44281fcb0be\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-r758j" Mar 18 08:48:57.345996 master-0 kubenswrapper[7620]: I0318 08:48:57.345941 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vfjgn\" (UniqueName: \"kubernetes.io/projected/e2ade7e6-cecd-4e98-8f85-ea8219303d75-kube-api-access-vfjgn\") pod \"cluster-olm-operator-67dcd4998-zqxv2\" (UID: \"e2ade7e6-cecd-4e98-8f85-ea8219303d75\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-zqxv2" Mar 18 08:48:57.362312 master-0 kubenswrapper[7620]: I0318 08:48:57.362227 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s8prf\" (UniqueName: \"kubernetes.io/projected/fcf89a76-7a94-46d3-853e-68e986563764-kube-api-access-s8prf\") pod \"openshift-apiserver-operator-d65958b8-w4t7x\" (UID: \"fcf89a76-7a94-46d3-853e-68e986563764\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-w4t7x" Mar 18 08:48:57.386554 master-0 kubenswrapper[7620]: I0318 08:48:57.386498 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pz26d\" (UniqueName: \"kubernetes.io/projected/b065df33-7911-456e-b3a2-1f8c8d53e053-kube-api-access-pz26d\") pod \"catalog-operator-68f85b4d6c-swdsh\" (UID: \"b065df33-7911-456e-b3a2-1f8c8d53e053\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-swdsh" Mar 18 08:48:57.405255 master-0 kubenswrapper[7620]: I0318 08:48:57.405119 
7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2msp8\" (UniqueName: \"kubernetes.io/projected/34f2ec5e-cb68-415b-a9f2-5b7f10fa9bbe-kube-api-access-2msp8\") pod \"marketplace-operator-89ccd998f-bcwsv\" (UID: \"34f2ec5e-cb68-415b-a9f2-5b7f10fa9bbe\") " pod="openshift-marketplace/marketplace-operator-89ccd998f-bcwsv" Mar 18 08:48:57.418958 master-0 kubenswrapper[7620]: I0318 08:48:57.418920 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/866c259c-7661-4a80-873b-6fd625218665-host-slash\") pod \"iptables-alerter-9mkgd\" (UID: \"866c259c-7661-4a80-873b-6fd625218665\") " pod="openshift-network-operator/iptables-alerter-9mkgd" Mar 18 08:48:57.419142 master-0 kubenswrapper[7620]: I0318 08:48:57.419059 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/866c259c-7661-4a80-873b-6fd625218665-host-slash\") pod \"iptables-alerter-9mkgd\" (UID: \"866c259c-7661-4a80-873b-6fd625218665\") " pod="openshift-network-operator/iptables-alerter-9mkgd" Mar 18 08:48:57.423265 master-0 kubenswrapper[7620]: I0318 08:48:57.423234 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9hxtz\" (UniqueName: \"kubernetes.io/projected/159a26f5-3cfc-4db2-88e9-bff5d8a613fc-kube-api-access-9hxtz\") pod \"multus-admission-controller-5dbbb8b86f-2cf64\" (UID: \"159a26f5-3cfc-4db2-88e9-bff5d8a613fc\") " pod="openshift-multus/multus-admission-controller-5dbbb8b86f-2cf64" Mar 18 08:48:57.451933 master-0 kubenswrapper[7620]: I0318 08:48:57.451884 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x9w7l\" (UniqueName: \"kubernetes.io/projected/16d633c5-e0aa-4fb6-83e0-a2e976334406-kube-api-access-x9w7l\") pod \"network-node-identity-n5vqx\" (UID: \"16d633c5-e0aa-4fb6-83e0-a2e976334406\") " 
pod="openshift-network-node-identity/network-node-identity-n5vqx" Mar 18 08:48:57.464605 master-0 kubenswrapper[7620]: I0318 08:48:57.464565 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-svdhs\" (UniqueName: \"kubernetes.io/projected/ec11012b-536a-422f-afc4-d2d0fd4b67fb-kube-api-access-svdhs\") pod \"kube-storage-version-migrator-operator-6bb5bfb6fd-s7szg\" (UID: \"ec11012b-536a-422f-afc4-d2d0fd4b67fb\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-s7szg" Mar 18 08:48:57.474407 master-0 kubenswrapper[7620]: I0318 08:48:57.473533 7620 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 18 08:48:57.490004 master-0 kubenswrapper[7620]: I0318 08:48:57.489956 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x6zq8\" (UniqueName: \"kubernetes.io/projected/d858bfd6-e69b-4c93-a6d7-95cc0fc3ca29-kube-api-access-x6zq8\") pod \"network-metrics-daemon-6x85n\" (UID: \"d858bfd6-e69b-4c93-a6d7-95cc0fc3ca29\") " pod="openshift-multus/network-metrics-daemon-6x85n" Mar 18 08:48:57.505184 master-0 kubenswrapper[7620]: I0318 08:48:57.505145 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/260c8aa5-a288-4ee8-b671-f97e90a2f39c-kube-api-access\") pod \"kube-controller-manager-operator-ff989d6cc-fxn82\" (UID: \"260c8aa5-a288-4ee8-b671-f97e90a2f39c\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-fxn82" Mar 18 08:48:57.528968 master-0 kubenswrapper[7620]: I0318 08:48:57.528713 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jlwg9\" (UniqueName: \"kubernetes.io/projected/f9fa104a-4979-4023-8d7e-a965f11bc7db-kube-api-access-jlwg9\") pod \"multus-additional-cni-plugins-xpzrz\" (UID: \"f9fa104a-4979-4023-8d7e-a965f11bc7db\") " 
pod="openshift-multus/multus-additional-cni-plugins-xpzrz" Mar 18 08:48:57.543415 master-0 kubenswrapper[7620]: I0318 08:48:57.543355 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cj9fr\" (UniqueName: \"kubernetes.io/projected/2207df9e-f21e-4c30-98d5-248ae99c245e-kube-api-access-cj9fr\") pod \"ovnkube-node-cxws9\" (UID: \"2207df9e-f21e-4c30-98d5-248ae99c245e\") " pod="openshift-ovn-kubernetes/ovnkube-node-cxws9" Mar 18 08:48:57.569536 master-0 kubenswrapper[7620]: I0318 08:48:57.569486 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7962fb40-1170-4c00-b1bf-92966aeae807-bound-sa-token\") pod \"cluster-image-registry-operator-5549dc66cb-vxsth\" (UID: \"7962fb40-1170-4c00-b1bf-92966aeae807\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-vxsth" Mar 18 08:48:57.583671 master-0 kubenswrapper[7620]: I0318 08:48:57.583614 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hpl2c\" (UniqueName: \"kubernetes.io/projected/fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4-kube-api-access-hpl2c\") pod \"multus-bpf5c\" (UID: \"fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4\") " pod="openshift-multus/multus-bpf5c" Mar 18 08:48:57.604410 master-0 kubenswrapper[7620]: I0318 08:48:57.604352 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dwrdc\" (UniqueName: \"kubernetes.io/projected/e7b72267-fc08-41ed-a92b-9fca7372aba6-kube-api-access-dwrdc\") pod \"cluster-monitoring-operator-58845fbb57-nc7hf\" (UID: \"e7b72267-fc08-41ed-a92b-9fca7372aba6\") " pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-nc7hf" Mar 18 08:48:57.626392 master-0 kubenswrapper[7620]: I0318 08:48:57.626325 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wxxcn\" (UniqueName: 
\"kubernetes.io/projected/6fb1f871-9c24-48a1-a15a-a636b5bb687d-kube-api-access-wxxcn\") pod \"csi-snapshot-controller-operator-5f5d689c6b-j8kgj\" (UID: \"6fb1f871-9c24-48a1-a15a-a636b5bb687d\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-5f5d689c6b-j8kgj" Mar 18 08:48:57.644822 master-0 kubenswrapper[7620]: I0318 08:48:57.644759 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mlp7w\" (UniqueName: \"kubernetes.io/projected/59d50dd5-6793-4f96-a769-31e086ecc7e4-kube-api-access-mlp7w\") pod \"package-server-manager-7b95f86987-q8ff6\" (UID: \"59d50dd5-6793-4f96-a769-31e086ecc7e4\") " pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-q8ff6" Mar 18 08:48:57.665591 master-0 kubenswrapper[7620]: I0318 08:48:57.665389 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/94e9e4fc-bfaa-4ba5-ad6d-fe76b91932e9-bound-sa-token\") pod \"ingress-operator-66b84d69b-7h94d\" (UID: \"94e9e4fc-bfaa-4ba5-ad6d-fe76b91932e9\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-7h94d" Mar 18 08:48:57.684601 master-0 kubenswrapper[7620]: I0318 08:48:57.684529 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-glt6c\" (UniqueName: \"kubernetes.io/projected/edc7f629-4288-443b-aa8e-78bc6a09c848-kube-api-access-glt6c\") pod \"ovnkube-control-plane-57f769d897-bwqt7\" (UID: \"edc7f629-4288-443b-aa8e-78bc6a09c848\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-bwqt7" Mar 18 08:48:57.703484 master-0 kubenswrapper[7620]: I0318 08:48:57.703430 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8a6ab2be-d018-4fd5-bfbb-6b88aec28663-kube-api-access\") pod \"openshift-kube-scheduler-operator-dddff6458-9p4bb\" (UID: \"8a6ab2be-d018-4fd5-bfbb-6b88aec28663\") " 
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-9p4bb" Mar 18 08:48:57.723313 master-0 kubenswrapper[7620]: I0318 08:48:57.723256 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n959l\" (UniqueName: \"kubernetes.io/projected/573d3a02-e395-4816-963a-cd614ef53f75-kube-api-access-n959l\") pod \"openshift-config-operator-95bf4f4d-7kfrh\" (UID: \"573d3a02-e395-4816-963a-cd614ef53f75\") " pod="openshift-config-operator/openshift-config-operator-95bf4f4d-7kfrh" Mar 18 08:48:57.744426 master-0 kubenswrapper[7620]: I0318 08:48:57.744368 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-47p9x\" (UniqueName: \"kubernetes.io/projected/7962fb40-1170-4c00-b1bf-92966aeae807-kube-api-access-47p9x\") pod \"cluster-image-registry-operator-5549dc66cb-vxsth\" (UID: \"7962fb40-1170-4c00-b1bf-92966aeae807\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-vxsth" Mar 18 08:48:57.765263 master-0 kubenswrapper[7620]: I0318 08:48:57.765199 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tk9jq\" (UniqueName: \"kubernetes.io/projected/94e9e4fc-bfaa-4ba5-ad6d-fe76b91932e9-kube-api-access-tk9jq\") pod \"ingress-operator-66b84d69b-7h94d\" (UID: \"94e9e4fc-bfaa-4ba5-ad6d-fe76b91932e9\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-7h94d" Mar 18 08:48:57.787934 master-0 kubenswrapper[7620]: I0318 08:48:57.787896 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8lsw9\" (UniqueName: \"kubernetes.io/projected/bd8ee9ae-4765-46bc-84d7-b5857fc3fb4a-kube-api-access-8lsw9\") pod \"cluster-node-tuning-operator-598fbc5f8f-tj9b9\" (UID: \"bd8ee9ae-4765-46bc-84d7-b5857fc3fb4a\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-tj9b9" Mar 18 08:48:57.803753 master-0 kubenswrapper[7620]: I0318 08:48:57.803701 7620 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8w58l\" (UniqueName: \"kubernetes.io/projected/939efa41-8f40-4f91-bee4-0425aead9760-kube-api-access-8w58l\") pod \"etcd-operator-8544cbcf9c-f4jvq\" (UID: \"939efa41-8f40-4f91-bee4-0425aead9760\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-f4jvq" Mar 18 08:48:57.824923 master-0 kubenswrapper[7620]: I0318 08:48:57.824835 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/bd8ee9ae-4765-46bc-84d7-b5857fc3fb4a-apiservice-cert\") pod \"cluster-node-tuning-operator-598fbc5f8f-tj9b9\" (UID: \"bd8ee9ae-4765-46bc-84d7-b5857fc3fb4a\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-tj9b9" Mar 18 08:48:57.824923 master-0 kubenswrapper[7620]: I0318 08:48:57.824910 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/94e9e4fc-bfaa-4ba5-ad6d-fe76b91932e9-metrics-tls\") pod \"ingress-operator-66b84d69b-7h94d\" (UID: \"94e9e4fc-bfaa-4ba5-ad6d-fe76b91932e9\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-7h94d" Mar 18 08:48:57.824923 master-0 kubenswrapper[7620]: I0318 08:48:57.824941 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/e7b72267-fc08-41ed-a92b-9fca7372aba6-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-58845fbb57-nc7hf\" (UID: \"e7b72267-fc08-41ed-a92b-9fca7372aba6\") " pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-nc7hf" Mar 18 08:48:57.825268 master-0 kubenswrapper[7620]: I0318 08:48:57.824982 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/159a26f5-3cfc-4db2-88e9-bff5d8a613fc-webhook-certs\") pod 
\"multus-admission-controller-5dbbb8b86f-2cf64\" (UID: \"159a26f5-3cfc-4db2-88e9-bff5d8a613fc\") " pod="openshift-multus/multus-admission-controller-5dbbb8b86f-2cf64" Mar 18 08:48:57.825268 master-0 kubenswrapper[7620]: I0318 08:48:57.825014 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b065df33-7911-456e-b3a2-1f8c8d53e053-srv-cert\") pod \"catalog-operator-68f85b4d6c-swdsh\" (UID: \"b065df33-7911-456e-b3a2-1f8c8d53e053\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-swdsh" Mar 18 08:48:57.825268 master-0 kubenswrapper[7620]: I0318 08:48:57.825036 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/bd8ee9ae-4765-46bc-84d7-b5857fc3fb4a-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-598fbc5f8f-tj9b9\" (UID: \"bd8ee9ae-4765-46bc-84d7-b5857fc3fb4a\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-tj9b9" Mar 18 08:48:57.825268 master-0 kubenswrapper[7620]: I0318 08:48:57.825116 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d858bfd6-e69b-4c93-a6d7-95cc0fc3ca29-metrics-certs\") pod \"network-metrics-daemon-6x85n\" (UID: \"d858bfd6-e69b-4c93-a6d7-95cc0fc3ca29\") " pod="openshift-multus/network-metrics-daemon-6x85n" Mar 18 08:48:57.825268 master-0 kubenswrapper[7620]: I0318 08:48:57.825135 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/34f2ec5e-cb68-415b-a9f2-5b7f10fa9bbe-marketplace-operator-metrics\") pod \"marketplace-operator-89ccd998f-bcwsv\" (UID: \"34f2ec5e-cb68-415b-a9f2-5b7f10fa9bbe\") " pod="openshift-marketplace/marketplace-operator-89ccd998f-bcwsv" Mar 18 08:48:57.825268 master-0 kubenswrapper[7620]: I0318 
08:48:57.825156 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/7962fb40-1170-4c00-b1bf-92966aeae807-image-registry-operator-tls\") pod \"cluster-image-registry-operator-5549dc66cb-vxsth\" (UID: \"7962fb40-1170-4c00-b1bf-92966aeae807\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-vxsth" Mar 18 08:48:57.825268 master-0 kubenswrapper[7620]: I0318 08:48:57.825183 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/3d9fe248-ba87-47e3-911a-1b2b112b5683-srv-cert\") pod \"olm-operator-5c9796789-sl5kr\" (UID: \"3d9fe248-ba87-47e3-911a-1b2b112b5683\") " pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-sl5kr" Mar 18 08:48:57.825268 master-0 kubenswrapper[7620]: I0318 08:48:57.825207 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/e025d334-20e7-491f-8027-194251398747-metrics-tls\") pod \"dns-operator-9c5679d8f-b9pn7\" (UID: \"e025d334-20e7-491f-8027-194251398747\") " pod="openshift-dns-operator/dns-operator-9c5679d8f-b9pn7" Mar 18 08:48:57.825268 master-0 kubenswrapper[7620]: I0318 08:48:57.825224 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3d0b7f60-c32e-48a6-b9e9-87c8f018367d-serving-cert\") pod \"cluster-version-operator-56d8475767-2xjqg\" (UID: \"3d0b7f60-c32e-48a6-b9e9-87c8f018367d\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-2xjqg" Mar 18 08:48:57.825268 master-0 kubenswrapper[7620]: I0318 08:48:57.825245 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/59d50dd5-6793-4f96-a769-31e086ecc7e4-package-server-manager-serving-cert\") pod 
\"package-server-manager-7b95f86987-q8ff6\" (UID: \"59d50dd5-6793-4f96-a769-31e086ecc7e4\") " pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-q8ff6" Mar 18 08:48:57.825675 master-0 kubenswrapper[7620]: E0318 08:48:57.825353 7620 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found Mar 18 08:48:57.825675 master-0 kubenswrapper[7620]: E0318 08:48:57.825404 7620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/59d50dd5-6793-4f96-a769-31e086ecc7e4-package-server-manager-serving-cert podName:59d50dd5-6793-4f96-a769-31e086ecc7e4 nodeName:}" failed. No retries permitted until 2026-03-18 08:48:58.825389268 +0000 UTC m=+2.820171020 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/59d50dd5-6793-4f96-a769-31e086ecc7e4-package-server-manager-serving-cert") pod "package-server-manager-7b95f86987-q8ff6" (UID: "59d50dd5-6793-4f96-a769-31e086ecc7e4") : secret "package-server-manager-serving-cert" not found Mar 18 08:48:57.825971 master-0 kubenswrapper[7620]: E0318 08:48:57.825920 7620 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found Mar 18 08:48:57.825971 master-0 kubenswrapper[7620]: E0318 08:48:57.825974 7620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bd8ee9ae-4765-46bc-84d7-b5857fc3fb4a-apiservice-cert podName:bd8ee9ae-4765-46bc-84d7-b5857fc3fb4a nodeName:}" failed. No retries permitted until 2026-03-18 08:48:58.825965885 +0000 UTC m=+2.820747637 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/bd8ee9ae-4765-46bc-84d7-b5857fc3fb4a-apiservice-cert") pod "cluster-node-tuning-operator-598fbc5f8f-tj9b9" (UID: "bd8ee9ae-4765-46bc-84d7-b5857fc3fb4a") : secret "performance-addon-operator-webhook-cert" not found Mar 18 08:48:57.826089 master-0 kubenswrapper[7620]: E0318 08:48:57.826020 7620 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found Mar 18 08:48:57.826089 master-0 kubenswrapper[7620]: E0318 08:48:57.826046 7620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/94e9e4fc-bfaa-4ba5-ad6d-fe76b91932e9-metrics-tls podName:94e9e4fc-bfaa-4ba5-ad6d-fe76b91932e9 nodeName:}" failed. No retries permitted until 2026-03-18 08:48:58.826038497 +0000 UTC m=+2.820820249 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/94e9e4fc-bfaa-4ba5-ad6d-fe76b91932e9-metrics-tls") pod "ingress-operator-66b84d69b-7h94d" (UID: "94e9e4fc-bfaa-4ba5-ad6d-fe76b91932e9") : secret "metrics-tls" not found Mar 18 08:48:57.826089 master-0 kubenswrapper[7620]: E0318 08:48:57.826089 7620 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found Mar 18 08:48:57.826213 master-0 kubenswrapper[7620]: E0318 08:48:57.826114 7620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e7b72267-fc08-41ed-a92b-9fca7372aba6-cluster-monitoring-operator-tls podName:e7b72267-fc08-41ed-a92b-9fca7372aba6 nodeName:}" failed. No retries permitted until 2026-03-18 08:48:58.826108139 +0000 UTC m=+2.820889891 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/e7b72267-fc08-41ed-a92b-9fca7372aba6-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-58845fbb57-nc7hf" (UID: "e7b72267-fc08-41ed-a92b-9fca7372aba6") : secret "cluster-monitoring-operator-tls" not found Mar 18 08:48:57.826213 master-0 kubenswrapper[7620]: E0318 08:48:57.826151 7620 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found Mar 18 08:48:57.826213 master-0 kubenswrapper[7620]: E0318 08:48:57.826167 7620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/159a26f5-3cfc-4db2-88e9-bff5d8a613fc-webhook-certs podName:159a26f5-3cfc-4db2-88e9-bff5d8a613fc nodeName:}" failed. No retries permitted until 2026-03-18 08:48:58.826162641 +0000 UTC m=+2.820944393 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/159a26f5-3cfc-4db2-88e9-bff5d8a613fc-webhook-certs") pod "multus-admission-controller-5dbbb8b86f-2cf64" (UID: "159a26f5-3cfc-4db2-88e9-bff5d8a613fc") : secret "multus-admission-controller-secret" not found Mar 18 08:48:57.826213 master-0 kubenswrapper[7620]: E0318 08:48:57.826197 7620 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: secret "catalog-operator-serving-cert" not found Mar 18 08:48:57.826213 master-0 kubenswrapper[7620]: E0318 08:48:57.826213 7620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b065df33-7911-456e-b3a2-1f8c8d53e053-srv-cert podName:b065df33-7911-456e-b3a2-1f8c8d53e053 nodeName:}" failed. No retries permitted until 2026-03-18 08:48:58.826208802 +0000 UTC m=+2.820990554 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/b065df33-7911-456e-b3a2-1f8c8d53e053-srv-cert") pod "catalog-operator-68f85b4d6c-swdsh" (UID: "b065df33-7911-456e-b3a2-1f8c8d53e053") : secret "catalog-operator-serving-cert" not found Mar 18 08:48:57.826359 master-0 kubenswrapper[7620]: E0318 08:48:57.826246 7620 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found Mar 18 08:48:57.826359 master-0 kubenswrapper[7620]: E0318 08:48:57.826265 7620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bd8ee9ae-4765-46bc-84d7-b5857fc3fb4a-node-tuning-operator-tls podName:bd8ee9ae-4765-46bc-84d7-b5857fc3fb4a nodeName:}" failed. No retries permitted until 2026-03-18 08:48:58.826259534 +0000 UTC m=+2.821041286 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/bd8ee9ae-4765-46bc-84d7-b5857fc3fb4a-node-tuning-operator-tls") pod "cluster-node-tuning-operator-598fbc5f8f-tj9b9" (UID: "bd8ee9ae-4765-46bc-84d7-b5857fc3fb4a") : secret "node-tuning-operator-tls" not found Mar 18 08:48:57.826359 master-0 kubenswrapper[7620]: E0318 08:48:57.826295 7620 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: secret "metrics-daemon-secret" not found Mar 18 08:48:57.826359 master-0 kubenswrapper[7620]: E0318 08:48:57.826313 7620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d858bfd6-e69b-4c93-a6d7-95cc0fc3ca29-metrics-certs podName:d858bfd6-e69b-4c93-a6d7-95cc0fc3ca29 nodeName:}" failed. No retries permitted until 2026-03-18 08:48:58.826307155 +0000 UTC m=+2.821088907 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/d858bfd6-e69b-4c93-a6d7-95cc0fc3ca29-metrics-certs") pod "network-metrics-daemon-6x85n" (UID: "d858bfd6-e69b-4c93-a6d7-95cc0fc3ca29") : secret "metrics-daemon-secret" not found Mar 18 08:48:57.826359 master-0 kubenswrapper[7620]: E0318 08:48:57.826345 7620 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found Mar 18 08:48:57.826359 master-0 kubenswrapper[7620]: E0318 08:48:57.826361 7620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/34f2ec5e-cb68-415b-a9f2-5b7f10fa9bbe-marketplace-operator-metrics podName:34f2ec5e-cb68-415b-a9f2-5b7f10fa9bbe nodeName:}" failed. No retries permitted until 2026-03-18 08:48:58.826356137 +0000 UTC m=+2.821137889 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/34f2ec5e-cb68-415b-a9f2-5b7f10fa9bbe-marketplace-operator-metrics") pod "marketplace-operator-89ccd998f-bcwsv" (UID: "34f2ec5e-cb68-415b-a9f2-5b7f10fa9bbe") : secret "marketplace-operator-metrics" not found Mar 18 08:48:57.826534 master-0 kubenswrapper[7620]: E0318 08:48:57.826392 7620 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found Mar 18 08:48:57.826534 master-0 kubenswrapper[7620]: E0318 08:48:57.826410 7620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7962fb40-1170-4c00-b1bf-92966aeae807-image-registry-operator-tls podName:7962fb40-1170-4c00-b1bf-92966aeae807 nodeName:}" failed. No retries permitted until 2026-03-18 08:48:58.826405088 +0000 UTC m=+2.821186840 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/7962fb40-1170-4c00-b1bf-92966aeae807-image-registry-operator-tls") pod "cluster-image-registry-operator-5549dc66cb-vxsth" (UID: "7962fb40-1170-4c00-b1bf-92966aeae807") : secret "image-registry-operator-tls" not found Mar 18 08:48:57.826534 master-0 kubenswrapper[7620]: E0318 08:48:57.826441 7620 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found Mar 18 08:48:57.826534 master-0 kubenswrapper[7620]: E0318 08:48:57.826460 7620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3d9fe248-ba87-47e3-911a-1b2b112b5683-srv-cert podName:3d9fe248-ba87-47e3-911a-1b2b112b5683 nodeName:}" failed. No retries permitted until 2026-03-18 08:48:58.82645473 +0000 UTC m=+2.821236472 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/3d9fe248-ba87-47e3-911a-1b2b112b5683-srv-cert") pod "olm-operator-5c9796789-sl5kr" (UID: "3d9fe248-ba87-47e3-911a-1b2b112b5683") : secret "olm-operator-serving-cert" not found Mar 18 08:48:57.826534 master-0 kubenswrapper[7620]: E0318 08:48:57.826492 7620 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found Mar 18 08:48:57.826534 master-0 kubenswrapper[7620]: E0318 08:48:57.826509 7620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e025d334-20e7-491f-8027-194251398747-metrics-tls podName:e025d334-20e7-491f-8027-194251398747 nodeName:}" failed. No retries permitted until 2026-03-18 08:48:58.826503551 +0000 UTC m=+2.821285303 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/e025d334-20e7-491f-8027-194251398747-metrics-tls") pod "dns-operator-9c5679d8f-b9pn7" (UID: "e025d334-20e7-491f-8027-194251398747") : secret "metrics-tls" not found Mar 18 08:48:57.826534 master-0 kubenswrapper[7620]: E0318 08:48:57.826541 7620 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found Mar 18 08:48:57.826743 master-0 kubenswrapper[7620]: E0318 08:48:57.826559 7620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3d0b7f60-c32e-48a6-b9e9-87c8f018367d-serving-cert podName:3d0b7f60-c32e-48a6-b9e9-87c8f018367d nodeName:}" failed. No retries permitted until 2026-03-18 08:48:58.826553913 +0000 UTC m=+2.821335665 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/3d0b7f60-c32e-48a6-b9e9-87c8f018367d-serving-cert") pod "cluster-version-operator-56d8475767-2xjqg" (UID: "3d0b7f60-c32e-48a6-b9e9-87c8f018367d") : secret "cluster-version-operator-serving-cert" not found Mar 18 08:48:57.835046 master-0 kubenswrapper[7620]: I0318 08:48:57.834988 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lw27k\" (UniqueName: \"kubernetes.io/projected/c110b293-2c6b-496b-b015-23aada98cb4b-kube-api-access-lw27k\") pod \"authentication-operator-5885bfd7f4-5g8tz\" (UID: \"c110b293-2c6b-496b-b015-23aada98cb4b\") " pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-5g8tz" Mar 18 08:48:57.847838 master-0 kubenswrapper[7620]: I0318 08:48:57.847790 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dxvk7\" (UniqueName: \"kubernetes.io/projected/b0280499-8277-46f0-bd8c-058a47a99e19-kube-api-access-dxvk7\") pod \"service-ca-operator-b865698dc-g2lc8\" (UID: \"b0280499-8277-46f0-bd8c-058a47a99e19\") " 
pod="openshift-service-ca-operator/service-ca-operator-b865698dc-g2lc8" Mar 18 08:48:57.875591 master-0 kubenswrapper[7620]: I0318 08:48:57.872320 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3d0b7f60-c32e-48a6-b9e9-87c8f018367d-kube-api-access\") pod \"cluster-version-operator-56d8475767-2xjqg\" (UID: \"3d0b7f60-c32e-48a6-b9e9-87c8f018367d\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-2xjqg" Mar 18 08:48:57.886172 master-0 kubenswrapper[7620]: I0318 08:48:57.886113 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5ngk7\" (UniqueName: \"kubernetes.io/projected/07a4fd92-0fd1-4688-b2db-de615d75971e-kube-api-access-5ngk7\") pod \"network-operator-7bd846bfc4-5r5r4\" (UID: \"07a4fd92-0fd1-4688-b2db-de615d75971e\") " pod="openshift-network-operator/network-operator-7bd846bfc4-5r5r4" Mar 18 08:48:57.902021 master-0 kubenswrapper[7620]: I0318 08:48:57.901960 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bfzdk\" (UniqueName: \"kubernetes.io/projected/e025d334-20e7-491f-8027-194251398747-kube-api-access-bfzdk\") pod \"dns-operator-9c5679d8f-b9pn7\" (UID: \"e025d334-20e7-491f-8027-194251398747\") " pod="openshift-dns-operator/dns-operator-9c5679d8f-b9pn7" Mar 18 08:48:57.923678 master-0 kubenswrapper[7620]: I0318 08:48:57.923449 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5982111d-f4c6-4335-9b40-3142758fc2bc-kube-api-access\") pod \"kube-apiserver-operator-8b68b9d9b-jshg7\" (UID: \"5982111d-f4c6-4335-9b40-3142758fc2bc\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-jshg7" Mar 18 08:48:57.949502 master-0 kubenswrapper[7620]: I0318 08:48:57.949411 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4hn9w\" (UniqueName: 
\"kubernetes.io/projected/3d9fe248-ba87-47e3-911a-1b2b112b5683-kube-api-access-4hn9w\") pod \"olm-operator-5c9796789-sl5kr\" (UID: \"3d9fe248-ba87-47e3-911a-1b2b112b5683\") " pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-sl5kr" Mar 18 08:48:57.966958 master-0 kubenswrapper[7620]: W0318 08:48:57.966889 7620 warnings.go:70] would violate PodSecurity "restricted:latest": host namespaces (hostNetwork=true), hostPort (container "etcd" uses hostPorts 2379, 2380), privileged (containers "etcdctl", "etcd" must not set securityContext.privileged=true), allowPrivilegeEscalation != false (containers "etcdctl", "etcd" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (containers "etcdctl", "etcd" must set securityContext.capabilities.drop=["ALL"]), restricted volume types (volumes "certs", "data-dir" use restricted volume type "hostPath"), runAsNonRoot != true (pod or containers "etcdctl", "etcd" must set securityContext.runAsNonRoot=true), seccompProfile (pod or containers "etcdctl", "etcd" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost") Mar 18 08:48:57.967164 master-0 kubenswrapper[7620]: E0318 08:48:57.966997 7620 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"etcd-master-0-master-0\" already exists" pod="openshift-etcd/etcd-master-0-master-0" Mar 18 08:48:57.980597 master-0 kubenswrapper[7620]: E0318 08:48:57.980521 7620 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-rbac-proxy-crio-master-0\" already exists" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Mar 18 08:48:57.999895 master-0 kubenswrapper[7620]: E0318 08:48:57.999813 7620 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"bootstrap-kube-controller-manager-master-0\" already exists" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 18 08:48:58.019043 master-0 kubenswrapper[7620]: E0318 08:48:58.018979 7620 kubelet.go:1929] "Failed creating 
a mirror pod for" err="pods \"bootstrap-kube-apiserver-master-0\" already exists" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 18 08:48:58.037486 master-0 kubenswrapper[7620]: E0318 08:48:58.037419 7620 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"bootstrap-kube-scheduler-master-0\" already exists" pod="kube-system/bootstrap-kube-scheduler-master-0" Mar 18 08:48:58.085317 master-0 kubenswrapper[7620]: I0318 08:48:58.085263 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ftdvp\" (UniqueName: \"kubernetes.io/projected/866c259c-7661-4a80-873b-6fd625218665-kube-api-access-ftdvp\") pod \"iptables-alerter-9mkgd\" (UID: \"866c259c-7661-4a80-873b-6fd625218665\") " pod="openshift-network-operator/iptables-alerter-9mkgd" Mar 18 08:48:58.103978 master-0 kubenswrapper[7620]: I0318 08:48:58.103930 7620 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Mar 18 08:48:58.110198 master-0 kubenswrapper[7620]: I0318 08:48:58.109937 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l7lrl\" (UniqueName: \"kubernetes.io/projected/fc289a83-9a2e-404b-b148-605639362703-kube-api-access-l7lrl\") pod \"network-check-target-8b7l7\" (UID: \"fc289a83-9a2e-404b-b148-605639362703\") " pod="openshift-network-diagnostics/network-check-target-8b7l7" Mar 18 08:48:58.390896 master-0 kubenswrapper[7620]: I0318 08:48:58.386292 7620 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-8b7l7" Mar 18 08:48:58.679473 master-0 kubenswrapper[7620]: E0318 08:48:58.679334 7620 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c6a4383333a1fd6d05c3f60ec793913f7937ee3d77f002d85e6c61e20507bf55" Mar 18 08:48:58.679717 master-0 kubenswrapper[7620]: E0318 08:48:58.679673 7620 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:iptables-alerter,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c6a4383333a1fd6d05c3f60ec793913f7937ee3d77f002d85e6c61e20507bf55,Command:[/iptables-alerter/iptables-alerter.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONTAINER_RUNTIME_ENDPOINT,Value:unix:///run/crio/crio.sock,ValueFrom:nil,},EnvVar{Name:ALERTER_POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:iptables-alerter-script,ReadOnly:false,MountPath:/iptables-alerter,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-slash,ReadOnly:true,MountPath:/host,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ftdvp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod iptables-alerter-9mkgd_openshift-network-operator(866c259c-7661-4a80-873b-6fd625218665): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 18 08:48:58.681327 master-0 kubenswrapper[7620]: E0318 08:48:58.681269 7620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"iptables-alerter\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openshift-network-operator/iptables-alerter-9mkgd" podUID="866c259c-7661-4a80-873b-6fd625218665" Mar 18 08:48:58.837714 master-0 kubenswrapper[7620]: I0318 08:48:58.837239 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/e025d334-20e7-491f-8027-194251398747-metrics-tls\") pod 
\"dns-operator-9c5679d8f-b9pn7\" (UID: \"e025d334-20e7-491f-8027-194251398747\") " pod="openshift-dns-operator/dns-operator-9c5679d8f-b9pn7" Mar 18 08:48:58.838094 master-0 kubenswrapper[7620]: I0318 08:48:58.837724 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3d0b7f60-c32e-48a6-b9e9-87c8f018367d-serving-cert\") pod \"cluster-version-operator-56d8475767-2xjqg\" (UID: \"3d0b7f60-c32e-48a6-b9e9-87c8f018367d\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-2xjqg" Mar 18 08:48:58.838094 master-0 kubenswrapper[7620]: I0318 08:48:58.837765 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/59d50dd5-6793-4f96-a769-31e086ecc7e4-package-server-manager-serving-cert\") pod \"package-server-manager-7b95f86987-q8ff6\" (UID: \"59d50dd5-6793-4f96-a769-31e086ecc7e4\") " pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-q8ff6" Mar 18 08:48:58.838094 master-0 kubenswrapper[7620]: I0318 08:48:58.837801 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/bd8ee9ae-4765-46bc-84d7-b5857fc3fb4a-apiservice-cert\") pod \"cluster-node-tuning-operator-598fbc5f8f-tj9b9\" (UID: \"bd8ee9ae-4765-46bc-84d7-b5857fc3fb4a\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-tj9b9" Mar 18 08:48:58.838211 master-0 kubenswrapper[7620]: I0318 08:48:58.837828 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/e7b72267-fc08-41ed-a92b-9fca7372aba6-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-58845fbb57-nc7hf\" (UID: \"e7b72267-fc08-41ed-a92b-9fca7372aba6\") " 
pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-nc7hf" Mar 18 08:48:58.838283 master-0 kubenswrapper[7620]: E0318 08:48:58.838236 7620 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found Mar 18 08:48:58.838347 master-0 kubenswrapper[7620]: E0318 08:48:58.838309 7620 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found Mar 18 08:48:58.838347 master-0 kubenswrapper[7620]: E0318 08:48:58.838327 7620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/59d50dd5-6793-4f96-a769-31e086ecc7e4-package-server-manager-serving-cert podName:59d50dd5-6793-4f96-a769-31e086ecc7e4 nodeName:}" failed. No retries permitted until 2026-03-18 08:49:00.838306824 +0000 UTC m=+4.833088596 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/59d50dd5-6793-4f96-a769-31e086ecc7e4-package-server-manager-serving-cert") pod "package-server-manager-7b95f86987-q8ff6" (UID: "59d50dd5-6793-4f96-a769-31e086ecc7e4") : secret "package-server-manager-serving-cert" not found Mar 18 08:48:58.838434 master-0 kubenswrapper[7620]: E0318 08:48:58.838351 7620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bd8ee9ae-4765-46bc-84d7-b5857fc3fb4a-apiservice-cert podName:bd8ee9ae-4765-46bc-84d7-b5857fc3fb4a nodeName:}" failed. No retries permitted until 2026-03-18 08:49:00.838338075 +0000 UTC m=+4.833119837 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/bd8ee9ae-4765-46bc-84d7-b5857fc3fb4a-apiservice-cert") pod "cluster-node-tuning-operator-598fbc5f8f-tj9b9" (UID: "bd8ee9ae-4765-46bc-84d7-b5857fc3fb4a") : secret "performance-addon-operator-webhook-cert" not found
Mar 18 08:48:58.838506 master-0 kubenswrapper[7620]: E0318 08:48:58.838470 7620 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found
Mar 18 08:48:58.838548 master-0 kubenswrapper[7620]: I0318 08:48:58.838498 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/94e9e4fc-bfaa-4ba5-ad6d-fe76b91932e9-metrics-tls\") pod \"ingress-operator-66b84d69b-7h94d\" (UID: \"94e9e4fc-bfaa-4ba5-ad6d-fe76b91932e9\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-7h94d"
Mar 18 08:48:58.838595 master-0 kubenswrapper[7620]: E0318 08:48:58.838551 7620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e7b72267-fc08-41ed-a92b-9fca7372aba6-cluster-monitoring-operator-tls podName:e7b72267-fc08-41ed-a92b-9fca7372aba6 nodeName:}" failed. No retries permitted until 2026-03-18 08:49:00.838534521 +0000 UTC m=+4.833316273 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/e7b72267-fc08-41ed-a92b-9fca7372aba6-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-58845fbb57-nc7hf" (UID: "e7b72267-fc08-41ed-a92b-9fca7372aba6") : secret "cluster-monitoring-operator-tls" not found
Mar 18 08:48:58.839709 master-0 kubenswrapper[7620]: I0318 08:48:58.839681 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/159a26f5-3cfc-4db2-88e9-bff5d8a613fc-webhook-certs\") pod \"multus-admission-controller-5dbbb8b86f-2cf64\" (UID: \"159a26f5-3cfc-4db2-88e9-bff5d8a613fc\") " pod="openshift-multus/multus-admission-controller-5dbbb8b86f-2cf64"
Mar 18 08:48:58.839784 master-0 kubenswrapper[7620]: I0318 08:48:58.839745 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b065df33-7911-456e-b3a2-1f8c8d53e053-srv-cert\") pod \"catalog-operator-68f85b4d6c-swdsh\" (UID: \"b065df33-7911-456e-b3a2-1f8c8d53e053\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-swdsh"
Mar 18 08:48:58.839784 master-0 kubenswrapper[7620]: I0318 08:48:58.839778 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/bd8ee9ae-4765-46bc-84d7-b5857fc3fb4a-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-598fbc5f8f-tj9b9\" (UID: \"bd8ee9ae-4765-46bc-84d7-b5857fc3fb4a\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-tj9b9"
Mar 18 08:48:58.839895 master-0 kubenswrapper[7620]: I0318 08:48:58.839845 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d858bfd6-e69b-4c93-a6d7-95cc0fc3ca29-metrics-certs\") pod \"network-metrics-daemon-6x85n\" (UID: \"d858bfd6-e69b-4c93-a6d7-95cc0fc3ca29\") " pod="openshift-multus/network-metrics-daemon-6x85n"
Mar 18 08:48:58.839895 master-0 kubenswrapper[7620]: I0318 08:48:58.839882 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/34f2ec5e-cb68-415b-a9f2-5b7f10fa9bbe-marketplace-operator-metrics\") pod \"marketplace-operator-89ccd998f-bcwsv\" (UID: \"34f2ec5e-cb68-415b-a9f2-5b7f10fa9bbe\") " pod="openshift-marketplace/marketplace-operator-89ccd998f-bcwsv"
Mar 18 08:48:58.840040 master-0 kubenswrapper[7620]: I0318 08:48:58.839903 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/7962fb40-1170-4c00-b1bf-92966aeae807-image-registry-operator-tls\") pod \"cluster-image-registry-operator-5549dc66cb-vxsth\" (UID: \"7962fb40-1170-4c00-b1bf-92966aeae807\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-vxsth"
Mar 18 08:48:58.840040 master-0 kubenswrapper[7620]: I0318 08:48:58.839944 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/3d9fe248-ba87-47e3-911a-1b2b112b5683-srv-cert\") pod \"olm-operator-5c9796789-sl5kr\" (UID: \"3d9fe248-ba87-47e3-911a-1b2b112b5683\") " pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-sl5kr"
Mar 18 08:48:58.840132 master-0 kubenswrapper[7620]: E0318 08:48:58.838597 7620 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found
Mar 18 08:48:58.840132 master-0 kubenswrapper[7620]: E0318 08:48:58.840085 7620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/94e9e4fc-bfaa-4ba5-ad6d-fe76b91932e9-metrics-tls podName:94e9e4fc-bfaa-4ba5-ad6d-fe76b91932e9 nodeName:}" failed. No retries permitted until 2026-03-18 08:49:00.840077027 +0000 UTC m=+4.834858779 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/94e9e4fc-bfaa-4ba5-ad6d-fe76b91932e9-metrics-tls") pod "ingress-operator-66b84d69b-7h94d" (UID: "94e9e4fc-bfaa-4ba5-ad6d-fe76b91932e9") : secret "metrics-tls" not found
Mar 18 08:48:58.840132 master-0 kubenswrapper[7620]: E0318 08:48:58.838626 7620 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found
Mar 18 08:48:58.840132 master-0 kubenswrapper[7620]: E0318 08:48:58.840128 7620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3d0b7f60-c32e-48a6-b9e9-87c8f018367d-serving-cert podName:3d0b7f60-c32e-48a6-b9e9-87c8f018367d nodeName:}" failed. No retries permitted until 2026-03-18 08:49:00.840119988 +0000 UTC m=+4.834901740 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/3d0b7f60-c32e-48a6-b9e9-87c8f018367d-serving-cert") pod "cluster-version-operator-56d8475767-2xjqg" (UID: "3d0b7f60-c32e-48a6-b9e9-87c8f018367d") : secret "cluster-version-operator-serving-cert" not found
Mar 18 08:48:58.840132 master-0 kubenswrapper[7620]: E0318 08:48:58.839219 7620 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found
Mar 18 08:48:58.840311 master-0 kubenswrapper[7620]: E0318 08:48:58.840159 7620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e025d334-20e7-491f-8027-194251398747-metrics-tls podName:e025d334-20e7-491f-8027-194251398747 nodeName:}" failed. No retries permitted until 2026-03-18 08:49:00.840153219 +0000 UTC m=+4.834934971 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/e025d334-20e7-491f-8027-194251398747-metrics-tls") pod "dns-operator-9c5679d8f-b9pn7" (UID: "e025d334-20e7-491f-8027-194251398747") : secret "metrics-tls" not found
Mar 18 08:48:58.840311 master-0 kubenswrapper[7620]: E0318 08:48:58.840022 7620 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found
Mar 18 08:48:58.840311 master-0 kubenswrapper[7620]: E0318 08:48:58.840186 7620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3d9fe248-ba87-47e3-911a-1b2b112b5683-srv-cert podName:3d9fe248-ba87-47e3-911a-1b2b112b5683 nodeName:}" failed. No retries permitted until 2026-03-18 08:49:00.84018122 +0000 UTC m=+4.834962972 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/3d9fe248-ba87-47e3-911a-1b2b112b5683-srv-cert") pod "olm-operator-5c9796789-sl5kr" (UID: "3d9fe248-ba87-47e3-911a-1b2b112b5683") : secret "olm-operator-serving-cert" not found
Mar 18 08:48:58.840311 master-0 kubenswrapper[7620]: E0318 08:48:58.840053 7620 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found
Mar 18 08:48:58.840311 master-0 kubenswrapper[7620]: E0318 08:48:58.840212 7620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/159a26f5-3cfc-4db2-88e9-bff5d8a613fc-webhook-certs podName:159a26f5-3cfc-4db2-88e9-bff5d8a613fc nodeName:}" failed. No retries permitted until 2026-03-18 08:49:00.840205281 +0000 UTC m=+4.834987023 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/159a26f5-3cfc-4db2-88e9-bff5d8a613fc-webhook-certs") pod "multus-admission-controller-5dbbb8b86f-2cf64" (UID: "159a26f5-3cfc-4db2-88e9-bff5d8a613fc") : secret "multus-admission-controller-secret" not found
Mar 18 08:48:58.840311 master-0 kubenswrapper[7620]: E0318 08:48:58.840250 7620 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: secret "catalog-operator-serving-cert" not found
Mar 18 08:48:58.840311 master-0 kubenswrapper[7620]: E0318 08:48:58.840266 7620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b065df33-7911-456e-b3a2-1f8c8d53e053-srv-cert podName:b065df33-7911-456e-b3a2-1f8c8d53e053 nodeName:}" failed. No retries permitted until 2026-03-18 08:49:00.840261522 +0000 UTC m=+4.835043274 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/b065df33-7911-456e-b3a2-1f8c8d53e053-srv-cert") pod "catalog-operator-68f85b4d6c-swdsh" (UID: "b065df33-7911-456e-b3a2-1f8c8d53e053") : secret "catalog-operator-serving-cert" not found
Mar 18 08:48:58.840311 master-0 kubenswrapper[7620]: E0318 08:48:58.840301 7620 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found
Mar 18 08:48:58.840311 master-0 kubenswrapper[7620]: E0318 08:48:58.840319 7620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bd8ee9ae-4765-46bc-84d7-b5857fc3fb4a-node-tuning-operator-tls podName:bd8ee9ae-4765-46bc-84d7-b5857fc3fb4a nodeName:}" failed. No retries permitted until 2026-03-18 08:49:00.840313964 +0000 UTC m=+4.835095716 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/bd8ee9ae-4765-46bc-84d7-b5857fc3fb4a-node-tuning-operator-tls") pod "cluster-node-tuning-operator-598fbc5f8f-tj9b9" (UID: "bd8ee9ae-4765-46bc-84d7-b5857fc3fb4a") : secret "node-tuning-operator-tls" not found
Mar 18 08:48:58.840650 master-0 kubenswrapper[7620]: E0318 08:48:58.840354 7620 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: secret "metrics-daemon-secret" not found
Mar 18 08:48:58.840650 master-0 kubenswrapper[7620]: E0318 08:48:58.840439 7620 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found
Mar 18 08:48:58.840650 master-0 kubenswrapper[7620]: E0318 08:48:58.840465 7620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/34f2ec5e-cb68-415b-a9f2-5b7f10fa9bbe-marketplace-operator-metrics podName:34f2ec5e-cb68-415b-a9f2-5b7f10fa9bbe nodeName:}" failed. No retries permitted until 2026-03-18 08:49:00.840458918 +0000 UTC m=+4.835240670 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/34f2ec5e-cb68-415b-a9f2-5b7f10fa9bbe-marketplace-operator-metrics") pod "marketplace-operator-89ccd998f-bcwsv" (UID: "34f2ec5e-cb68-415b-a9f2-5b7f10fa9bbe") : secret "marketplace-operator-metrics" not found
Mar 18 08:48:58.840650 master-0 kubenswrapper[7620]: E0318 08:48:58.840499 7620 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found
Mar 18 08:48:58.840650 master-0 kubenswrapper[7620]: E0318 08:48:58.840510 7620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d858bfd6-e69b-4c93-a6d7-95cc0fc3ca29-metrics-certs podName:d858bfd6-e69b-4c93-a6d7-95cc0fc3ca29 nodeName:}" failed. No retries permitted until 2026-03-18 08:49:00.840482469 +0000 UTC m=+4.835264221 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/d858bfd6-e69b-4c93-a6d7-95cc0fc3ca29-metrics-certs") pod "network-metrics-daemon-6x85n" (UID: "d858bfd6-e69b-4c93-a6d7-95cc0fc3ca29") : secret "metrics-daemon-secret" not found
Mar 18 08:48:58.840650 master-0 kubenswrapper[7620]: E0318 08:48:58.840532 7620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7962fb40-1170-4c00-b1bf-92966aeae807-image-registry-operator-tls podName:7962fb40-1170-4c00-b1bf-92966aeae807 nodeName:}" failed. No retries permitted until 2026-03-18 08:49:00.84052431 +0000 UTC m=+4.835306062 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/7962fb40-1170-4c00-b1bf-92966aeae807-image-registry-operator-tls") pod "cluster-image-registry-operator-5549dc66cb-vxsth" (UID: "7962fb40-1170-4c00-b1bf-92966aeae807") : secret "image-registry-operator-tls" not found
Mar 18 08:48:59.004118 master-0 kubenswrapper[7620]: I0318 08:48:59.000397 7620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-network-diagnostics/network-check-target-8b7l7"]
Mar 18 08:48:59.091447 master-0 kubenswrapper[7620]: I0318 08:48:59.091413 7620 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 18 08:48:59.103788 master-0 kubenswrapper[7620]: I0318 08:48:59.103761 7620 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 18 08:48:59.127882 master-0 kubenswrapper[7620]: I0318 08:48:59.123604 7620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-cxws9"
Mar 18 08:48:59.201954 master-0 kubenswrapper[7620]: I0318 08:48:59.197113 7620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-cxws9"
Mar 18 08:48:59.256910 master-0 kubenswrapper[7620]: I0318 08:48:59.256515 7620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 18 08:48:59.309492 master-0 kubenswrapper[7620]: I0318 08:48:59.309440 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-w4t7x" event={"ID":"fcf89a76-7a94-46d3-853e-68e986563764","Type":"ContainerStarted","Data":"cc2fad03c96d37b754988a128065f6939d46f7a48a89eb78a7b395dfd2147290"}
Mar 18 08:48:59.332475 master-0 kubenswrapper[7620]: I0318 08:48:59.332400 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-r758j" event={"ID":"772bc250-2e57-4ce0-883c-d44281fcb0be","Type":"ContainerStarted","Data":"fb1d8cdaae1091b519c657021dc4e61ba66eba83ec8f94dd444327353dc0ffc0"}
Mar 18 08:48:59.341306 master-0 kubenswrapper[7620]: I0318 08:48:59.341262 7620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 18 08:48:59.351554 master-0 kubenswrapper[7620]: I0318 08:48:59.348691 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-s7szg" event={"ID":"ec11012b-536a-422f-afc4-d2d0fd4b67fb","Type":"ContainerStarted","Data":"b192c774019baaa7e62a2cf9e287d09d05206c3fc1c24b73874462681a8ac04f"}
Mar 18 08:48:59.351554 master-0 kubenswrapper[7620]: I0318 08:48:59.349150 7620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 18 08:48:59.369710 master-0 kubenswrapper[7620]: I0318 08:48:59.366106 7620 generic.go:334] "Generic (PLEG): container finished" podID="e2ade7e6-cecd-4e98-8f85-ea8219303d75" containerID="23c3d665afaf3cc37466eca134b1f313b3fb9bff8fd0cf090f0e4b47784dbfda" exitCode=0
Mar 18 08:48:59.369710 master-0 kubenswrapper[7620]: I0318 08:48:59.366210 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-zqxv2" event={"ID":"e2ade7e6-cecd-4e98-8f85-ea8219303d75","Type":"ContainerDied","Data":"23c3d665afaf3cc37466eca134b1f313b3fb9bff8fd0cf090f0e4b47784dbfda"}
Mar 18 08:48:59.369710 master-0 kubenswrapper[7620]: I0318 08:48:59.368546 7620 generic.go:334] "Generic (PLEG): container finished" podID="573d3a02-e395-4816-963a-cd614ef53f75" containerID="e51fa0342ef2eca22478ce0380d3cd4446fad9cc3cda5d0c285a70b4c9b5167e" exitCode=0
Mar 18 08:48:59.369710 master-0 kubenswrapper[7620]: I0318 08:48:59.368623 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-7kfrh" event={"ID":"573d3a02-e395-4816-963a-cd614ef53f75","Type":"ContainerDied","Data":"e51fa0342ef2eca22478ce0380d3cd4446fad9cc3cda5d0c285a70b4c9b5167e"}
Mar 18 08:48:59.375809 master-0 kubenswrapper[7620]: I0318 08:48:59.374305 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-8b7l7" event={"ID":"fc289a83-9a2e-404b-b148-605639362703","Type":"ContainerStarted","Data":"8dd2f22202335db47f3b08f66486bcba6b09c0110dc38e4a13d0ea861c3e1528"}
Mar 18 08:48:59.375809 master-0 kubenswrapper[7620]: I0318 08:48:59.374351 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-8b7l7" event={"ID":"fc289a83-9a2e-404b-b148-605639362703","Type":"ContainerStarted","Data":"d1339a30e998845d2411b5c92f3883b1457216fd5491cd19b8b7f3a77576f95c"}
Mar 18 08:48:59.383390 master-0 kubenswrapper[7620]: I0318 08:48:59.381618 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-5g8tz" event={"ID":"c110b293-2c6b-496b-b015-23aada98cb4b","Type":"ContainerStarted","Data":"851a9b4a39c1a238b36e5625cadf0309e8c60fabaa4ea940ca6a7ae0197a27fb"}
Mar 18 08:48:59.390029 master-0 kubenswrapper[7620]: I0318 08:48:59.388809 7620 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Mar 18 08:48:59.390029 master-0 kubenswrapper[7620]: I0318 08:48:59.388832 7620 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Mar 18 08:48:59.390029 master-0 kubenswrapper[7620]: I0318 08:48:59.389407 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-5f5d689c6b-j8kgj" event={"ID":"6fb1f871-9c24-48a1-a15a-a636b5bb687d","Type":"ContainerStarted","Data":"5fd596a297c038b4d9eeecf0d04536fc2fce35ac352268f0986a6998d7113285"}
Mar 18 08:48:59.525153 master-0 kubenswrapper[7620]: I0318 08:48:59.524062 7620 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 18 08:48:59.537511 master-0 kubenswrapper[7620]: I0318 08:48:59.537467 7620 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 18 08:49:00.326724 master-0 kubenswrapper[7620]: I0318 08:49:00.326274 7620 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-storage-operator/csi-snapshot-controller-64854d9cff-khm5n"]
Mar 18 08:49:00.326995 master-0 kubenswrapper[7620]: E0318 08:49:00.326818 7620 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51cee994-bbd7-45f2-9757-c270d47c276a" containerName="prober"
Mar 18 08:49:00.326995 master-0 kubenswrapper[7620]: I0318 08:49:00.326831 7620 state_mem.go:107] "Deleted CPUSet assignment" podUID="51cee994-bbd7-45f2-9757-c270d47c276a" containerName="prober"
Mar 18 08:49:00.326995 master-0 kubenswrapper[7620]: E0318 08:49:00.326839 7620 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97215428-2d5d-460f-947c-f2a490bc428d" containerName="assisted-installer-controller"
Mar 18 08:49:00.326995 master-0 kubenswrapper[7620]: I0318 08:49:00.326845 7620 state_mem.go:107] "Deleted CPUSet assignment" podUID="97215428-2d5d-460f-947c-f2a490bc428d" containerName="assisted-installer-controller"
Mar 18 08:49:00.326995 master-0 kubenswrapper[7620]: I0318 08:49:00.326921 7620 memory_manager.go:354] "RemoveStaleState removing state" podUID="97215428-2d5d-460f-947c-f2a490bc428d" containerName="assisted-installer-controller"
Mar 18 08:49:00.326995 master-0 kubenswrapper[7620]: I0318 08:49:00.326935 7620 memory_manager.go:354] "RemoveStaleState removing state" podUID="51cee994-bbd7-45f2-9757-c270d47c276a" containerName="prober"
Mar 18 08:49:00.328068 master-0 kubenswrapper[7620]: I0318 08:49:00.327217 7620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/csi-snapshot-controller-64854d9cff-khm5n"
Mar 18 08:49:00.348959 master-0 kubenswrapper[7620]: I0318 08:49:00.347426 7620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-storage-operator/csi-snapshot-controller-64854d9cff-khm5n"]
Mar 18 08:49:00.392283 master-0 kubenswrapper[7620]: I0318 08:49:00.392085 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xchll\" (UniqueName: \"kubernetes.io/projected/29ba6765-61c9-4f78-8f44-570418000c5c-kube-api-access-xchll\") pod \"csi-snapshot-controller-64854d9cff-khm5n\" (UID: \"29ba6765-61c9-4f78-8f44-570418000c5c\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-64854d9cff-khm5n"
Mar 18 08:49:00.422152 master-0 kubenswrapper[7620]: I0318 08:49:00.422062 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-9p4bb" event={"ID":"8a6ab2be-d018-4fd5-bfbb-6b88aec28663","Type":"ContainerStarted","Data":"5e84b000c1316fb6659579cb173f67777226d532d34aa25b987bd230e2ca4fb7"}
Mar 18 08:49:00.457835 master-0 kubenswrapper[7620]: I0318 08:49:00.455297 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-fxn82" event={"ID":"260c8aa5-a288-4ee8-b671-f97e90a2f39c","Type":"ContainerStarted","Data":"42ba60928089ecdd2be6dc0bf250cb571a47fd29cfa3690db6c3f8f43ab0c4ba"}
Mar 18 08:49:00.458168 master-0 kubenswrapper[7620]: I0318 08:49:00.458112 7620 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Mar 18 08:49:00.460059 master-0 kubenswrapper[7620]: I0318 08:49:00.459604 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-f4jvq" event={"ID":"939efa41-8f40-4f91-bee4-0425aead9760","Type":"ContainerStarted","Data":"c7bdc6ef2980045954ec06270159082d9f28baec29275922530ef4e26552cf99"}
Mar 18 08:49:00.466755 master-0 kubenswrapper[7620]: I0318 08:49:00.463018 7620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 18 08:49:00.493990 master-0 kubenswrapper[7620]: I0318 08:49:00.493900 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xchll\" (UniqueName: \"kubernetes.io/projected/29ba6765-61c9-4f78-8f44-570418000c5c-kube-api-access-xchll\") pod \"csi-snapshot-controller-64854d9cff-khm5n\" (UID: \"29ba6765-61c9-4f78-8f44-570418000c5c\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-64854d9cff-khm5n"
Mar 18 08:49:00.521327 master-0 kubenswrapper[7620]: I0318 08:49:00.521217 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xchll\" (UniqueName: \"kubernetes.io/projected/29ba6765-61c9-4f78-8f44-570418000c5c-kube-api-access-xchll\") pod \"csi-snapshot-controller-64854d9cff-khm5n\" (UID: \"29ba6765-61c9-4f78-8f44-570418000c5c\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-64854d9cff-khm5n"
Mar 18 08:49:00.655606 master-0 kubenswrapper[7620]: I0318 08:49:00.655063 7620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/csi-snapshot-controller-64854d9cff-khm5n"
Mar 18 08:49:00.901143 master-0 kubenswrapper[7620]: I0318 08:49:00.901074 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/bd8ee9ae-4765-46bc-84d7-b5857fc3fb4a-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-598fbc5f8f-tj9b9\" (UID: \"bd8ee9ae-4765-46bc-84d7-b5857fc3fb4a\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-tj9b9"
Mar 18 08:49:00.901143 master-0 kubenswrapper[7620]: I0318 08:49:00.901159 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d858bfd6-e69b-4c93-a6d7-95cc0fc3ca29-metrics-certs\") pod \"network-metrics-daemon-6x85n\" (UID: \"d858bfd6-e69b-4c93-a6d7-95cc0fc3ca29\") " pod="openshift-multus/network-metrics-daemon-6x85n"
Mar 18 08:49:00.901543 master-0 kubenswrapper[7620]: I0318 08:49:00.901193 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/34f2ec5e-cb68-415b-a9f2-5b7f10fa9bbe-marketplace-operator-metrics\") pod \"marketplace-operator-89ccd998f-bcwsv\" (UID: \"34f2ec5e-cb68-415b-a9f2-5b7f10fa9bbe\") " pod="openshift-marketplace/marketplace-operator-89ccd998f-bcwsv"
Mar 18 08:49:00.901543 master-0 kubenswrapper[7620]: I0318 08:49:00.901223 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/7962fb40-1170-4c00-b1bf-92966aeae807-image-registry-operator-tls\") pod \"cluster-image-registry-operator-5549dc66cb-vxsth\" (UID: \"7962fb40-1170-4c00-b1bf-92966aeae807\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-vxsth"
Mar 18 08:49:00.901543 master-0 kubenswrapper[7620]: I0318 08:49:00.901257 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/3d9fe248-ba87-47e3-911a-1b2b112b5683-srv-cert\") pod \"olm-operator-5c9796789-sl5kr\" (UID: \"3d9fe248-ba87-47e3-911a-1b2b112b5683\") " pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-sl5kr"
Mar 18 08:49:00.901543 master-0 kubenswrapper[7620]: I0318 08:49:00.901285 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/e025d334-20e7-491f-8027-194251398747-metrics-tls\") pod \"dns-operator-9c5679d8f-b9pn7\" (UID: \"e025d334-20e7-491f-8027-194251398747\") " pod="openshift-dns-operator/dns-operator-9c5679d8f-b9pn7"
Mar 18 08:49:00.901543 master-0 kubenswrapper[7620]: I0318 08:49:00.901311 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3d0b7f60-c32e-48a6-b9e9-87c8f018367d-serving-cert\") pod \"cluster-version-operator-56d8475767-2xjqg\" (UID: \"3d0b7f60-c32e-48a6-b9e9-87c8f018367d\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-2xjqg"
Mar 18 08:49:00.901543 master-0 kubenswrapper[7620]: I0318 08:49:00.901343 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/59d50dd5-6793-4f96-a769-31e086ecc7e4-package-server-manager-serving-cert\") pod \"package-server-manager-7b95f86987-q8ff6\" (UID: \"59d50dd5-6793-4f96-a769-31e086ecc7e4\") " pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-q8ff6"
Mar 18 08:49:00.901543 master-0 kubenswrapper[7620]: I0318 08:49:00.901373 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/bd8ee9ae-4765-46bc-84d7-b5857fc3fb4a-apiservice-cert\") pod \"cluster-node-tuning-operator-598fbc5f8f-tj9b9\" (UID: \"bd8ee9ae-4765-46bc-84d7-b5857fc3fb4a\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-tj9b9"
Mar 18 08:49:00.901543 master-0 kubenswrapper[7620]: I0318 08:49:00.901398 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/e7b72267-fc08-41ed-a92b-9fca7372aba6-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-58845fbb57-nc7hf\" (UID: \"e7b72267-fc08-41ed-a92b-9fca7372aba6\") " pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-nc7hf"
Mar 18 08:49:00.901543 master-0 kubenswrapper[7620]: I0318 08:49:00.901423 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/94e9e4fc-bfaa-4ba5-ad6d-fe76b91932e9-metrics-tls\") pod \"ingress-operator-66b84d69b-7h94d\" (UID: \"94e9e4fc-bfaa-4ba5-ad6d-fe76b91932e9\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-7h94d"
Mar 18 08:49:00.901543 master-0 kubenswrapper[7620]: I0318 08:49:00.901448 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/159a26f5-3cfc-4db2-88e9-bff5d8a613fc-webhook-certs\") pod \"multus-admission-controller-5dbbb8b86f-2cf64\" (UID: \"159a26f5-3cfc-4db2-88e9-bff5d8a613fc\") " pod="openshift-multus/multus-admission-controller-5dbbb8b86f-2cf64"
Mar 18 08:49:00.901543 master-0 kubenswrapper[7620]: I0318 08:49:00.901478 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b065df33-7911-456e-b3a2-1f8c8d53e053-srv-cert\") pod \"catalog-operator-68f85b4d6c-swdsh\" (UID: \"b065df33-7911-456e-b3a2-1f8c8d53e053\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-swdsh"
Mar 18 08:49:00.901890 master-0 kubenswrapper[7620]: E0318 08:49:00.901625 7620 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: secret "catalog-operator-serving-cert" not found
Mar 18 08:49:00.901890 master-0 kubenswrapper[7620]: E0318 08:49:00.901694 7620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b065df33-7911-456e-b3a2-1f8c8d53e053-srv-cert podName:b065df33-7911-456e-b3a2-1f8c8d53e053 nodeName:}" failed. No retries permitted until 2026-03-18 08:49:04.901675075 +0000 UTC m=+8.896456827 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/b065df33-7911-456e-b3a2-1f8c8d53e053-srv-cert") pod "catalog-operator-68f85b4d6c-swdsh" (UID: "b065df33-7911-456e-b3a2-1f8c8d53e053") : secret "catalog-operator-serving-cert" not found
Mar 18 08:49:00.902254 master-0 kubenswrapper[7620]: E0318 08:49:00.902221 7620 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found
Mar 18 08:49:00.902304 master-0 kubenswrapper[7620]: E0318 08:49:00.902266 7620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bd8ee9ae-4765-46bc-84d7-b5857fc3fb4a-node-tuning-operator-tls podName:bd8ee9ae-4765-46bc-84d7-b5857fc3fb4a nodeName:}" failed. No retries permitted until 2026-03-18 08:49:04.902255102 +0000 UTC m=+8.897036854 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/bd8ee9ae-4765-46bc-84d7-b5857fc3fb4a-node-tuning-operator-tls") pod "cluster-node-tuning-operator-598fbc5f8f-tj9b9" (UID: "bd8ee9ae-4765-46bc-84d7-b5857fc3fb4a") : secret "node-tuning-operator-tls" not found
Mar 18 08:49:00.902349 master-0 kubenswrapper[7620]: E0318 08:49:00.902319 7620 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: secret "metrics-daemon-secret" not found
Mar 18 08:49:00.902349 master-0 kubenswrapper[7620]: E0318 08:49:00.902346 7620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d858bfd6-e69b-4c93-a6d7-95cc0fc3ca29-metrics-certs podName:d858bfd6-e69b-4c93-a6d7-95cc0fc3ca29 nodeName:}" failed. No retries permitted until 2026-03-18 08:49:04.902337645 +0000 UTC m=+8.897119397 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/d858bfd6-e69b-4c93-a6d7-95cc0fc3ca29-metrics-certs") pod "network-metrics-daemon-6x85n" (UID: "d858bfd6-e69b-4c93-a6d7-95cc0fc3ca29") : secret "metrics-daemon-secret" not found
Mar 18 08:49:00.902405 master-0 kubenswrapper[7620]: E0318 08:49:00.902396 7620 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found
Mar 18 08:49:00.902438 master-0 kubenswrapper[7620]: E0318 08:49:00.902424 7620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/34f2ec5e-cb68-415b-a9f2-5b7f10fa9bbe-marketplace-operator-metrics podName:34f2ec5e-cb68-415b-a9f2-5b7f10fa9bbe nodeName:}" failed. No retries permitted until 2026-03-18 08:49:04.902415327 +0000 UTC m=+8.897197079 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/34f2ec5e-cb68-415b-a9f2-5b7f10fa9bbe-marketplace-operator-metrics") pod "marketplace-operator-89ccd998f-bcwsv" (UID: "34f2ec5e-cb68-415b-a9f2-5b7f10fa9bbe") : secret "marketplace-operator-metrics" not found
Mar 18 08:49:00.902490 master-0 kubenswrapper[7620]: E0318 08:49:00.902469 7620 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found
Mar 18 08:49:00.902522 master-0 kubenswrapper[7620]: E0318 08:49:00.902504 7620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7962fb40-1170-4c00-b1bf-92966aeae807-image-registry-operator-tls podName:7962fb40-1170-4c00-b1bf-92966aeae807 nodeName:}" failed. No retries permitted until 2026-03-18 08:49:04.902495859 +0000 UTC m=+8.897277621 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/7962fb40-1170-4c00-b1bf-92966aeae807-image-registry-operator-tls") pod "cluster-image-registry-operator-5549dc66cb-vxsth" (UID: "7962fb40-1170-4c00-b1bf-92966aeae807") : secret "image-registry-operator-tls" not found
Mar 18 08:49:00.902568 master-0 kubenswrapper[7620]: E0318 08:49:00.902550 7620 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found
Mar 18 08:49:00.902600 master-0 kubenswrapper[7620]: E0318 08:49:00.902582 7620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3d9fe248-ba87-47e3-911a-1b2b112b5683-srv-cert podName:3d9fe248-ba87-47e3-911a-1b2b112b5683 nodeName:}" failed. No retries permitted until 2026-03-18 08:49:04.902573462 +0000 UTC m=+8.897355214 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/3d9fe248-ba87-47e3-911a-1b2b112b5683-srv-cert") pod "olm-operator-5c9796789-sl5kr" (UID: "3d9fe248-ba87-47e3-911a-1b2b112b5683") : secret "olm-operator-serving-cert" not found
Mar 18 08:49:00.902679 master-0 kubenswrapper[7620]: E0318 08:49:00.902657 7620 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found
Mar 18 08:49:00.902713 master-0 kubenswrapper[7620]: E0318 08:49:00.902693 7620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e025d334-20e7-491f-8027-194251398747-metrics-tls podName:e025d334-20e7-491f-8027-194251398747 nodeName:}" failed. No retries permitted until 2026-03-18 08:49:04.902684575 +0000 UTC m=+8.897466327 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/e025d334-20e7-491f-8027-194251398747-metrics-tls") pod "dns-operator-9c5679d8f-b9pn7" (UID: "e025d334-20e7-491f-8027-194251398747") : secret "metrics-tls" not found
Mar 18 08:49:00.902768 master-0 kubenswrapper[7620]: E0318 08:49:00.902741 7620 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found
Mar 18 08:49:00.902800 master-0 kubenswrapper[7620]: E0318 08:49:00.902772 7620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3d0b7f60-c32e-48a6-b9e9-87c8f018367d-serving-cert podName:3d0b7f60-c32e-48a6-b9e9-87c8f018367d nodeName:}" failed. No retries permitted until 2026-03-18 08:49:04.902764887 +0000 UTC m=+8.897546639 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/3d0b7f60-c32e-48a6-b9e9-87c8f018367d-serving-cert") pod "cluster-version-operator-56d8475767-2xjqg" (UID: "3d0b7f60-c32e-48a6-b9e9-87c8f018367d") : secret "cluster-version-operator-serving-cert" not found
Mar 18 08:49:00.902839 master-0 kubenswrapper[7620]: E0318 08:49:00.902825 7620 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found
Mar 18 08:49:00.902942 master-0 kubenswrapper[7620]: E0318 08:49:00.902902 7620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/59d50dd5-6793-4f96-a769-31e086ecc7e4-package-server-manager-serving-cert podName:59d50dd5-6793-4f96-a769-31e086ecc7e4 nodeName:}" failed. No retries permitted until 2026-03-18 08:49:04.90284714 +0000 UTC m=+8.897628892 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/59d50dd5-6793-4f96-a769-31e086ecc7e4-package-server-manager-serving-cert") pod "package-server-manager-7b95f86987-q8ff6" (UID: "59d50dd5-6793-4f96-a769-31e086ecc7e4") : secret "package-server-manager-serving-cert" not found
Mar 18 08:49:00.902996 master-0 kubenswrapper[7620]: E0318 08:49:00.902955 7620 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found
Mar 18 08:49:00.902996 master-0 kubenswrapper[7620]: E0318 08:49:00.902983 7620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bd8ee9ae-4765-46bc-84d7-b5857fc3fb4a-apiservice-cert podName:bd8ee9ae-4765-46bc-84d7-b5857fc3fb4a nodeName:}" failed. No retries permitted until 2026-03-18 08:49:04.902975464 +0000 UTC m=+8.897757216 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/bd8ee9ae-4765-46bc-84d7-b5857fc3fb4a-apiservice-cert") pod "cluster-node-tuning-operator-598fbc5f8f-tj9b9" (UID: "bd8ee9ae-4765-46bc-84d7-b5857fc3fb4a") : secret "performance-addon-operator-webhook-cert" not found Mar 18 08:49:00.903062 master-0 kubenswrapper[7620]: E0318 08:49:00.903029 7620 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found Mar 18 08:49:00.903062 master-0 kubenswrapper[7620]: E0318 08:49:00.903057 7620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e7b72267-fc08-41ed-a92b-9fca7372aba6-cluster-monitoring-operator-tls podName:e7b72267-fc08-41ed-a92b-9fca7372aba6 nodeName:}" failed. No retries permitted until 2026-03-18 08:49:04.903049396 +0000 UTC m=+8.897831148 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/e7b72267-fc08-41ed-a92b-9fca7372aba6-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-58845fbb57-nc7hf" (UID: "e7b72267-fc08-41ed-a92b-9fca7372aba6") : secret "cluster-monitoring-operator-tls" not found Mar 18 08:49:00.903121 master-0 kubenswrapper[7620]: E0318 08:49:00.903106 7620 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found Mar 18 08:49:00.903155 master-0 kubenswrapper[7620]: E0318 08:49:00.903132 7620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/94e9e4fc-bfaa-4ba5-ad6d-fe76b91932e9-metrics-tls podName:94e9e4fc-bfaa-4ba5-ad6d-fe76b91932e9 nodeName:}" failed. No retries permitted until 2026-03-18 08:49:04.903123338 +0000 UTC m=+8.897905090 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/94e9e4fc-bfaa-4ba5-ad6d-fe76b91932e9-metrics-tls") pod "ingress-operator-66b84d69b-7h94d" (UID: "94e9e4fc-bfaa-4ba5-ad6d-fe76b91932e9") : secret "metrics-tls" not found Mar 18 08:49:00.903192 master-0 kubenswrapper[7620]: E0318 08:49:00.903182 7620 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found Mar 18 08:49:00.903222 master-0 kubenswrapper[7620]: E0318 08:49:00.903210 7620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/159a26f5-3cfc-4db2-88e9-bff5d8a613fc-webhook-certs podName:159a26f5-3cfc-4db2-88e9-bff5d8a613fc nodeName:}" failed. No retries permitted until 2026-03-18 08:49:04.90320187 +0000 UTC m=+8.897983622 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/159a26f5-3cfc-4db2-88e9-bff5d8a613fc-webhook-certs") pod "multus-admission-controller-5dbbb8b86f-2cf64" (UID: "159a26f5-3cfc-4db2-88e9-bff5d8a613fc") : secret "multus-admission-controller-secret" not found Mar 18 08:49:00.943472 master-0 kubenswrapper[7620]: I0318 08:49:00.943086 7620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-storage-operator/csi-snapshot-controller-64854d9cff-khm5n"] Mar 18 08:49:01.149916 master-0 kubenswrapper[7620]: I0318 08:49:01.149827 7620 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-f5df8899c-qqwql"] Mar 18 08:49:01.158618 master-0 kubenswrapper[7620]: I0318 08:49:01.156749 7620 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-f5df8899c-qqwql" Mar 18 08:49:01.165965 master-0 kubenswrapper[7620]: I0318 08:49:01.161021 7620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Mar 18 08:49:01.165965 master-0 kubenswrapper[7620]: I0318 08:49:01.161229 7620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Mar 18 08:49:01.165965 master-0 kubenswrapper[7620]: I0318 08:49:01.161741 7620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-f5df8899c-qqwql"] Mar 18 08:49:01.165965 master-0 kubenswrapper[7620]: I0318 08:49:01.163568 7620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Mar 18 08:49:01.165965 master-0 kubenswrapper[7620]: I0318 08:49:01.164383 7620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Mar 18 08:49:01.165965 master-0 kubenswrapper[7620]: I0318 08:49:01.164586 7620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Mar 18 08:49:01.165965 master-0 kubenswrapper[7620]: I0318 08:49:01.164719 7620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Mar 18 08:49:01.205704 master-0 kubenswrapper[7620]: I0318 08:49:01.205520 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7e0cc3a7-4bac-438b-ae67-774dc8eb39a1-proxy-ca-bundles\") pod \"controller-manager-f5df8899c-qqwql\" (UID: \"7e0cc3a7-4bac-438b-ae67-774dc8eb39a1\") " pod="openshift-controller-manager/controller-manager-f5df8899c-qqwql" Mar 18 08:49:01.205704 master-0 kubenswrapper[7620]: I0318 08:49:01.205598 7620 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7e0cc3a7-4bac-438b-ae67-774dc8eb39a1-config\") pod \"controller-manager-f5df8899c-qqwql\" (UID: \"7e0cc3a7-4bac-438b-ae67-774dc8eb39a1\") " pod="openshift-controller-manager/controller-manager-f5df8899c-qqwql" Mar 18 08:49:01.205704 master-0 kubenswrapper[7620]: I0318 08:49:01.205651 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nngbn\" (UniqueName: \"kubernetes.io/projected/7e0cc3a7-4bac-438b-ae67-774dc8eb39a1-kube-api-access-nngbn\") pod \"controller-manager-f5df8899c-qqwql\" (UID: \"7e0cc3a7-4bac-438b-ae67-774dc8eb39a1\") " pod="openshift-controller-manager/controller-manager-f5df8899c-qqwql" Mar 18 08:49:01.205704 master-0 kubenswrapper[7620]: I0318 08:49:01.205696 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7e0cc3a7-4bac-438b-ae67-774dc8eb39a1-serving-cert\") pod \"controller-manager-f5df8899c-qqwql\" (UID: \"7e0cc3a7-4bac-438b-ae67-774dc8eb39a1\") " pod="openshift-controller-manager/controller-manager-f5df8899c-qqwql" Mar 18 08:49:01.206161 master-0 kubenswrapper[7620]: I0318 08:49:01.205750 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7e0cc3a7-4bac-438b-ae67-774dc8eb39a1-client-ca\") pod \"controller-manager-f5df8899c-qqwql\" (UID: \"7e0cc3a7-4bac-438b-ae67-774dc8eb39a1\") " pod="openshift-controller-manager/controller-manager-f5df8899c-qqwql" Mar 18 08:49:01.226182 master-0 kubenswrapper[7620]: I0318 08:49:01.224639 7620 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-8487694857-ld5l8"] Mar 18 08:49:01.226182 master-0 kubenswrapper[7620]: I0318 08:49:01.225562 7620 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-8487694857-ld5l8" Mar 18 08:49:01.228241 master-0 kubenswrapper[7620]: I0318 08:49:01.228202 7620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Mar 18 08:49:01.228420 master-0 kubenswrapper[7620]: I0318 08:49:01.228395 7620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Mar 18 08:49:01.248959 master-0 kubenswrapper[7620]: I0318 08:49:01.248894 7620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-8487694857-ld5l8"] Mar 18 08:49:01.307010 master-0 kubenswrapper[7620]: I0318 08:49:01.306951 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d2bwv\" (UniqueName: \"kubernetes.io/projected/8ebaeb8d-8fbd-4638-9516-fc4e90ba2fa8-kube-api-access-d2bwv\") pod \"migrator-8487694857-ld5l8\" (UID: \"8ebaeb8d-8fbd-4638-9516-fc4e90ba2fa8\") " pod="openshift-kube-storage-version-migrator/migrator-8487694857-ld5l8" Mar 18 08:49:01.307201 master-0 kubenswrapper[7620]: I0318 08:49:01.307100 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7e0cc3a7-4bac-438b-ae67-774dc8eb39a1-proxy-ca-bundles\") pod \"controller-manager-f5df8899c-qqwql\" (UID: \"7e0cc3a7-4bac-438b-ae67-774dc8eb39a1\") " pod="openshift-controller-manager/controller-manager-f5df8899c-qqwql" Mar 18 08:49:01.307201 master-0 kubenswrapper[7620]: I0318 08:49:01.307150 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7e0cc3a7-4bac-438b-ae67-774dc8eb39a1-config\") pod \"controller-manager-f5df8899c-qqwql\" (UID: \"7e0cc3a7-4bac-438b-ae67-774dc8eb39a1\") " 
pod="openshift-controller-manager/controller-manager-f5df8899c-qqwql" Mar 18 08:49:01.307276 master-0 kubenswrapper[7620]: I0318 08:49:01.307196 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nngbn\" (UniqueName: \"kubernetes.io/projected/7e0cc3a7-4bac-438b-ae67-774dc8eb39a1-kube-api-access-nngbn\") pod \"controller-manager-f5df8899c-qqwql\" (UID: \"7e0cc3a7-4bac-438b-ae67-774dc8eb39a1\") " pod="openshift-controller-manager/controller-manager-f5df8899c-qqwql" Mar 18 08:49:01.307276 master-0 kubenswrapper[7620]: I0318 08:49:01.307253 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7e0cc3a7-4bac-438b-ae67-774dc8eb39a1-serving-cert\") pod \"controller-manager-f5df8899c-qqwql\" (UID: \"7e0cc3a7-4bac-438b-ae67-774dc8eb39a1\") " pod="openshift-controller-manager/controller-manager-f5df8899c-qqwql" Mar 18 08:49:01.307339 master-0 kubenswrapper[7620]: I0318 08:49:01.307321 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7e0cc3a7-4bac-438b-ae67-774dc8eb39a1-client-ca\") pod \"controller-manager-f5df8899c-qqwql\" (UID: \"7e0cc3a7-4bac-438b-ae67-774dc8eb39a1\") " pod="openshift-controller-manager/controller-manager-f5df8899c-qqwql" Mar 18 08:49:01.307597 master-0 kubenswrapper[7620]: E0318 08:49:01.307564 7620 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found Mar 18 08:49:01.307597 master-0 kubenswrapper[7620]: E0318 08:49:01.307592 7620 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: secret "serving-cert" not found Mar 18 08:49:01.307673 master-0 kubenswrapper[7620]: E0318 08:49:01.307642 7620 configmap.go:193] Couldn't get configMap openshift-controller-manager/openshift-global-ca: configmap "openshift-global-ca" not found Mar 18 08:49:01.307701 master-0 
kubenswrapper[7620]: E0318 08:49:01.307651 7620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7e0cc3a7-4bac-438b-ae67-774dc8eb39a1-serving-cert podName:7e0cc3a7-4bac-438b-ae67-774dc8eb39a1 nodeName:}" failed. No retries permitted until 2026-03-18 08:49:01.807632921 +0000 UTC m=+5.802414673 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/7e0cc3a7-4bac-438b-ae67-774dc8eb39a1-serving-cert") pod "controller-manager-f5df8899c-qqwql" (UID: "7e0cc3a7-4bac-438b-ae67-774dc8eb39a1") : secret "serving-cert" not found Mar 18 08:49:01.307741 master-0 kubenswrapper[7620]: E0318 08:49:01.307702 7620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7e0cc3a7-4bac-438b-ae67-774dc8eb39a1-client-ca podName:7e0cc3a7-4bac-438b-ae67-774dc8eb39a1 nodeName:}" failed. No retries permitted until 2026-03-18 08:49:01.807690002 +0000 UTC m=+5.802471754 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/7e0cc3a7-4bac-438b-ae67-774dc8eb39a1-client-ca") pod "controller-manager-f5df8899c-qqwql" (UID: "7e0cc3a7-4bac-438b-ae67-774dc8eb39a1") : configmap "client-ca" not found Mar 18 08:49:01.307741 master-0 kubenswrapper[7620]: E0318 08:49:01.307722 7620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7e0cc3a7-4bac-438b-ae67-774dc8eb39a1-proxy-ca-bundles podName:7e0cc3a7-4bac-438b-ae67-774dc8eb39a1 nodeName:}" failed. No retries permitted until 2026-03-18 08:49:01.807714283 +0000 UTC m=+5.802496035 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "proxy-ca-bundles" (UniqueName: "kubernetes.io/configmap/7e0cc3a7-4bac-438b-ae67-774dc8eb39a1-proxy-ca-bundles") pod "controller-manager-f5df8899c-qqwql" (UID: "7e0cc3a7-4bac-438b-ae67-774dc8eb39a1") : configmap "openshift-global-ca" not found Mar 18 08:49:01.307932 master-0 kubenswrapper[7620]: E0318 08:49:01.307905 7620 configmap.go:193] Couldn't get configMap openshift-controller-manager/config: configmap "config" not found Mar 18 08:49:01.308014 master-0 kubenswrapper[7620]: E0318 08:49:01.308000 7620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7e0cc3a7-4bac-438b-ae67-774dc8eb39a1-config podName:7e0cc3a7-4bac-438b-ae67-774dc8eb39a1 nodeName:}" failed. No retries permitted until 2026-03-18 08:49:01.807976691 +0000 UTC m=+5.802758463 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/7e0cc3a7-4bac-438b-ae67-774dc8eb39a1-config") pod "controller-manager-f5df8899c-qqwql" (UID: "7e0cc3a7-4bac-438b-ae67-774dc8eb39a1") : configmap "config" not found Mar 18 08:49:01.361498 master-0 kubenswrapper[7620]: I0318 08:49:01.361425 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nngbn\" (UniqueName: \"kubernetes.io/projected/7e0cc3a7-4bac-438b-ae67-774dc8eb39a1-kube-api-access-nngbn\") pod \"controller-manager-f5df8899c-qqwql\" (UID: \"7e0cc3a7-4bac-438b-ae67-774dc8eb39a1\") " pod="openshift-controller-manager/controller-manager-f5df8899c-qqwql" Mar 18 08:49:01.409249 master-0 kubenswrapper[7620]: I0318 08:49:01.409182 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d2bwv\" (UniqueName: \"kubernetes.io/projected/8ebaeb8d-8fbd-4638-9516-fc4e90ba2fa8-kube-api-access-d2bwv\") pod \"migrator-8487694857-ld5l8\" (UID: \"8ebaeb8d-8fbd-4638-9516-fc4e90ba2fa8\") " pod="openshift-kube-storage-version-migrator/migrator-8487694857-ld5l8" Mar 18 
08:49:01.463674 master-0 kubenswrapper[7620]: I0318 08:49:01.463549 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-64854d9cff-khm5n" event={"ID":"29ba6765-61c9-4f78-8f44-570418000c5c","Type":"ContainerStarted","Data":"5a2943917dc38b0012b7ecf0b0d92cb0eaf6fda9f9ba0f60f4167aa1dddca628"} Mar 18 08:49:01.467256 master-0 kubenswrapper[7620]: I0318 08:49:01.467220 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d2bwv\" (UniqueName: \"kubernetes.io/projected/8ebaeb8d-8fbd-4638-9516-fc4e90ba2fa8-kube-api-access-d2bwv\") pod \"migrator-8487694857-ld5l8\" (UID: \"8ebaeb8d-8fbd-4638-9516-fc4e90ba2fa8\") " pod="openshift-kube-storage-version-migrator/migrator-8487694857-ld5l8" Mar 18 08:49:01.546817 master-0 kubenswrapper[7620]: I0318 08:49:01.546748 7620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-8487694857-ld5l8" Mar 18 08:49:01.815306 master-0 kubenswrapper[7620]: I0318 08:49:01.814924 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7e0cc3a7-4bac-438b-ae67-774dc8eb39a1-proxy-ca-bundles\") pod \"controller-manager-f5df8899c-qqwql\" (UID: \"7e0cc3a7-4bac-438b-ae67-774dc8eb39a1\") " pod="openshift-controller-manager/controller-manager-f5df8899c-qqwql" Mar 18 08:49:01.815306 master-0 kubenswrapper[7620]: I0318 08:49:01.815293 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7e0cc3a7-4bac-438b-ae67-774dc8eb39a1-config\") pod \"controller-manager-f5df8899c-qqwql\" (UID: \"7e0cc3a7-4bac-438b-ae67-774dc8eb39a1\") " pod="openshift-controller-manager/controller-manager-f5df8899c-qqwql" Mar 18 08:49:01.816337 master-0 kubenswrapper[7620]: I0318 08:49:01.815361 7620 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7e0cc3a7-4bac-438b-ae67-774dc8eb39a1-serving-cert\") pod \"controller-manager-f5df8899c-qqwql\" (UID: \"7e0cc3a7-4bac-438b-ae67-774dc8eb39a1\") " pod="openshift-controller-manager/controller-manager-f5df8899c-qqwql" Mar 18 08:49:01.816337 master-0 kubenswrapper[7620]: I0318 08:49:01.815404 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7e0cc3a7-4bac-438b-ae67-774dc8eb39a1-client-ca\") pod \"controller-manager-f5df8899c-qqwql\" (UID: \"7e0cc3a7-4bac-438b-ae67-774dc8eb39a1\") " pod="openshift-controller-manager/controller-manager-f5df8899c-qqwql" Mar 18 08:49:01.816337 master-0 kubenswrapper[7620]: E0318 08:49:01.815526 7620 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found Mar 18 08:49:01.816337 master-0 kubenswrapper[7620]: E0318 08:49:01.815585 7620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7e0cc3a7-4bac-438b-ae67-774dc8eb39a1-client-ca podName:7e0cc3a7-4bac-438b-ae67-774dc8eb39a1 nodeName:}" failed. No retries permitted until 2026-03-18 08:49:02.815569108 +0000 UTC m=+6.810350860 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/7e0cc3a7-4bac-438b-ae67-774dc8eb39a1-client-ca") pod "controller-manager-f5df8899c-qqwql" (UID: "7e0cc3a7-4bac-438b-ae67-774dc8eb39a1") : configmap "client-ca" not found Mar 18 08:49:01.816983 master-0 kubenswrapper[7620]: I0318 08:49:01.816953 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7e0cc3a7-4bac-438b-ae67-774dc8eb39a1-proxy-ca-bundles\") pod \"controller-manager-f5df8899c-qqwql\" (UID: \"7e0cc3a7-4bac-438b-ae67-774dc8eb39a1\") " pod="openshift-controller-manager/controller-manager-f5df8899c-qqwql" Mar 18 08:49:01.817325 master-0 kubenswrapper[7620]: E0318 08:49:01.817300 7620 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: secret "serving-cert" not found Mar 18 08:49:01.817422 master-0 kubenswrapper[7620]: E0318 08:49:01.817339 7620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7e0cc3a7-4bac-438b-ae67-774dc8eb39a1-serving-cert podName:7e0cc3a7-4bac-438b-ae67-774dc8eb39a1 nodeName:}" failed. No retries permitted until 2026-03-18 08:49:02.81732993 +0000 UTC m=+6.812111672 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/7e0cc3a7-4bac-438b-ae67-774dc8eb39a1-serving-cert") pod "controller-manager-f5df8899c-qqwql" (UID: "7e0cc3a7-4bac-438b-ae67-774dc8eb39a1") : secret "serving-cert" not found Mar 18 08:49:01.817685 master-0 kubenswrapper[7620]: I0318 08:49:01.817650 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7e0cc3a7-4bac-438b-ae67-774dc8eb39a1-config\") pod \"controller-manager-f5df8899c-qqwql\" (UID: \"7e0cc3a7-4bac-438b-ae67-774dc8eb39a1\") " pod="openshift-controller-manager/controller-manager-f5df8899c-qqwql" Mar 18 08:49:02.277004 master-0 kubenswrapper[7620]: I0318 08:49:02.276932 7620 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-78cdbfbbdd-j26k9"] Mar 18 08:49:02.278170 master-0 kubenswrapper[7620]: I0318 08:49:02.278130 7620 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-78cdbfbbdd-j26k9" Mar 18 08:49:02.282021 master-0 kubenswrapper[7620]: I0318 08:49:02.281969 7620 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-f5df8899c-qqwql"] Mar 18 08:49:02.282359 master-0 kubenswrapper[7620]: E0318 08:49:02.282302 7620 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[client-ca serving-cert], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-controller-manager/controller-manager-f5df8899c-qqwql" podUID="7e0cc3a7-4bac-438b-ae67-774dc8eb39a1" Mar 18 08:49:02.285352 master-0 kubenswrapper[7620]: I0318 08:49:02.285290 7620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Mar 18 08:49:02.285833 master-0 kubenswrapper[7620]: I0318 08:49:02.285791 7620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Mar 18 08:49:02.286280 master-0 kubenswrapper[7620]: I0318 08:49:02.286236 7620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Mar 18 08:49:02.289888 master-0 kubenswrapper[7620]: I0318 08:49:02.289227 7620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Mar 18 08:49:02.303161 master-0 kubenswrapper[7620]: I0318 08:49:02.303064 7620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-78cdbfbbdd-j26k9"] Mar 18 08:49:02.320520 master-0 kubenswrapper[7620]: I0318 08:49:02.309611 7620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Mar 18 08:49:02.326651 master-0 kubenswrapper[7620]: I0318 08:49:02.326483 7620 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a491fb9d-b7c2-4086-8dd6-ba5a77dc446c-serving-cert\") pod \"route-controller-manager-78cdbfbbdd-j26k9\" (UID: \"a491fb9d-b7c2-4086-8dd6-ba5a77dc446c\") " pod="openshift-route-controller-manager/route-controller-manager-78cdbfbbdd-j26k9" Mar 18 08:49:02.326651 master-0 kubenswrapper[7620]: I0318 08:49:02.326573 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a491fb9d-b7c2-4086-8dd6-ba5a77dc446c-client-ca\") pod \"route-controller-manager-78cdbfbbdd-j26k9\" (UID: \"a491fb9d-b7c2-4086-8dd6-ba5a77dc446c\") " pod="openshift-route-controller-manager/route-controller-manager-78cdbfbbdd-j26k9" Mar 18 08:49:02.326651 master-0 kubenswrapper[7620]: I0318 08:49:02.326611 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4b656\" (UniqueName: \"kubernetes.io/projected/a491fb9d-b7c2-4086-8dd6-ba5a77dc446c-kube-api-access-4b656\") pod \"route-controller-manager-78cdbfbbdd-j26k9\" (UID: \"a491fb9d-b7c2-4086-8dd6-ba5a77dc446c\") " pod="openshift-route-controller-manager/route-controller-manager-78cdbfbbdd-j26k9" Mar 18 08:49:02.326966 master-0 kubenswrapper[7620]: I0318 08:49:02.326680 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a491fb9d-b7c2-4086-8dd6-ba5a77dc446c-config\") pod \"route-controller-manager-78cdbfbbdd-j26k9\" (UID: \"a491fb9d-b7c2-4086-8dd6-ba5a77dc446c\") " pod="openshift-route-controller-manager/route-controller-manager-78cdbfbbdd-j26k9" Mar 18 08:49:02.432876 master-0 kubenswrapper[7620]: I0318 08:49:02.430434 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/a491fb9d-b7c2-4086-8dd6-ba5a77dc446c-serving-cert\") pod \"route-controller-manager-78cdbfbbdd-j26k9\" (UID: \"a491fb9d-b7c2-4086-8dd6-ba5a77dc446c\") " pod="openshift-route-controller-manager/route-controller-manager-78cdbfbbdd-j26k9"
Mar 18 08:49:02.432876 master-0 kubenswrapper[7620]: I0318 08:49:02.430491 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a491fb9d-b7c2-4086-8dd6-ba5a77dc446c-client-ca\") pod \"route-controller-manager-78cdbfbbdd-j26k9\" (UID: \"a491fb9d-b7c2-4086-8dd6-ba5a77dc446c\") " pod="openshift-route-controller-manager/route-controller-manager-78cdbfbbdd-j26k9"
Mar 18 08:49:02.432876 master-0 kubenswrapper[7620]: I0318 08:49:02.430508 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4b656\" (UniqueName: \"kubernetes.io/projected/a491fb9d-b7c2-4086-8dd6-ba5a77dc446c-kube-api-access-4b656\") pod \"route-controller-manager-78cdbfbbdd-j26k9\" (UID: \"a491fb9d-b7c2-4086-8dd6-ba5a77dc446c\") " pod="openshift-route-controller-manager/route-controller-manager-78cdbfbbdd-j26k9"
Mar 18 08:49:02.432876 master-0 kubenswrapper[7620]: I0318 08:49:02.430542 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a491fb9d-b7c2-4086-8dd6-ba5a77dc446c-config\") pod \"route-controller-manager-78cdbfbbdd-j26k9\" (UID: \"a491fb9d-b7c2-4086-8dd6-ba5a77dc446c\") " pod="openshift-route-controller-manager/route-controller-manager-78cdbfbbdd-j26k9"
Mar 18 08:49:02.432876 master-0 kubenswrapper[7620]: I0318 08:49:02.431433 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a491fb9d-b7c2-4086-8dd6-ba5a77dc446c-config\") pod \"route-controller-manager-78cdbfbbdd-j26k9\" (UID: \"a491fb9d-b7c2-4086-8dd6-ba5a77dc446c\") " pod="openshift-route-controller-manager/route-controller-manager-78cdbfbbdd-j26k9"
Mar 18 08:49:02.432876 master-0 kubenswrapper[7620]: E0318 08:49:02.431523 7620 secret.go:189] Couldn't get secret openshift-route-controller-manager/serving-cert: secret "serving-cert" not found
Mar 18 08:49:02.432876 master-0 kubenswrapper[7620]: E0318 08:49:02.431562 7620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a491fb9d-b7c2-4086-8dd6-ba5a77dc446c-serving-cert podName:a491fb9d-b7c2-4086-8dd6-ba5a77dc446c nodeName:}" failed. No retries permitted until 2026-03-18 08:49:02.931549437 +0000 UTC m=+6.926331189 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/a491fb9d-b7c2-4086-8dd6-ba5a77dc446c-serving-cert") pod "route-controller-manager-78cdbfbbdd-j26k9" (UID: "a491fb9d-b7c2-4086-8dd6-ba5a77dc446c") : secret "serving-cert" not found
Mar 18 08:49:02.432876 master-0 kubenswrapper[7620]: E0318 08:49:02.431895 7620 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: configmap "client-ca" not found
Mar 18 08:49:02.432876 master-0 kubenswrapper[7620]: E0318 08:49:02.431958 7620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a491fb9d-b7c2-4086-8dd6-ba5a77dc446c-client-ca podName:a491fb9d-b7c2-4086-8dd6-ba5a77dc446c nodeName:}" failed. No retries permitted until 2026-03-18 08:49:02.931921808 +0000 UTC m=+6.926703560 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/a491fb9d-b7c2-4086-8dd6-ba5a77dc446c-client-ca") pod "route-controller-manager-78cdbfbbdd-j26k9" (UID: "a491fb9d-b7c2-4086-8dd6-ba5a77dc446c") : configmap "client-ca" not found
Mar 18 08:49:02.470880 master-0 kubenswrapper[7620]: I0318 08:49:02.470524 7620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-f5df8899c-qqwql"
Mar 18 08:49:02.478835 master-0 kubenswrapper[7620]: I0318 08:49:02.478721 7620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-f5df8899c-qqwql"
Mar 18 08:49:02.511840 master-0 kubenswrapper[7620]: I0318 08:49:02.511788 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4b656\" (UniqueName: \"kubernetes.io/projected/a491fb9d-b7c2-4086-8dd6-ba5a77dc446c-kube-api-access-4b656\") pod \"route-controller-manager-78cdbfbbdd-j26k9\" (UID: \"a491fb9d-b7c2-4086-8dd6-ba5a77dc446c\") " pod="openshift-route-controller-manager/route-controller-manager-78cdbfbbdd-j26k9"
Mar 18 08:49:02.531636 master-0 kubenswrapper[7620]: I0318 08:49:02.531520 7620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7e0cc3a7-4bac-438b-ae67-774dc8eb39a1-config\") pod \"7e0cc3a7-4bac-438b-ae67-774dc8eb39a1\" (UID: \"7e0cc3a7-4bac-438b-ae67-774dc8eb39a1\") "
Mar 18 08:49:02.531800 master-0 kubenswrapper[7620]: I0318 08:49:02.531751 7620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nngbn\" (UniqueName: \"kubernetes.io/projected/7e0cc3a7-4bac-438b-ae67-774dc8eb39a1-kube-api-access-nngbn\") pod \"7e0cc3a7-4bac-438b-ae67-774dc8eb39a1\" (UID: \"7e0cc3a7-4bac-438b-ae67-774dc8eb39a1\") "
Mar 18 08:49:02.531800 master-0 kubenswrapper[7620]: I0318 08:49:02.531798 7620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7e0cc3a7-4bac-438b-ae67-774dc8eb39a1-proxy-ca-bundles\") pod \"7e0cc3a7-4bac-438b-ae67-774dc8eb39a1\" (UID: \"7e0cc3a7-4bac-438b-ae67-774dc8eb39a1\") "
Mar 18 08:49:02.532499 master-0 kubenswrapper[7620]: I0318 08:49:02.532115 7620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7e0cc3a7-4bac-438b-ae67-774dc8eb39a1-config" (OuterVolumeSpecName: "config") pod "7e0cc3a7-4bac-438b-ae67-774dc8eb39a1" (UID: "7e0cc3a7-4bac-438b-ae67-774dc8eb39a1"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 18 08:49:02.532499 master-0 kubenswrapper[7620]: I0318 08:49:02.532410 7620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7e0cc3a7-4bac-438b-ae67-774dc8eb39a1-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7e0cc3a7-4bac-438b-ae67-774dc8eb39a1" (UID: "7e0cc3a7-4bac-438b-ae67-774dc8eb39a1"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 18 08:49:02.532575 master-0 kubenswrapper[7620]: I0318 08:49:02.532494 7620 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7e0cc3a7-4bac-438b-ae67-774dc8eb39a1-config\") on node \"master-0\" DevicePath \"\""
Mar 18 08:49:02.535395 master-0 kubenswrapper[7620]: I0318 08:49:02.535351 7620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7e0cc3a7-4bac-438b-ae67-774dc8eb39a1-kube-api-access-nngbn" (OuterVolumeSpecName: "kube-api-access-nngbn") pod "7e0cc3a7-4bac-438b-ae67-774dc8eb39a1" (UID: "7e0cc3a7-4bac-438b-ae67-774dc8eb39a1"). InnerVolumeSpecName "kube-api-access-nngbn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 18 08:49:02.633906 master-0 kubenswrapper[7620]: I0318 08:49:02.633793 7620 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nngbn\" (UniqueName: \"kubernetes.io/projected/7e0cc3a7-4bac-438b-ae67-774dc8eb39a1-kube-api-access-nngbn\") on node \"master-0\" DevicePath \"\""
Mar 18 08:49:02.633906 master-0 kubenswrapper[7620]: I0318 08:49:02.633904 7620 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7e0cc3a7-4bac-438b-ae67-774dc8eb39a1-proxy-ca-bundles\") on node \"master-0\" DevicePath \"\""
Mar 18 08:49:02.837399 master-0 kubenswrapper[7620]: I0318 08:49:02.837272 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7e0cc3a7-4bac-438b-ae67-774dc8eb39a1-serving-cert\") pod \"controller-manager-f5df8899c-qqwql\" (UID: \"7e0cc3a7-4bac-438b-ae67-774dc8eb39a1\") " pod="openshift-controller-manager/controller-manager-f5df8899c-qqwql"
Mar 18 08:49:02.837399 master-0 kubenswrapper[7620]: I0318 08:49:02.837357 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7e0cc3a7-4bac-438b-ae67-774dc8eb39a1-client-ca\") pod \"controller-manager-f5df8899c-qqwql\" (UID: \"7e0cc3a7-4bac-438b-ae67-774dc8eb39a1\") " pod="openshift-controller-manager/controller-manager-f5df8899c-qqwql"
Mar 18 08:49:02.837963 master-0 kubenswrapper[7620]: E0318 08:49:02.837555 7620 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: secret "serving-cert" not found
Mar 18 08:49:02.837963 master-0 kubenswrapper[7620]: E0318 08:49:02.837678 7620 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found
Mar 18 08:49:02.837963 master-0 kubenswrapper[7620]: E0318 08:49:02.837683 7620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7e0cc3a7-4bac-438b-ae67-774dc8eb39a1-serving-cert podName:7e0cc3a7-4bac-438b-ae67-774dc8eb39a1 nodeName:}" failed. No retries permitted until 2026-03-18 08:49:04.837640047 +0000 UTC m=+8.832421789 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/7e0cc3a7-4bac-438b-ae67-774dc8eb39a1-serving-cert") pod "controller-manager-f5df8899c-qqwql" (UID: "7e0cc3a7-4bac-438b-ae67-774dc8eb39a1") : secret "serving-cert" not found
Mar 18 08:49:02.837963 master-0 kubenswrapper[7620]: E0318 08:49:02.837734 7620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7e0cc3a7-4bac-438b-ae67-774dc8eb39a1-client-ca podName:7e0cc3a7-4bac-438b-ae67-774dc8eb39a1 nodeName:}" failed. No retries permitted until 2026-03-18 08:49:04.837716609 +0000 UTC m=+8.832498361 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/7e0cc3a7-4bac-438b-ae67-774dc8eb39a1-client-ca") pod "controller-manager-f5df8899c-qqwql" (UID: "7e0cc3a7-4bac-438b-ae67-774dc8eb39a1") : configmap "client-ca" not found
Mar 18 08:49:02.938538 master-0 kubenswrapper[7620]: I0318 08:49:02.938482 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a491fb9d-b7c2-4086-8dd6-ba5a77dc446c-serving-cert\") pod \"route-controller-manager-78cdbfbbdd-j26k9\" (UID: \"a491fb9d-b7c2-4086-8dd6-ba5a77dc446c\") " pod="openshift-route-controller-manager/route-controller-manager-78cdbfbbdd-j26k9"
Mar 18 08:49:02.938538 master-0 kubenswrapper[7620]: I0318 08:49:02.938542 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a491fb9d-b7c2-4086-8dd6-ba5a77dc446c-client-ca\") pod \"route-controller-manager-78cdbfbbdd-j26k9\" (UID: \"a491fb9d-b7c2-4086-8dd6-ba5a77dc446c\") " pod="openshift-route-controller-manager/route-controller-manager-78cdbfbbdd-j26k9"
Mar 18 08:49:02.939070 master-0 kubenswrapper[7620]: E0318 08:49:02.938763 7620 secret.go:189] Couldn't get secret openshift-route-controller-manager/serving-cert: secret "serving-cert" not found
Mar 18 08:49:02.939070 master-0 kubenswrapper[7620]: E0318 08:49:02.938880 7620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a491fb9d-b7c2-4086-8dd6-ba5a77dc446c-serving-cert podName:a491fb9d-b7c2-4086-8dd6-ba5a77dc446c nodeName:}" failed. No retries permitted until 2026-03-18 08:49:03.938858855 +0000 UTC m=+7.933640607 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/a491fb9d-b7c2-4086-8dd6-ba5a77dc446c-serving-cert") pod "route-controller-manager-78cdbfbbdd-j26k9" (UID: "a491fb9d-b7c2-4086-8dd6-ba5a77dc446c") : secret "serving-cert" not found
Mar 18 08:49:02.939070 master-0 kubenswrapper[7620]: E0318 08:49:02.938946 7620 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: configmap "client-ca" not found
Mar 18 08:49:02.939070 master-0 kubenswrapper[7620]: E0318 08:49:02.939040 7620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a491fb9d-b7c2-4086-8dd6-ba5a77dc446c-client-ca podName:a491fb9d-b7c2-4086-8dd6-ba5a77dc446c nodeName:}" failed. No retries permitted until 2026-03-18 08:49:03.93902187 +0000 UTC m=+7.933803622 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/a491fb9d-b7c2-4086-8dd6-ba5a77dc446c-client-ca") pod "route-controller-manager-78cdbfbbdd-j26k9" (UID: "a491fb9d-b7c2-4086-8dd6-ba5a77dc446c") : configmap "client-ca" not found
Mar 18 08:49:03.386058 master-0 kubenswrapper[7620]: I0318 08:49:03.385974 7620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 18 08:49:03.474512 master-0 kubenswrapper[7620]: I0318 08:49:03.474450 7620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-f5df8899c-qqwql"
Mar 18 08:49:03.507422 master-0 kubenswrapper[7620]: I0318 08:49:03.507356 7620 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-6b4fdf4c78-scvbc"]
Mar 18 08:49:03.508159 master-0 kubenswrapper[7620]: I0318 08:49:03.508102 7620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6b4fdf4c78-scvbc"
Mar 18 08:49:03.511725 master-0 kubenswrapper[7620]: I0318 08:49:03.510553 7620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Mar 18 08:49:03.511725 master-0 kubenswrapper[7620]: I0318 08:49:03.510896 7620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Mar 18 08:49:03.511725 master-0 kubenswrapper[7620]: I0318 08:49:03.511044 7620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Mar 18 08:49:03.517203 master-0 kubenswrapper[7620]: I0318 08:49:03.516538 7620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Mar 18 08:49:03.517203 master-0 kubenswrapper[7620]: I0318 08:49:03.516876 7620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Mar 18 08:49:03.524091 master-0 kubenswrapper[7620]: I0318 08:49:03.523751 7620 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-f5df8899c-qqwql"]
Mar 18 08:49:03.530763 master-0 kubenswrapper[7620]: I0318 08:49:03.530315 7620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6b4fdf4c78-scvbc"]
Mar 18 08:49:03.532219 master-0 kubenswrapper[7620]: I0318 08:49:03.532157 7620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Mar 18 08:49:03.542410 master-0 kubenswrapper[7620]: I0318 08:49:03.542333 7620 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-f5df8899c-qqwql"]
Mar 18 08:49:03.548802 master-0 kubenswrapper[7620]: I0318 08:49:03.548407 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kz75w\" (UniqueName: \"kubernetes.io/projected/755f0a10-8da7-40e9-8494-e99914a4df1a-kube-api-access-kz75w\") pod \"controller-manager-6b4fdf4c78-scvbc\" (UID: \"755f0a10-8da7-40e9-8494-e99914a4df1a\") " pod="openshift-controller-manager/controller-manager-6b4fdf4c78-scvbc"
Mar 18 08:49:03.548802 master-0 kubenswrapper[7620]: I0318 08:49:03.548540 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/755f0a10-8da7-40e9-8494-e99914a4df1a-client-ca\") pod \"controller-manager-6b4fdf4c78-scvbc\" (UID: \"755f0a10-8da7-40e9-8494-e99914a4df1a\") " pod="openshift-controller-manager/controller-manager-6b4fdf4c78-scvbc"
Mar 18 08:49:03.548802 master-0 kubenswrapper[7620]: I0318 08:49:03.548587 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/755f0a10-8da7-40e9-8494-e99914a4df1a-config\") pod \"controller-manager-6b4fdf4c78-scvbc\" (UID: \"755f0a10-8da7-40e9-8494-e99914a4df1a\") " pod="openshift-controller-manager/controller-manager-6b4fdf4c78-scvbc"
Mar 18 08:49:03.548802 master-0 kubenswrapper[7620]: I0318 08:49:03.548687 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/755f0a10-8da7-40e9-8494-e99914a4df1a-serving-cert\") pod \"controller-manager-6b4fdf4c78-scvbc\" (UID: \"755f0a10-8da7-40e9-8494-e99914a4df1a\") " pod="openshift-controller-manager/controller-manager-6b4fdf4c78-scvbc"
Mar 18 08:49:03.548802 master-0 kubenswrapper[7620]: I0318 08:49:03.548739 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/755f0a10-8da7-40e9-8494-e99914a4df1a-proxy-ca-bundles\") pod \"controller-manager-6b4fdf4c78-scvbc\" (UID: \"755f0a10-8da7-40e9-8494-e99914a4df1a\") " pod="openshift-controller-manager/controller-manager-6b4fdf4c78-scvbc"
Mar 18 08:49:03.612680 master-0 kubenswrapper[7620]: I0318 08:49:03.612635 7620 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6b4fdf4c78-scvbc"]
Mar 18 08:49:03.613009 master-0 kubenswrapper[7620]: E0318 08:49:03.612944 7620 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[client-ca config kube-api-access-kz75w proxy-ca-bundles serving-cert], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-controller-manager/controller-manager-6b4fdf4c78-scvbc" podUID="755f0a10-8da7-40e9-8494-e99914a4df1a"
Mar 18 08:49:03.651672 master-0 kubenswrapper[7620]: I0318 08:49:03.651559 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kz75w\" (UniqueName: \"kubernetes.io/projected/755f0a10-8da7-40e9-8494-e99914a4df1a-kube-api-access-kz75w\") pod \"controller-manager-6b4fdf4c78-scvbc\" (UID: \"755f0a10-8da7-40e9-8494-e99914a4df1a\") " pod="openshift-controller-manager/controller-manager-6b4fdf4c78-scvbc"
Mar 18 08:49:03.651672 master-0 kubenswrapper[7620]: I0318 08:49:03.651654 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/755f0a10-8da7-40e9-8494-e99914a4df1a-client-ca\") pod \"controller-manager-6b4fdf4c78-scvbc\" (UID: \"755f0a10-8da7-40e9-8494-e99914a4df1a\") " pod="openshift-controller-manager/controller-manager-6b4fdf4c78-scvbc"
Mar 18 08:49:03.651928 master-0 kubenswrapper[7620]: I0318 08:49:03.651686 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/755f0a10-8da7-40e9-8494-e99914a4df1a-config\") pod \"controller-manager-6b4fdf4c78-scvbc\" (UID: \"755f0a10-8da7-40e9-8494-e99914a4df1a\") " pod="openshift-controller-manager/controller-manager-6b4fdf4c78-scvbc"
Mar 18 08:49:03.651928 master-0 kubenswrapper[7620]: I0318 08:49:03.651740 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/755f0a10-8da7-40e9-8494-e99914a4df1a-serving-cert\") pod \"controller-manager-6b4fdf4c78-scvbc\" (UID: \"755f0a10-8da7-40e9-8494-e99914a4df1a\") " pod="openshift-controller-manager/controller-manager-6b4fdf4c78-scvbc"
Mar 18 08:49:03.651928 master-0 kubenswrapper[7620]: I0318 08:49:03.651755 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/755f0a10-8da7-40e9-8494-e99914a4df1a-proxy-ca-bundles\") pod \"controller-manager-6b4fdf4c78-scvbc\" (UID: \"755f0a10-8da7-40e9-8494-e99914a4df1a\") " pod="openshift-controller-manager/controller-manager-6b4fdf4c78-scvbc"
Mar 18 08:49:03.651928 master-0 kubenswrapper[7620]: I0318 08:49:03.651786 7620 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7e0cc3a7-4bac-438b-ae67-774dc8eb39a1-serving-cert\") on node \"master-0\" DevicePath \"\""
Mar 18 08:49:03.651928 master-0 kubenswrapper[7620]: I0318 08:49:03.651798 7620 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7e0cc3a7-4bac-438b-ae67-774dc8eb39a1-client-ca\") on node \"master-0\" DevicePath \"\""
Mar 18 08:49:03.653273 master-0 kubenswrapper[7620]: I0318 08:49:03.653249 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/755f0a10-8da7-40e9-8494-e99914a4df1a-proxy-ca-bundles\") pod \"controller-manager-6b4fdf4c78-scvbc\" (UID: \"755f0a10-8da7-40e9-8494-e99914a4df1a\") " pod="openshift-controller-manager/controller-manager-6b4fdf4c78-scvbc"
Mar 18 08:49:03.653576 master-0 kubenswrapper[7620]: E0318 08:49:03.653549 7620 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found
Mar 18 08:49:03.653637 master-0 kubenswrapper[7620]: E0318 08:49:03.653592 7620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/755f0a10-8da7-40e9-8494-e99914a4df1a-client-ca podName:755f0a10-8da7-40e9-8494-e99914a4df1a nodeName:}" failed. No retries permitted until 2026-03-18 08:49:04.153579668 +0000 UTC m=+8.148361420 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/755f0a10-8da7-40e9-8494-e99914a4df1a-client-ca") pod "controller-manager-6b4fdf4c78-scvbc" (UID: "755f0a10-8da7-40e9-8494-e99914a4df1a") : configmap "client-ca" not found
Mar 18 08:49:03.654236 master-0 kubenswrapper[7620]: E0318 08:49:03.654180 7620 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: secret "serving-cert" not found
Mar 18 08:49:03.654324 master-0 kubenswrapper[7620]: E0318 08:49:03.654292 7620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/755f0a10-8da7-40e9-8494-e99914a4df1a-serving-cert podName:755f0a10-8da7-40e9-8494-e99914a4df1a nodeName:}" failed. No retries permitted until 2026-03-18 08:49:04.154267199 +0000 UTC m=+8.149048961 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/755f0a10-8da7-40e9-8494-e99914a4df1a-serving-cert") pod "controller-manager-6b4fdf4c78-scvbc" (UID: "755f0a10-8da7-40e9-8494-e99914a4df1a") : secret "serving-cert" not found
Mar 18 08:49:03.654324 master-0 kubenswrapper[7620]: I0318 08:49:03.654319 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/755f0a10-8da7-40e9-8494-e99914a4df1a-config\") pod \"controller-manager-6b4fdf4c78-scvbc\" (UID: \"755f0a10-8da7-40e9-8494-e99914a4df1a\") " pod="openshift-controller-manager/controller-manager-6b4fdf4c78-scvbc"
Mar 18 08:49:03.679907 master-0 kubenswrapper[7620]: I0318 08:49:03.679840 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kz75w\" (UniqueName: \"kubernetes.io/projected/755f0a10-8da7-40e9-8494-e99914a4df1a-kube-api-access-kz75w\") pod \"controller-manager-6b4fdf4c78-scvbc\" (UID: \"755f0a10-8da7-40e9-8494-e99914a4df1a\") " pod="openshift-controller-manager/controller-manager-6b4fdf4c78-scvbc"
Mar 18 08:49:03.961183 master-0 kubenswrapper[7620]: I0318 08:49:03.961057 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a491fb9d-b7c2-4086-8dd6-ba5a77dc446c-serving-cert\") pod \"route-controller-manager-78cdbfbbdd-j26k9\" (UID: \"a491fb9d-b7c2-4086-8dd6-ba5a77dc446c\") " pod="openshift-route-controller-manager/route-controller-manager-78cdbfbbdd-j26k9"
Mar 18 08:49:03.961183 master-0 kubenswrapper[7620]: I0318 08:49:03.961126 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a491fb9d-b7c2-4086-8dd6-ba5a77dc446c-client-ca\") pod \"route-controller-manager-78cdbfbbdd-j26k9\" (UID: \"a491fb9d-b7c2-4086-8dd6-ba5a77dc446c\") " pod="openshift-route-controller-manager/route-controller-manager-78cdbfbbdd-j26k9"
Mar 18 08:49:03.961698 master-0 kubenswrapper[7620]: E0318 08:49:03.961419 7620 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: configmap "client-ca" not found
Mar 18 08:49:03.961698 master-0 kubenswrapper[7620]: E0318 08:49:03.961500 7620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a491fb9d-b7c2-4086-8dd6-ba5a77dc446c-client-ca podName:a491fb9d-b7c2-4086-8dd6-ba5a77dc446c nodeName:}" failed. No retries permitted until 2026-03-18 08:49:05.9614782 +0000 UTC m=+9.956259962 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/a491fb9d-b7c2-4086-8dd6-ba5a77dc446c-client-ca") pod "route-controller-manager-78cdbfbbdd-j26k9" (UID: "a491fb9d-b7c2-4086-8dd6-ba5a77dc446c") : configmap "client-ca" not found
Mar 18 08:49:03.962124 master-0 kubenswrapper[7620]: E0318 08:49:03.962101 7620 secret.go:189] Couldn't get secret openshift-route-controller-manager/serving-cert: secret "serving-cert" not found
Mar 18 08:49:03.962179 master-0 kubenswrapper[7620]: E0318 08:49:03.962148 7620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a491fb9d-b7c2-4086-8dd6-ba5a77dc446c-serving-cert podName:a491fb9d-b7c2-4086-8dd6-ba5a77dc446c nodeName:}" failed. No retries permitted until 2026-03-18 08:49:05.96213653 +0000 UTC m=+9.956918292 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/a491fb9d-b7c2-4086-8dd6-ba5a77dc446c-serving-cert") pod "route-controller-manager-78cdbfbbdd-j26k9" (UID: "a491fb9d-b7c2-4086-8dd6-ba5a77dc446c") : secret "serving-cert" not found
Mar 18 08:49:04.080033 master-0 kubenswrapper[7620]: I0318 08:49:04.079987 7620 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 18 08:49:04.085047 master-0 kubenswrapper[7620]: I0318 08:49:04.085016 7620 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 18 08:49:04.163484 master-0 kubenswrapper[7620]: I0318 08:49:04.163433 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/755f0a10-8da7-40e9-8494-e99914a4df1a-serving-cert\") pod \"controller-manager-6b4fdf4c78-scvbc\" (UID: \"755f0a10-8da7-40e9-8494-e99914a4df1a\") " pod="openshift-controller-manager/controller-manager-6b4fdf4c78-scvbc"
Mar 18 08:49:04.163655 master-0 kubenswrapper[7620]: I0318 08:49:04.163621 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/755f0a10-8da7-40e9-8494-e99914a4df1a-client-ca\") pod \"controller-manager-6b4fdf4c78-scvbc\" (UID: \"755f0a10-8da7-40e9-8494-e99914a4df1a\") " pod="openshift-controller-manager/controller-manager-6b4fdf4c78-scvbc"
Mar 18 08:49:04.163764 master-0 kubenswrapper[7620]: E0318 08:49:04.163733 7620 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found
Mar 18 08:49:04.163810 master-0 kubenswrapper[7620]: E0318 08:49:04.163800 7620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/755f0a10-8da7-40e9-8494-e99914a4df1a-client-ca podName:755f0a10-8da7-40e9-8494-e99914a4df1a nodeName:}" failed. No retries permitted until 2026-03-18 08:49:05.163780973 +0000 UTC m=+9.158562735 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/755f0a10-8da7-40e9-8494-e99914a4df1a-client-ca") pod "controller-manager-6b4fdf4c78-scvbc" (UID: "755f0a10-8da7-40e9-8494-e99914a4df1a") : configmap "client-ca" not found
Mar 18 08:49:04.164464 master-0 kubenswrapper[7620]: E0318 08:49:04.164398 7620 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: secret "serving-cert" not found
Mar 18 08:49:04.165096 master-0 kubenswrapper[7620]: E0318 08:49:04.164623 7620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/755f0a10-8da7-40e9-8494-e99914a4df1a-serving-cert podName:755f0a10-8da7-40e9-8494-e99914a4df1a nodeName:}" failed. No retries permitted until 2026-03-18 08:49:05.164589737 +0000 UTC m=+9.159371529 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/755f0a10-8da7-40e9-8494-e99914a4df1a-serving-cert") pod "controller-manager-6b4fdf4c78-scvbc" (UID: "755f0a10-8da7-40e9-8494-e99914a4df1a") : secret "serving-cert" not found
Mar 18 08:49:04.246429 master-0 kubenswrapper[7620]: I0318 08:49:04.246014 7620 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7e0cc3a7-4bac-438b-ae67-774dc8eb39a1" path="/var/lib/kubelet/pods/7e0cc3a7-4bac-438b-ae67-774dc8eb39a1/volumes"
Mar 18 08:49:04.370022 master-0 kubenswrapper[7620]: I0318 08:49:04.369428 7620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-8487694857-ld5l8"]
Mar 18 08:49:04.479481 master-0 kubenswrapper[7620]: I0318 08:49:04.479402 7620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6b4fdf4c78-scvbc"
Mar 18 08:49:04.485388 master-0 kubenswrapper[7620]: I0318 08:49:04.485334 7620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 18 08:49:04.489171 master-0 kubenswrapper[7620]: I0318 08:49:04.489039 7620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6b4fdf4c78-scvbc"
Mar 18 08:49:04.568370 master-0 kubenswrapper[7620]: I0318 08:49:04.568217 7620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kz75w\" (UniqueName: \"kubernetes.io/projected/755f0a10-8da7-40e9-8494-e99914a4df1a-kube-api-access-kz75w\") pod \"755f0a10-8da7-40e9-8494-e99914a4df1a\" (UID: \"755f0a10-8da7-40e9-8494-e99914a4df1a\") "
Mar 18 08:49:04.568370 master-0 kubenswrapper[7620]: I0318 08:49:04.568325 7620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/755f0a10-8da7-40e9-8494-e99914a4df1a-proxy-ca-bundles\") pod \"755f0a10-8da7-40e9-8494-e99914a4df1a\" (UID: \"755f0a10-8da7-40e9-8494-e99914a4df1a\") "
Mar 18 08:49:04.568687 master-0 kubenswrapper[7620]: I0318 08:49:04.568400 7620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/755f0a10-8da7-40e9-8494-e99914a4df1a-config\") pod \"755f0a10-8da7-40e9-8494-e99914a4df1a\" (UID: \"755f0a10-8da7-40e9-8494-e99914a4df1a\") "
Mar 18 08:49:04.569477 master-0 kubenswrapper[7620]: I0318 08:49:04.569444 7620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/755f0a10-8da7-40e9-8494-e99914a4df1a-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "755f0a10-8da7-40e9-8494-e99914a4df1a" (UID: "755f0a10-8da7-40e9-8494-e99914a4df1a"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 18 08:49:04.569477 master-0 kubenswrapper[7620]: I0318 08:49:04.569463 7620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/755f0a10-8da7-40e9-8494-e99914a4df1a-config" (OuterVolumeSpecName: "config") pod "755f0a10-8da7-40e9-8494-e99914a4df1a" (UID: "755f0a10-8da7-40e9-8494-e99914a4df1a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 18 08:49:04.582519 master-0 kubenswrapper[7620]: I0318 08:49:04.582482 7620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/755f0a10-8da7-40e9-8494-e99914a4df1a-kube-api-access-kz75w" (OuterVolumeSpecName: "kube-api-access-kz75w") pod "755f0a10-8da7-40e9-8494-e99914a4df1a" (UID: "755f0a10-8da7-40e9-8494-e99914a4df1a"). InnerVolumeSpecName "kube-api-access-kz75w". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 18 08:49:04.634194 master-0 kubenswrapper[7620]: I0318 08:49:04.634141 7620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-8b7l7"
Mar 18 08:49:04.669953 master-0 kubenswrapper[7620]: I0318 08:49:04.669892 7620 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kz75w\" (UniqueName: \"kubernetes.io/projected/755f0a10-8da7-40e9-8494-e99914a4df1a-kube-api-access-kz75w\") on node \"master-0\" DevicePath \"\""
Mar 18 08:49:04.669953 master-0 kubenswrapper[7620]: I0318 08:49:04.669928 7620 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/755f0a10-8da7-40e9-8494-e99914a4df1a-proxy-ca-bundles\") on node \"master-0\" DevicePath \"\""
Mar 18 08:49:04.669953 master-0 kubenswrapper[7620]: I0318 08:49:04.669940 7620 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/755f0a10-8da7-40e9-8494-e99914a4df1a-config\") on node \"master-0\" DevicePath \"\""
Mar 18 08:49:04.696674 master-0 kubenswrapper[7620]: W0318 08:49:04.696389 7620 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8ebaeb8d_8fbd_4638_9516_fc4e90ba2fa8.slice/crio-d3d8011493c530c7726e87839672927a640cefde6cc363dd89bea6af846b7008 WatchSource:0}: Error finding container d3d8011493c530c7726e87839672927a640cefde6cc363dd89bea6af846b7008: Status 404 returned error can't find the container with id d3d8011493c530c7726e87839672927a640cefde6cc363dd89bea6af846b7008
Mar 18 08:49:04.981773 master-0 kubenswrapper[7620]: I0318 08:49:04.981702 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/7962fb40-1170-4c00-b1bf-92966aeae807-image-registry-operator-tls\") pod \"cluster-image-registry-operator-5549dc66cb-vxsth\" (UID: \"7962fb40-1170-4c00-b1bf-92966aeae807\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-vxsth"
Mar 18 08:49:04.981773 master-0 kubenswrapper[7620]: I0318 08:49:04.981771 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/3d9fe248-ba87-47e3-911a-1b2b112b5683-srv-cert\") pod \"olm-operator-5c9796789-sl5kr\" (UID: \"3d9fe248-ba87-47e3-911a-1b2b112b5683\") " pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-sl5kr"
Mar 18 08:49:04.983212 master-0 kubenswrapper[7620]: I0318 08:49:04.981819 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/e025d334-20e7-491f-8027-194251398747-metrics-tls\") pod \"dns-operator-9c5679d8f-b9pn7\" (UID: \"e025d334-20e7-491f-8027-194251398747\") " pod="openshift-dns-operator/dns-operator-9c5679d8f-b9pn7"
Mar 18 08:49:04.983212 master-0 kubenswrapper[7620]: I0318 08:49:04.981843 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3d0b7f60-c32e-48a6-b9e9-87c8f018367d-serving-cert\") pod \"cluster-version-operator-56d8475767-2xjqg\" (UID: \"3d0b7f60-c32e-48a6-b9e9-87c8f018367d\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-2xjqg"
Mar 18 08:49:04.983212 master-0 kubenswrapper[7620]: I0318 08:49:04.981896 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/59d50dd5-6793-4f96-a769-31e086ecc7e4-package-server-manager-serving-cert\") pod \"package-server-manager-7b95f86987-q8ff6\" (UID: \"59d50dd5-6793-4f96-a769-31e086ecc7e4\") " pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-q8ff6"
Mar 18 08:49:04.983212 master-0 kubenswrapper[7620]: E0318 08:49:04.981908 7620 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found
Mar 18 08:49:04.983212 master-0 kubenswrapper[7620]: E0318 08:49:04.981990 7620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7962fb40-1170-4c00-b1bf-92966aeae807-image-registry-operator-tls podName:7962fb40-1170-4c00-b1bf-92966aeae807 nodeName:}" failed. No retries permitted until 2026-03-18 08:49:12.981967672 +0000 UTC m=+16.976749424 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/7962fb40-1170-4c00-b1bf-92966aeae807-image-registry-operator-tls") pod "cluster-image-registry-operator-5549dc66cb-vxsth" (UID: "7962fb40-1170-4c00-b1bf-92966aeae807") : secret "image-registry-operator-tls" not found
Mar 18 08:49:04.983212 master-0 kubenswrapper[7620]: E0318 08:49:04.982101 7620 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found
Mar 18 08:49:04.983212 master-0 kubenswrapper[7620]: I0318 08:49:04.982148 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/bd8ee9ae-4765-46bc-84d7-b5857fc3fb4a-apiservice-cert\") pod \"cluster-node-tuning-operator-598fbc5f8f-tj9b9\" (UID: \"bd8ee9ae-4765-46bc-84d7-b5857fc3fb4a\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-tj9b9"
Mar 18 08:49:04.983212 master-0 kubenswrapper[7620]: I0318 08:49:04.982191 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/94e9e4fc-bfaa-4ba5-ad6d-fe76b91932e9-metrics-tls\") pod \"ingress-operator-66b84d69b-7h94d\" (UID: \"94e9e4fc-bfaa-4ba5-ad6d-fe76b91932e9\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-7h94d"
Mar 18 08:49:04.983212 master-0 kubenswrapper[7620]: E0318 08:49:04.982228 7620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e025d334-20e7-491f-8027-194251398747-metrics-tls podName:e025d334-20e7-491f-8027-194251398747 nodeName:}" failed. No retries permitted until 2026-03-18 08:49:12.982213959 +0000 UTC m=+16.976995711 (durationBeforeRetry 8s).
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/e025d334-20e7-491f-8027-194251398747-metrics-tls") pod "dns-operator-9c5679d8f-b9pn7" (UID: "e025d334-20e7-491f-8027-194251398747") : secret "metrics-tls" not found Mar 18 08:49:04.983212 master-0 kubenswrapper[7620]: E0318 08:49:04.982270 7620 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found Mar 18 08:49:04.983212 master-0 kubenswrapper[7620]: E0318 08:49:04.982311 7620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bd8ee9ae-4765-46bc-84d7-b5857fc3fb4a-apiservice-cert podName:bd8ee9ae-4765-46bc-84d7-b5857fc3fb4a nodeName:}" failed. No retries permitted until 2026-03-18 08:49:12.982298192 +0000 UTC m=+16.977079944 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/bd8ee9ae-4765-46bc-84d7-b5857fc3fb4a-apiservice-cert") pod "cluster-node-tuning-operator-598fbc5f8f-tj9b9" (UID: "bd8ee9ae-4765-46bc-84d7-b5857fc3fb4a") : secret "performance-addon-operator-webhook-cert" not found Mar 18 08:49:04.983212 master-0 kubenswrapper[7620]: E0318 08:49:04.982275 7620 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found Mar 18 08:49:04.983212 master-0 kubenswrapper[7620]: E0318 08:49:04.982337 7620 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found Mar 18 08:49:04.983212 master-0 kubenswrapper[7620]: E0318 08:49:04.982364 7620 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found Mar 18 08:49:04.983212 master-0 kubenswrapper[7620]: E0318 08:49:04.982358 7620 nestedpendingoperations.go:348] Operation 
for "{volumeName:kubernetes.io/secret/59d50dd5-6793-4f96-a769-31e086ecc7e4-package-server-manager-serving-cert podName:59d50dd5-6793-4f96-a769-31e086ecc7e4 nodeName:}" failed. No retries permitted until 2026-03-18 08:49:12.982349883 +0000 UTC m=+16.977131635 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/59d50dd5-6793-4f96-a769-31e086ecc7e4-package-server-manager-serving-cert") pod "package-server-manager-7b95f86987-q8ff6" (UID: "59d50dd5-6793-4f96-a769-31e086ecc7e4") : secret "package-server-manager-serving-cert" not found Mar 18 08:49:04.983212 master-0 kubenswrapper[7620]: E0318 08:49:04.982431 7620 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found Mar 18 08:49:04.983212 master-0 kubenswrapper[7620]: E0318 08:49:04.982434 7620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/94e9e4fc-bfaa-4ba5-ad6d-fe76b91932e9-metrics-tls podName:94e9e4fc-bfaa-4ba5-ad6d-fe76b91932e9 nodeName:}" failed. No retries permitted until 2026-03-18 08:49:12.982406675 +0000 UTC m=+16.977188647 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/94e9e4fc-bfaa-4ba5-ad6d-fe76b91932e9-metrics-tls") pod "ingress-operator-66b84d69b-7h94d" (UID: "94e9e4fc-bfaa-4ba5-ad6d-fe76b91932e9") : secret "metrics-tls" not found Mar 18 08:49:04.983212 master-0 kubenswrapper[7620]: E0318 08:49:04.982495 7620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3d0b7f60-c32e-48a6-b9e9-87c8f018367d-serving-cert podName:3d0b7f60-c32e-48a6-b9e9-87c8f018367d nodeName:}" failed. No retries permitted until 2026-03-18 08:49:12.982482517 +0000 UTC m=+16.977264539 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/3d0b7f60-c32e-48a6-b9e9-87c8f018367d-serving-cert") pod "cluster-version-operator-56d8475767-2xjqg" (UID: "3d0b7f60-c32e-48a6-b9e9-87c8f018367d") : secret "cluster-version-operator-serving-cert" not found Mar 18 08:49:04.983212 master-0 kubenswrapper[7620]: E0318 08:49:04.982509 7620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3d9fe248-ba87-47e3-911a-1b2b112b5683-srv-cert podName:3d9fe248-ba87-47e3-911a-1b2b112b5683 nodeName:}" failed. No retries permitted until 2026-03-18 08:49:12.982502468 +0000 UTC m=+16.977284570 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/3d9fe248-ba87-47e3-911a-1b2b112b5683-srv-cert") pod "olm-operator-5c9796789-sl5kr" (UID: "3d9fe248-ba87-47e3-911a-1b2b112b5683") : secret "olm-operator-serving-cert" not found Mar 18 08:49:04.983212 master-0 kubenswrapper[7620]: I0318 08:49:04.982555 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/e7b72267-fc08-41ed-a92b-9fca7372aba6-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-58845fbb57-nc7hf\" (UID: \"e7b72267-fc08-41ed-a92b-9fca7372aba6\") " pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-nc7hf" Mar 18 08:49:04.983212 master-0 kubenswrapper[7620]: I0318 08:49:04.982584 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/159a26f5-3cfc-4db2-88e9-bff5d8a613fc-webhook-certs\") pod \"multus-admission-controller-5dbbb8b86f-2cf64\" (UID: \"159a26f5-3cfc-4db2-88e9-bff5d8a613fc\") " pod="openshift-multus/multus-admission-controller-5dbbb8b86f-2cf64" Mar 18 08:49:04.983212 master-0 kubenswrapper[7620]: I0318 08:49:04.982618 7620 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b065df33-7911-456e-b3a2-1f8c8d53e053-srv-cert\") pod \"catalog-operator-68f85b4d6c-swdsh\" (UID: \"b065df33-7911-456e-b3a2-1f8c8d53e053\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-swdsh" Mar 18 08:49:04.983212 master-0 kubenswrapper[7620]: E0318 08:49:04.982633 7620 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found Mar 18 08:49:04.983212 master-0 kubenswrapper[7620]: I0318 08:49:04.982641 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/bd8ee9ae-4765-46bc-84d7-b5857fc3fb4a-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-598fbc5f8f-tj9b9\" (UID: \"bd8ee9ae-4765-46bc-84d7-b5857fc3fb4a\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-tj9b9" Mar 18 08:49:04.983212 master-0 kubenswrapper[7620]: E0318 08:49:04.982669 7620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e7b72267-fc08-41ed-a92b-9fca7372aba6-cluster-monitoring-operator-tls podName:e7b72267-fc08-41ed-a92b-9fca7372aba6 nodeName:}" failed. No retries permitted until 2026-03-18 08:49:12.982657123 +0000 UTC m=+16.977438875 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/e7b72267-fc08-41ed-a92b-9fca7372aba6-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-58845fbb57-nc7hf" (UID: "e7b72267-fc08-41ed-a92b-9fca7372aba6") : secret "cluster-monitoring-operator-tls" not found Mar 18 08:49:04.983212 master-0 kubenswrapper[7620]: I0318 08:49:04.982689 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d858bfd6-e69b-4c93-a6d7-95cc0fc3ca29-metrics-certs\") pod \"network-metrics-daemon-6x85n\" (UID: \"d858bfd6-e69b-4c93-a6d7-95cc0fc3ca29\") " pod="openshift-multus/network-metrics-daemon-6x85n" Mar 18 08:49:04.983212 master-0 kubenswrapper[7620]: I0318 08:49:04.982718 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/34f2ec5e-cb68-415b-a9f2-5b7f10fa9bbe-marketplace-operator-metrics\") pod \"marketplace-operator-89ccd998f-bcwsv\" (UID: \"34f2ec5e-cb68-415b-a9f2-5b7f10fa9bbe\") " pod="openshift-marketplace/marketplace-operator-89ccd998f-bcwsv" Mar 18 08:49:04.983212 master-0 kubenswrapper[7620]: E0318 08:49:04.982759 7620 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found Mar 18 08:49:04.983212 master-0 kubenswrapper[7620]: E0318 08:49:04.982792 7620 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found Mar 18 08:49:04.983212 master-0 kubenswrapper[7620]: E0318 08:49:04.982794 7620 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found Mar 18 08:49:04.983212 master-0 kubenswrapper[7620]: E0318 08:49:04.982814 7620 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/159a26f5-3cfc-4db2-88e9-bff5d8a613fc-webhook-certs podName:159a26f5-3cfc-4db2-88e9-bff5d8a613fc nodeName:}" failed. No retries permitted until 2026-03-18 08:49:12.982797157 +0000 UTC m=+16.977579099 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/159a26f5-3cfc-4db2-88e9-bff5d8a613fc-webhook-certs") pod "multus-admission-controller-5dbbb8b86f-2cf64" (UID: "159a26f5-3cfc-4db2-88e9-bff5d8a613fc") : secret "multus-admission-controller-secret" not found Mar 18 08:49:04.983212 master-0 kubenswrapper[7620]: E0318 08:49:04.982838 7620 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: secret "catalog-operator-serving-cert" not found Mar 18 08:49:04.983212 master-0 kubenswrapper[7620]: E0318 08:49:04.982848 7620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/34f2ec5e-cb68-415b-a9f2-5b7f10fa9bbe-marketplace-operator-metrics podName:34f2ec5e-cb68-415b-a9f2-5b7f10fa9bbe nodeName:}" failed. No retries permitted until 2026-03-18 08:49:12.982831088 +0000 UTC m=+16.977613090 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/34f2ec5e-cb68-415b-a9f2-5b7f10fa9bbe-marketplace-operator-metrics") pod "marketplace-operator-89ccd998f-bcwsv" (UID: "34f2ec5e-cb68-415b-a9f2-5b7f10fa9bbe") : secret "marketplace-operator-metrics" not found Mar 18 08:49:04.983212 master-0 kubenswrapper[7620]: E0318 08:49:04.982889 7620 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: secret "metrics-daemon-secret" not found Mar 18 08:49:04.983212 master-0 kubenswrapper[7620]: E0318 08:49:04.982900 7620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bd8ee9ae-4765-46bc-84d7-b5857fc3fb4a-node-tuning-operator-tls podName:bd8ee9ae-4765-46bc-84d7-b5857fc3fb4a nodeName:}" failed. 
No retries permitted until 2026-03-18 08:49:12.982886299 +0000 UTC m=+16.977668291 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/bd8ee9ae-4765-46bc-84d7-b5857fc3fb4a-node-tuning-operator-tls") pod "cluster-node-tuning-operator-598fbc5f8f-tj9b9" (UID: "bd8ee9ae-4765-46bc-84d7-b5857fc3fb4a") : secret "node-tuning-operator-tls" not found Mar 18 08:49:04.983212 master-0 kubenswrapper[7620]: E0318 08:49:04.982928 7620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b065df33-7911-456e-b3a2-1f8c8d53e053-srv-cert podName:b065df33-7911-456e-b3a2-1f8c8d53e053 nodeName:}" failed. No retries permitted until 2026-03-18 08:49:12.98291499 +0000 UTC m=+16.977697002 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/b065df33-7911-456e-b3a2-1f8c8d53e053-srv-cert") pod "catalog-operator-68f85b4d6c-swdsh" (UID: "b065df33-7911-456e-b3a2-1f8c8d53e053") : secret "catalog-operator-serving-cert" not found Mar 18 08:49:04.983212 master-0 kubenswrapper[7620]: E0318 08:49:04.982953 7620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d858bfd6-e69b-4c93-a6d7-95cc0fc3ca29-metrics-certs podName:d858bfd6-e69b-4c93-a6d7-95cc0fc3ca29 nodeName:}" failed. No retries permitted until 2026-03-18 08:49:12.982940231 +0000 UTC m=+16.977721993 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/d858bfd6-e69b-4c93-a6d7-95cc0fc3ca29-metrics-certs") pod "network-metrics-daemon-6x85n" (UID: "d858bfd6-e69b-4c93-a6d7-95cc0fc3ca29") : secret "metrics-daemon-secret" not found Mar 18 08:49:05.185629 master-0 kubenswrapper[7620]: I0318 08:49:05.185450 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/755f0a10-8da7-40e9-8494-e99914a4df1a-client-ca\") pod \"controller-manager-6b4fdf4c78-scvbc\" (UID: \"755f0a10-8da7-40e9-8494-e99914a4df1a\") " pod="openshift-controller-manager/controller-manager-6b4fdf4c78-scvbc" Mar 18 08:49:05.185922 master-0 kubenswrapper[7620]: E0318 08:49:05.185645 7620 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found Mar 18 08:49:05.185922 master-0 kubenswrapper[7620]: E0318 08:49:05.185747 7620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/755f0a10-8da7-40e9-8494-e99914a4df1a-client-ca podName:755f0a10-8da7-40e9-8494-e99914a4df1a nodeName:}" failed. No retries permitted until 2026-03-18 08:49:07.185718808 +0000 UTC m=+11.180500770 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/755f0a10-8da7-40e9-8494-e99914a4df1a-client-ca") pod "controller-manager-6b4fdf4c78-scvbc" (UID: "755f0a10-8da7-40e9-8494-e99914a4df1a") : configmap "client-ca" not found Mar 18 08:49:05.185922 master-0 kubenswrapper[7620]: I0318 08:49:05.185778 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/755f0a10-8da7-40e9-8494-e99914a4df1a-serving-cert\") pod \"controller-manager-6b4fdf4c78-scvbc\" (UID: \"755f0a10-8da7-40e9-8494-e99914a4df1a\") " pod="openshift-controller-manager/controller-manager-6b4fdf4c78-scvbc" Mar 18 08:49:05.186017 master-0 kubenswrapper[7620]: E0318 08:49:05.185979 7620 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: secret "serving-cert" not found Mar 18 08:49:05.186045 master-0 kubenswrapper[7620]: E0318 08:49:05.186019 7620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/755f0a10-8da7-40e9-8494-e99914a4df1a-serving-cert podName:755f0a10-8da7-40e9-8494-e99914a4df1a nodeName:}" failed. No retries permitted until 2026-03-18 08:49:07.186009797 +0000 UTC m=+11.180791549 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/755f0a10-8da7-40e9-8494-e99914a4df1a-serving-cert") pod "controller-manager-6b4fdf4c78-scvbc" (UID: "755f0a10-8da7-40e9-8494-e99914a4df1a") : secret "serving-cert" not found Mar 18 08:49:05.259883 master-0 kubenswrapper[7620]: I0318 08:49:05.259745 7620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-cxws9" Mar 18 08:49:05.260169 master-0 kubenswrapper[7620]: I0318 08:49:05.260155 7620 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 18 08:49:05.260169 master-0 kubenswrapper[7620]: I0318 08:49:05.260170 7620 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 18 08:49:05.325888 master-0 kubenswrapper[7620]: I0318 08:49:05.325305 7620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-cxws9" Mar 18 08:49:05.485513 master-0 kubenswrapper[7620]: I0318 08:49:05.485227 7620 generic.go:334] "Generic (PLEG): container finished" podID="e2ade7e6-cecd-4e98-8f85-ea8219303d75" containerID="2966e21e324cf74e9b19c0ead035010d27be318a44ea8cb0c4864e39d4076171" exitCode=0 Mar 18 08:49:05.485513 master-0 kubenswrapper[7620]: I0318 08:49:05.485316 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-zqxv2" event={"ID":"e2ade7e6-cecd-4e98-8f85-ea8219303d75","Type":"ContainerDied","Data":"2966e21e324cf74e9b19c0ead035010d27be318a44ea8cb0c4864e39d4076171"} Mar 18 08:49:05.489065 master-0 kubenswrapper[7620]: I0318 08:49:05.488661 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-7kfrh" event={"ID":"573d3a02-e395-4816-963a-cd614ef53f75","Type":"ContainerStarted","Data":"f62239815e692aa3c0449919f3f1826c911a4a455ec560cd817c662d02c7a9ae"} Mar 18 08:49:05.492798 master-0 kubenswrapper[7620]: 
I0318 08:49:05.492709 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-b865698dc-g2lc8" event={"ID":"b0280499-8277-46f0-bd8c-058a47a99e19","Type":"ContainerStarted","Data":"76b00b2da24613bfa7eda95194ecd9d40e69d00311f7e279f85c5936ce0d7e4d"} Mar 18 08:49:05.494084 master-0 kubenswrapper[7620]: I0318 08:49:05.493986 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-8487694857-ld5l8" event={"ID":"8ebaeb8d-8fbd-4638-9516-fc4e90ba2fa8","Type":"ContainerStarted","Data":"d3d8011493c530c7726e87839672927a640cefde6cc363dd89bea6af846b7008"} Mar 18 08:49:05.496220 master-0 kubenswrapper[7620]: I0318 08:49:05.496185 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-64854d9cff-khm5n" event={"ID":"29ba6765-61c9-4f78-8f44-570418000c5c","Type":"ContainerStarted","Data":"4bd8b99a6f02b5537643630112eefdd3136e85b5e17843dfdadb3cf7528eedf7"} Mar 18 08:49:05.496357 master-0 kubenswrapper[7620]: I0318 08:49:05.496239 7620 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 18 08:49:05.496357 master-0 kubenswrapper[7620]: I0318 08:49:05.496333 7620 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6b4fdf4c78-scvbc" Mar 18 08:49:05.539270 master-0 kubenswrapper[7620]: I0318 08:49:05.538197 7620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-storage-operator/csi-snapshot-controller-64854d9cff-khm5n" podStartSLOduration=1.7564818660000001 podStartE2EDuration="5.538157878s" podCreationTimestamp="2026-03-18 08:49:00 +0000 UTC" firstStartedPulling="2026-03-18 08:49:00.980004211 +0000 UTC m=+4.974785963" lastFinishedPulling="2026-03-18 08:49:04.761680213 +0000 UTC m=+8.756461975" observedRunningTime="2026-03-18 08:49:05.538150498 +0000 UTC m=+9.532932260" watchObservedRunningTime="2026-03-18 08:49:05.538157878 +0000 UTC m=+9.532939630" Mar 18 08:49:05.601674 master-0 kubenswrapper[7620]: I0318 08:49:05.598986 7620 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-74ff5587d8-4g47k"] Mar 18 08:49:05.601674 master-0 kubenswrapper[7620]: I0318 08:49:05.599892 7620 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-74ff5587d8-4g47k" Mar 18 08:49:05.602373 master-0 kubenswrapper[7620]: I0318 08:49:05.602102 7620 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6b4fdf4c78-scvbc"] Mar 18 08:49:05.613779 master-0 kubenswrapper[7620]: I0318 08:49:05.612135 7620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Mar 18 08:49:05.613779 master-0 kubenswrapper[7620]: I0318 08:49:05.612379 7620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Mar 18 08:49:05.613779 master-0 kubenswrapper[7620]: I0318 08:49:05.612479 7620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Mar 18 08:49:05.613779 master-0 kubenswrapper[7620]: I0318 08:49:05.612612 7620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Mar 18 08:49:05.630821 master-0 kubenswrapper[7620]: I0318 08:49:05.626634 7620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Mar 18 08:49:05.630821 master-0 kubenswrapper[7620]: I0318 08:49:05.628323 7620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Mar 18 08:49:05.639517 master-0 kubenswrapper[7620]: I0318 08:49:05.639465 7620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-74ff5587d8-4g47k"] Mar 18 08:49:05.648991 master-0 kubenswrapper[7620]: I0318 08:49:05.641566 7620 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-6b4fdf4c78-scvbc"] Mar 18 08:49:05.701612 master-0 kubenswrapper[7620]: I0318 08:49:05.700357 7620 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cb9b74f8-6ea7-40cd-8b69-342972ab8889-config\") pod \"controller-manager-74ff5587d8-4g47k\" (UID: \"cb9b74f8-6ea7-40cd-8b69-342972ab8889\") " pod="openshift-controller-manager/controller-manager-74ff5587d8-4g47k" Mar 18 08:49:05.701612 master-0 kubenswrapper[7620]: I0318 08:49:05.700529 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cb9b74f8-6ea7-40cd-8b69-342972ab8889-client-ca\") pod \"controller-manager-74ff5587d8-4g47k\" (UID: \"cb9b74f8-6ea7-40cd-8b69-342972ab8889\") " pod="openshift-controller-manager/controller-manager-74ff5587d8-4g47k" Mar 18 08:49:05.701612 master-0 kubenswrapper[7620]: I0318 08:49:05.700778 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/cb9b74f8-6ea7-40cd-8b69-342972ab8889-proxy-ca-bundles\") pod \"controller-manager-74ff5587d8-4g47k\" (UID: \"cb9b74f8-6ea7-40cd-8b69-342972ab8889\") " pod="openshift-controller-manager/controller-manager-74ff5587d8-4g47k" Mar 18 08:49:05.701612 master-0 kubenswrapper[7620]: I0318 08:49:05.700896 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cb9b74f8-6ea7-40cd-8b69-342972ab8889-serving-cert\") pod \"controller-manager-74ff5587d8-4g47k\" (UID: \"cb9b74f8-6ea7-40cd-8b69-342972ab8889\") " pod="openshift-controller-manager/controller-manager-74ff5587d8-4g47k" Mar 18 08:49:05.701612 master-0 kubenswrapper[7620]: I0318 08:49:05.700958 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4xtrw\" (UniqueName: \"kubernetes.io/projected/cb9b74f8-6ea7-40cd-8b69-342972ab8889-kube-api-access-4xtrw\") pod 
\"controller-manager-74ff5587d8-4g47k\" (UID: \"cb9b74f8-6ea7-40cd-8b69-342972ab8889\") " pod="openshift-controller-manager/controller-manager-74ff5587d8-4g47k" Mar 18 08:49:05.701612 master-0 kubenswrapper[7620]: I0318 08:49:05.701049 7620 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/755f0a10-8da7-40e9-8494-e99914a4df1a-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 18 08:49:05.701612 master-0 kubenswrapper[7620]: I0318 08:49:05.701097 7620 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/755f0a10-8da7-40e9-8494-e99914a4df1a-client-ca\") on node \"master-0\" DevicePath \"\"" Mar 18 08:49:05.802120 master-0 kubenswrapper[7620]: I0318 08:49:05.801925 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cb9b74f8-6ea7-40cd-8b69-342972ab8889-config\") pod \"controller-manager-74ff5587d8-4g47k\" (UID: \"cb9b74f8-6ea7-40cd-8b69-342972ab8889\") " pod="openshift-controller-manager/controller-manager-74ff5587d8-4g47k" Mar 18 08:49:05.802120 master-0 kubenswrapper[7620]: I0318 08:49:05.802009 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cb9b74f8-6ea7-40cd-8b69-342972ab8889-client-ca\") pod \"controller-manager-74ff5587d8-4g47k\" (UID: \"cb9b74f8-6ea7-40cd-8b69-342972ab8889\") " pod="openshift-controller-manager/controller-manager-74ff5587d8-4g47k" Mar 18 08:49:05.802120 master-0 kubenswrapper[7620]: I0318 08:49:05.802128 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/cb9b74f8-6ea7-40cd-8b69-342972ab8889-proxy-ca-bundles\") pod \"controller-manager-74ff5587d8-4g47k\" (UID: \"cb9b74f8-6ea7-40cd-8b69-342972ab8889\") " pod="openshift-controller-manager/controller-manager-74ff5587d8-4g47k" Mar 
18 08:49:05.802521 master-0 kubenswrapper[7620]: I0318 08:49:05.802270 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cb9b74f8-6ea7-40cd-8b69-342972ab8889-serving-cert\") pod \"controller-manager-74ff5587d8-4g47k\" (UID: \"cb9b74f8-6ea7-40cd-8b69-342972ab8889\") " pod="openshift-controller-manager/controller-manager-74ff5587d8-4g47k" Mar 18 08:49:05.802521 master-0 kubenswrapper[7620]: I0318 08:49:05.802334 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4xtrw\" (UniqueName: \"kubernetes.io/projected/cb9b74f8-6ea7-40cd-8b69-342972ab8889-kube-api-access-4xtrw\") pod \"controller-manager-74ff5587d8-4g47k\" (UID: \"cb9b74f8-6ea7-40cd-8b69-342972ab8889\") " pod="openshift-controller-manager/controller-manager-74ff5587d8-4g47k" Mar 18 08:49:05.803657 master-0 kubenswrapper[7620]: I0318 08:49:05.803593 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cb9b74f8-6ea7-40cd-8b69-342972ab8889-config\") pod \"controller-manager-74ff5587d8-4g47k\" (UID: \"cb9b74f8-6ea7-40cd-8b69-342972ab8889\") " pod="openshift-controller-manager/controller-manager-74ff5587d8-4g47k" Mar 18 08:49:05.803824 master-0 kubenswrapper[7620]: E0318 08:49:05.803785 7620 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: secret "serving-cert" not found Mar 18 08:49:05.803899 master-0 kubenswrapper[7620]: E0318 08:49:05.803882 7620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cb9b74f8-6ea7-40cd-8b69-342972ab8889-serving-cert podName:cb9b74f8-6ea7-40cd-8b69-342972ab8889 nodeName:}" failed. No retries permitted until 2026-03-18 08:49:06.303839191 +0000 UTC m=+10.298620953 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/cb9b74f8-6ea7-40cd-8b69-342972ab8889-serving-cert") pod "controller-manager-74ff5587d8-4g47k" (UID: "cb9b74f8-6ea7-40cd-8b69-342972ab8889") : secret "serving-cert" not found Mar 18 08:49:05.804675 master-0 kubenswrapper[7620]: I0318 08:49:05.804190 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/cb9b74f8-6ea7-40cd-8b69-342972ab8889-proxy-ca-bundles\") pod \"controller-manager-74ff5587d8-4g47k\" (UID: \"cb9b74f8-6ea7-40cd-8b69-342972ab8889\") " pod="openshift-controller-manager/controller-manager-74ff5587d8-4g47k" Mar 18 08:49:05.804675 master-0 kubenswrapper[7620]: E0318 08:49:05.804294 7620 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found Mar 18 08:49:05.804675 master-0 kubenswrapper[7620]: E0318 08:49:05.804342 7620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/cb9b74f8-6ea7-40cd-8b69-342972ab8889-client-ca podName:cb9b74f8-6ea7-40cd-8b69-342972ab8889 nodeName:}" failed. No retries permitted until 2026-03-18 08:49:06.304327036 +0000 UTC m=+10.299108788 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/cb9b74f8-6ea7-40cd-8b69-342972ab8889-client-ca") pod "controller-manager-74ff5587d8-4g47k" (UID: "cb9b74f8-6ea7-40cd-8b69-342972ab8889") : configmap "client-ca" not found Mar 18 08:49:05.823436 master-0 kubenswrapper[7620]: I0318 08:49:05.823370 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4xtrw\" (UniqueName: \"kubernetes.io/projected/cb9b74f8-6ea7-40cd-8b69-342972ab8889-kube-api-access-4xtrw\") pod \"controller-manager-74ff5587d8-4g47k\" (UID: \"cb9b74f8-6ea7-40cd-8b69-342972ab8889\") " pod="openshift-controller-manager/controller-manager-74ff5587d8-4g47k" Mar 18 08:49:06.009122 master-0 kubenswrapper[7620]: I0318 08:49:06.007956 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a491fb9d-b7c2-4086-8dd6-ba5a77dc446c-client-ca\") pod \"route-controller-manager-78cdbfbbdd-j26k9\" (UID: \"a491fb9d-b7c2-4086-8dd6-ba5a77dc446c\") " pod="openshift-route-controller-manager/route-controller-manager-78cdbfbbdd-j26k9" Mar 18 08:49:06.009122 master-0 kubenswrapper[7620]: E0318 08:49:06.008175 7620 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: configmap "client-ca" not found Mar 18 08:49:06.009122 master-0 kubenswrapper[7620]: E0318 08:49:06.008238 7620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a491fb9d-b7c2-4086-8dd6-ba5a77dc446c-client-ca podName:a491fb9d-b7c2-4086-8dd6-ba5a77dc446c nodeName:}" failed. No retries permitted until 2026-03-18 08:49:10.008218636 +0000 UTC m=+14.003000388 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/a491fb9d-b7c2-4086-8dd6-ba5a77dc446c-client-ca") pod "route-controller-manager-78cdbfbbdd-j26k9" (UID: "a491fb9d-b7c2-4086-8dd6-ba5a77dc446c") : configmap "client-ca" not found Mar 18 08:49:06.009122 master-0 kubenswrapper[7620]: I0318 08:49:06.008495 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a491fb9d-b7c2-4086-8dd6-ba5a77dc446c-serving-cert\") pod \"route-controller-manager-78cdbfbbdd-j26k9\" (UID: \"a491fb9d-b7c2-4086-8dd6-ba5a77dc446c\") " pod="openshift-route-controller-manager/route-controller-manager-78cdbfbbdd-j26k9" Mar 18 08:49:06.009122 master-0 kubenswrapper[7620]: E0318 08:49:06.008677 7620 secret.go:189] Couldn't get secret openshift-route-controller-manager/serving-cert: secret "serving-cert" not found Mar 18 08:49:06.009122 master-0 kubenswrapper[7620]: E0318 08:49:06.008780 7620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a491fb9d-b7c2-4086-8dd6-ba5a77dc446c-serving-cert podName:a491fb9d-b7c2-4086-8dd6-ba5a77dc446c nodeName:}" failed. No retries permitted until 2026-03-18 08:49:10.008760082 +0000 UTC m=+14.003541834 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/a491fb9d-b7c2-4086-8dd6-ba5a77dc446c-serving-cert") pod "route-controller-manager-78cdbfbbdd-j26k9" (UID: "a491fb9d-b7c2-4086-8dd6-ba5a77dc446c") : secret "serving-cert" not found Mar 18 08:49:06.232083 master-0 kubenswrapper[7620]: I0318 08:49:06.232019 7620 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="755f0a10-8da7-40e9-8494-e99914a4df1a" path="/var/lib/kubelet/pods/755f0a10-8da7-40e9-8494-e99914a4df1a/volumes" Mar 18 08:49:06.315345 master-0 kubenswrapper[7620]: I0318 08:49:06.315263 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cb9b74f8-6ea7-40cd-8b69-342972ab8889-client-ca\") pod \"controller-manager-74ff5587d8-4g47k\" (UID: \"cb9b74f8-6ea7-40cd-8b69-342972ab8889\") " pod="openshift-controller-manager/controller-manager-74ff5587d8-4g47k" Mar 18 08:49:06.315687 master-0 kubenswrapper[7620]: E0318 08:49:06.315465 7620 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found Mar 18 08:49:06.315687 master-0 kubenswrapper[7620]: E0318 08:49:06.315599 7620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/cb9b74f8-6ea7-40cd-8b69-342972ab8889-client-ca podName:cb9b74f8-6ea7-40cd-8b69-342972ab8889 nodeName:}" failed. No retries permitted until 2026-03-18 08:49:07.315565671 +0000 UTC m=+11.310347623 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/cb9b74f8-6ea7-40cd-8b69-342972ab8889-client-ca") pod "controller-manager-74ff5587d8-4g47k" (UID: "cb9b74f8-6ea7-40cd-8b69-342972ab8889") : configmap "client-ca" not found Mar 18 08:49:06.315999 master-0 kubenswrapper[7620]: I0318 08:49:06.315937 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cb9b74f8-6ea7-40cd-8b69-342972ab8889-serving-cert\") pod \"controller-manager-74ff5587d8-4g47k\" (UID: \"cb9b74f8-6ea7-40cd-8b69-342972ab8889\") " pod="openshift-controller-manager/controller-manager-74ff5587d8-4g47k" Mar 18 08:49:06.316269 master-0 kubenswrapper[7620]: E0318 08:49:06.316169 7620 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: secret "serving-cert" not found Mar 18 08:49:06.316353 master-0 kubenswrapper[7620]: E0318 08:49:06.316316 7620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cb9b74f8-6ea7-40cd-8b69-342972ab8889-serving-cert podName:cb9b74f8-6ea7-40cd-8b69-342972ab8889 nodeName:}" failed. No retries permitted until 2026-03-18 08:49:07.316275282 +0000 UTC m=+11.311057214 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/cb9b74f8-6ea7-40cd-8b69-342972ab8889-serving-cert") pod "controller-manager-74ff5587d8-4g47k" (UID: "cb9b74f8-6ea7-40cd-8b69-342972ab8889") : secret "serving-cert" not found Mar 18 08:49:07.332112 master-0 kubenswrapper[7620]: I0318 08:49:07.331770 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cb9b74f8-6ea7-40cd-8b69-342972ab8889-client-ca\") pod \"controller-manager-74ff5587d8-4g47k\" (UID: \"cb9b74f8-6ea7-40cd-8b69-342972ab8889\") " pod="openshift-controller-manager/controller-manager-74ff5587d8-4g47k" Mar 18 08:49:07.333147 master-0 kubenswrapper[7620]: E0318 08:49:07.332044 7620 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found Mar 18 08:49:07.333147 master-0 kubenswrapper[7620]: I0318 08:49:07.332213 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cb9b74f8-6ea7-40cd-8b69-342972ab8889-serving-cert\") pod \"controller-manager-74ff5587d8-4g47k\" (UID: \"cb9b74f8-6ea7-40cd-8b69-342972ab8889\") " pod="openshift-controller-manager/controller-manager-74ff5587d8-4g47k" Mar 18 08:49:07.333147 master-0 kubenswrapper[7620]: E0318 08:49:07.332284 7620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/cb9b74f8-6ea7-40cd-8b69-342972ab8889-client-ca podName:cb9b74f8-6ea7-40cd-8b69-342972ab8889 nodeName:}" failed. No retries permitted until 2026-03-18 08:49:09.332256809 +0000 UTC m=+13.327038771 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/cb9b74f8-6ea7-40cd-8b69-342972ab8889-client-ca") pod "controller-manager-74ff5587d8-4g47k" (UID: "cb9b74f8-6ea7-40cd-8b69-342972ab8889") : configmap "client-ca" not found Mar 18 08:49:07.333147 master-0 kubenswrapper[7620]: E0318 08:49:07.332418 7620 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: secret "serving-cert" not found Mar 18 08:49:07.333147 master-0 kubenswrapper[7620]: E0318 08:49:07.332558 7620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cb9b74f8-6ea7-40cd-8b69-342972ab8889-serving-cert podName:cb9b74f8-6ea7-40cd-8b69-342972ab8889 nodeName:}" failed. No retries permitted until 2026-03-18 08:49:09.332524317 +0000 UTC m=+13.327306109 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/cb9b74f8-6ea7-40cd-8b69-342972ab8889-serving-cert") pod "controller-manager-74ff5587d8-4g47k" (UID: "cb9b74f8-6ea7-40cd-8b69-342972ab8889") : secret "serving-cert" not found Mar 18 08:49:07.511612 master-0 kubenswrapper[7620]: I0318 08:49:07.511532 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-8487694857-ld5l8" event={"ID":"8ebaeb8d-8fbd-4638-9516-fc4e90ba2fa8","Type":"ContainerStarted","Data":"2f2991c0d888b02338da203622365116c2cfcfc9bd10d899f71bcc11fc35c541"} Mar 18 08:49:07.511612 master-0 kubenswrapper[7620]: I0318 08:49:07.511616 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-8487694857-ld5l8" event={"ID":"8ebaeb8d-8fbd-4638-9516-fc4e90ba2fa8","Type":"ContainerStarted","Data":"76dda3d8e2e6c365afdc3e97a4f27515502d76967efe61be8618ed9ded8f9540"} Mar 18 08:49:07.947406 master-0 kubenswrapper[7620]: I0318 08:49:07.947350 7620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-config-operator/openshift-config-operator-95bf4f4d-7kfrh" Mar 18 08:49:07.953082 master-0 kubenswrapper[7620]: I0318 08:49:07.953044 7620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-7kfrh" Mar 18 08:49:07.968963 master-0 kubenswrapper[7620]: I0318 08:49:07.968353 7620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-8487694857-ld5l8" podStartSLOduration=5.313613632 podStartE2EDuration="6.968327097s" podCreationTimestamp="2026-03-18 08:49:01 +0000 UTC" firstStartedPulling="2026-03-18 08:49:04.706666912 +0000 UTC m=+8.701448664" lastFinishedPulling="2026-03-18 08:49:06.361380347 +0000 UTC m=+10.356162129" observedRunningTime="2026-03-18 08:49:07.850171964 +0000 UTC m=+11.844953736" watchObservedRunningTime="2026-03-18 08:49:07.968327097 +0000 UTC m=+11.963108859" Mar 18 08:49:08.402222 master-0 kubenswrapper[7620]: I0318 08:49:08.402170 7620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-cxws9" Mar 18 08:49:08.403169 master-0 kubenswrapper[7620]: I0318 08:49:08.402365 7620 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 18 08:49:08.427949 master-0 kubenswrapper[7620]: I0318 08:49:08.427901 7620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-cxws9" Mar 18 08:49:08.501890 master-0 kubenswrapper[7620]: I0318 08:49:08.501818 7620 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-79bc6b8d76-5jj7d"] Mar 18 08:49:08.502476 master-0 kubenswrapper[7620]: I0318 08:49:08.502454 7620 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-79bc6b8d76-5jj7d" Mar 18 08:49:08.504428 master-0 kubenswrapper[7620]: I0318 08:49:08.504191 7620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Mar 18 08:49:08.504711 master-0 kubenswrapper[7620]: I0318 08:49:08.504582 7620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Mar 18 08:49:08.505092 master-0 kubenswrapper[7620]: I0318 08:49:08.505067 7620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Mar 18 08:49:08.507513 master-0 kubenswrapper[7620]: I0318 08:49:08.507451 7620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Mar 18 08:49:08.548622 master-0 kubenswrapper[7620]: I0318 08:49:08.548537 7620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-79bc6b8d76-5jj7d"] Mar 18 08:49:08.553363 master-0 kubenswrapper[7620]: I0318 08:49:08.553318 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/fa8f1797-0219-49fe-82b5-7416cc481c3a-signing-key\") pod \"service-ca-79bc6b8d76-5jj7d\" (UID: \"fa8f1797-0219-49fe-82b5-7416cc481c3a\") " pod="openshift-service-ca/service-ca-79bc6b8d76-5jj7d" Mar 18 08:49:08.553524 master-0 kubenswrapper[7620]: I0318 08:49:08.553492 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-njbjp\" (UniqueName: \"kubernetes.io/projected/fa8f1797-0219-49fe-82b5-7416cc481c3a-kube-api-access-njbjp\") pod \"service-ca-79bc6b8d76-5jj7d\" (UID: \"fa8f1797-0219-49fe-82b5-7416cc481c3a\") " pod="openshift-service-ca/service-ca-79bc6b8d76-5jj7d" Mar 18 08:49:08.553524 master-0 kubenswrapper[7620]: I0318 08:49:08.553519 7620 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/fa8f1797-0219-49fe-82b5-7416cc481c3a-signing-cabundle\") pod \"service-ca-79bc6b8d76-5jj7d\" (UID: \"fa8f1797-0219-49fe-82b5-7416cc481c3a\") " pod="openshift-service-ca/service-ca-79bc6b8d76-5jj7d" Mar 18 08:49:08.654630 master-0 kubenswrapper[7620]: I0318 08:49:08.654487 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-njbjp\" (UniqueName: \"kubernetes.io/projected/fa8f1797-0219-49fe-82b5-7416cc481c3a-kube-api-access-njbjp\") pod \"service-ca-79bc6b8d76-5jj7d\" (UID: \"fa8f1797-0219-49fe-82b5-7416cc481c3a\") " pod="openshift-service-ca/service-ca-79bc6b8d76-5jj7d" Mar 18 08:49:08.654630 master-0 kubenswrapper[7620]: I0318 08:49:08.654580 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/fa8f1797-0219-49fe-82b5-7416cc481c3a-signing-cabundle\") pod \"service-ca-79bc6b8d76-5jj7d\" (UID: \"fa8f1797-0219-49fe-82b5-7416cc481c3a\") " pod="openshift-service-ca/service-ca-79bc6b8d76-5jj7d" Mar 18 08:49:08.654630 master-0 kubenswrapper[7620]: I0318 08:49:08.654629 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/fa8f1797-0219-49fe-82b5-7416cc481c3a-signing-key\") pod \"service-ca-79bc6b8d76-5jj7d\" (UID: \"fa8f1797-0219-49fe-82b5-7416cc481c3a\") " pod="openshift-service-ca/service-ca-79bc6b8d76-5jj7d" Mar 18 08:49:08.657400 master-0 kubenswrapper[7620]: I0318 08:49:08.657365 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/fa8f1797-0219-49fe-82b5-7416cc481c3a-signing-cabundle\") pod \"service-ca-79bc6b8d76-5jj7d\" (UID: \"fa8f1797-0219-49fe-82b5-7416cc481c3a\") " pod="openshift-service-ca/service-ca-79bc6b8d76-5jj7d" Mar 18 08:49:08.666824 
master-0 kubenswrapper[7620]: I0318 08:49:08.666780 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/fa8f1797-0219-49fe-82b5-7416cc481c3a-signing-key\") pod \"service-ca-79bc6b8d76-5jj7d\" (UID: \"fa8f1797-0219-49fe-82b5-7416cc481c3a\") " pod="openshift-service-ca/service-ca-79bc6b8d76-5jj7d" Mar 18 08:49:08.764872 master-0 kubenswrapper[7620]: I0318 08:49:08.764791 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-njbjp\" (UniqueName: \"kubernetes.io/projected/fa8f1797-0219-49fe-82b5-7416cc481c3a-kube-api-access-njbjp\") pod \"service-ca-79bc6b8d76-5jj7d\" (UID: \"fa8f1797-0219-49fe-82b5-7416cc481c3a\") " pod="openshift-service-ca/service-ca-79bc6b8d76-5jj7d" Mar 18 08:49:08.821686 master-0 kubenswrapper[7620]: I0318 08:49:08.821591 7620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-79bc6b8d76-5jj7d" Mar 18 08:49:09.212385 master-0 kubenswrapper[7620]: I0318 08:49:09.211830 7620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-79bc6b8d76-5jj7d"] Mar 18 08:49:09.222631 master-0 kubenswrapper[7620]: W0318 08:49:09.222566 7620 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfa8f1797_0219_49fe_82b5_7416cc481c3a.slice/crio-95171c03fc7a28cf1acc6d32a99defa7481a42e7b61b5f5262deb3933da18ccc WatchSource:0}: Error finding container 95171c03fc7a28cf1acc6d32a99defa7481a42e7b61b5f5262deb3933da18ccc: Status 404 returned error can't find the container with id 95171c03fc7a28cf1acc6d32a99defa7481a42e7b61b5f5262deb3933da18ccc Mar 18 08:49:09.362378 master-0 kubenswrapper[7620]: I0318 08:49:09.362330 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cb9b74f8-6ea7-40cd-8b69-342972ab8889-client-ca\") pod 
\"controller-manager-74ff5587d8-4g47k\" (UID: \"cb9b74f8-6ea7-40cd-8b69-342972ab8889\") " pod="openshift-controller-manager/controller-manager-74ff5587d8-4g47k" Mar 18 08:49:09.362560 master-0 kubenswrapper[7620]: E0318 08:49:09.362525 7620 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found Mar 18 08:49:09.362625 master-0 kubenswrapper[7620]: E0318 08:49:09.362598 7620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/cb9b74f8-6ea7-40cd-8b69-342972ab8889-client-ca podName:cb9b74f8-6ea7-40cd-8b69-342972ab8889 nodeName:}" failed. No retries permitted until 2026-03-18 08:49:13.362577665 +0000 UTC m=+17.357359417 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/cb9b74f8-6ea7-40cd-8b69-342972ab8889-client-ca") pod "controller-manager-74ff5587d8-4g47k" (UID: "cb9b74f8-6ea7-40cd-8b69-342972ab8889") : configmap "client-ca" not found Mar 18 08:49:09.363017 master-0 kubenswrapper[7620]: I0318 08:49:09.362957 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cb9b74f8-6ea7-40cd-8b69-342972ab8889-serving-cert\") pod \"controller-manager-74ff5587d8-4g47k\" (UID: \"cb9b74f8-6ea7-40cd-8b69-342972ab8889\") " pod="openshift-controller-manager/controller-manager-74ff5587d8-4g47k" Mar 18 08:49:09.363463 master-0 kubenswrapper[7620]: E0318 08:49:09.363428 7620 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: secret "serving-cert" not found Mar 18 08:49:09.363588 master-0 kubenswrapper[7620]: E0318 08:49:09.363561 7620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cb9b74f8-6ea7-40cd-8b69-342972ab8889-serving-cert podName:cb9b74f8-6ea7-40cd-8b69-342972ab8889 nodeName:}" failed. 
No retries permitted until 2026-03-18 08:49:13.363526093 +0000 UTC m=+17.358307995 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/cb9b74f8-6ea7-40cd-8b69-342972ab8889-serving-cert") pod "controller-manager-74ff5587d8-4g47k" (UID: "cb9b74f8-6ea7-40cd-8b69-342972ab8889") : secret "serving-cert" not found Mar 18 08:49:09.527074 master-0 kubenswrapper[7620]: I0318 08:49:09.527022 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-zqxv2" event={"ID":"e2ade7e6-cecd-4e98-8f85-ea8219303d75","Type":"ContainerStarted","Data":"77402342b68e7cb4ec7ebd972b9ac7442e45f3236ab9cfbb373363dfbf591b0c"} Mar 18 08:49:09.532559 master-0 kubenswrapper[7620]: I0318 08:49:09.532500 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-79bc6b8d76-5jj7d" event={"ID":"fa8f1797-0219-49fe-82b5-7416cc481c3a","Type":"ContainerStarted","Data":"7795cdf67d063e624942ade3e80ed1a93f6154e0fae14b2ebb71530afc86e742"} Mar 18 08:49:09.532619 master-0 kubenswrapper[7620]: I0318 08:49:09.532592 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-79bc6b8d76-5jj7d" event={"ID":"fa8f1797-0219-49fe-82b5-7416cc481c3a","Type":"ContainerStarted","Data":"95171c03fc7a28cf1acc6d32a99defa7481a42e7b61b5f5262deb3933da18ccc"} Mar 18 08:49:09.564872 master-0 kubenswrapper[7620]: I0318 08:49:09.563939 7620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-79bc6b8d76-5jj7d" podStartSLOduration=1.563913559 podStartE2EDuration="1.563913559s" podCreationTimestamp="2026-03-18 08:49:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 08:49:09.563168667 +0000 UTC m=+13.557950499" watchObservedRunningTime="2026-03-18 08:49:09.563913559 +0000 UTC m=+13.558695311" Mar 
18 08:49:10.079694 master-0 kubenswrapper[7620]: I0318 08:49:10.079602 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a491fb9d-b7c2-4086-8dd6-ba5a77dc446c-serving-cert\") pod \"route-controller-manager-78cdbfbbdd-j26k9\" (UID: \"a491fb9d-b7c2-4086-8dd6-ba5a77dc446c\") " pod="openshift-route-controller-manager/route-controller-manager-78cdbfbbdd-j26k9" Mar 18 08:49:10.079694 master-0 kubenswrapper[7620]: I0318 08:49:10.079699 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a491fb9d-b7c2-4086-8dd6-ba5a77dc446c-client-ca\") pod \"route-controller-manager-78cdbfbbdd-j26k9\" (UID: \"a491fb9d-b7c2-4086-8dd6-ba5a77dc446c\") " pod="openshift-route-controller-manager/route-controller-manager-78cdbfbbdd-j26k9" Mar 18 08:49:10.080058 master-0 kubenswrapper[7620]: E0318 08:49:10.079793 7620 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: configmap "client-ca" not found Mar 18 08:49:10.080058 master-0 kubenswrapper[7620]: E0318 08:49:10.079793 7620 secret.go:189] Couldn't get secret openshift-route-controller-manager/serving-cert: secret "serving-cert" not found Mar 18 08:49:10.080058 master-0 kubenswrapper[7620]: E0318 08:49:10.079895 7620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a491fb9d-b7c2-4086-8dd6-ba5a77dc446c-client-ca podName:a491fb9d-b7c2-4086-8dd6-ba5a77dc446c nodeName:}" failed. No retries permitted until 2026-03-18 08:49:18.079870725 +0000 UTC m=+22.074652487 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/a491fb9d-b7c2-4086-8dd6-ba5a77dc446c-client-ca") pod "route-controller-manager-78cdbfbbdd-j26k9" (UID: "a491fb9d-b7c2-4086-8dd6-ba5a77dc446c") : configmap "client-ca" not found Mar 18 08:49:10.080058 master-0 kubenswrapper[7620]: E0318 08:49:10.079913 7620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a491fb9d-b7c2-4086-8dd6-ba5a77dc446c-serving-cert podName:a491fb9d-b7c2-4086-8dd6-ba5a77dc446c nodeName:}" failed. No retries permitted until 2026-03-18 08:49:18.079905866 +0000 UTC m=+22.074687628 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/a491fb9d-b7c2-4086-8dd6-ba5a77dc446c-serving-cert") pod "route-controller-manager-78cdbfbbdd-j26k9" (UID: "a491fb9d-b7c2-4086-8dd6-ba5a77dc446c") : secret "serving-cert" not found Mar 18 08:49:11.822677 master-0 kubenswrapper[7620]: I0318 08:49:11.822640 7620 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-6b9bdc7688-sdd9g"] Mar 18 08:49:11.824159 master-0 kubenswrapper[7620]: I0318 08:49:11.824143 7620 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-6b9bdc7688-sdd9g" Mar 18 08:49:11.827157 master-0 kubenswrapper[7620]: I0318 08:49:11.827102 7620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Mar 18 08:49:11.827243 master-0 kubenswrapper[7620]: I0318 08:49:11.827218 7620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Mar 18 08:49:11.827346 master-0 kubenswrapper[7620]: I0318 08:49:11.827271 7620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Mar 18 08:49:11.827824 master-0 kubenswrapper[7620]: I0318 08:49:11.827777 7620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Mar 18 08:49:11.828115 master-0 kubenswrapper[7620]: I0318 08:49:11.828083 7620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-0" Mar 18 08:49:11.828264 master-0 kubenswrapper[7620]: I0318 08:49:11.828222 7620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Mar 18 08:49:11.828773 master-0 kubenswrapper[7620]: I0318 08:49:11.828743 7620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Mar 18 08:49:11.834712 master-0 kubenswrapper[7620]: I0318 08:49:11.834674 7620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Mar 18 08:49:11.848779 master-0 kubenswrapper[7620]: I0318 08:49:11.848717 7620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Mar 18 08:49:11.853941 master-0 kubenswrapper[7620]: I0318 08:49:11.853911 7620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-6b9bdc7688-sdd9g"] Mar 18 08:49:11.854489 master-0 kubenswrapper[7620]: I0318 08:49:11.854427 7620 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-0" Mar 18 08:49:11.906545 master-0 kubenswrapper[7620]: I0318 08:49:11.906480 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/616f5762-d98a-4d54-9390-5201a2c94ba2-image-import-ca\") pod \"apiserver-6b9bdc7688-sdd9g\" (UID: \"616f5762-d98a-4d54-9390-5201a2c94ba2\") " pod="openshift-apiserver/apiserver-6b9bdc7688-sdd9g" Mar 18 08:49:11.906764 master-0 kubenswrapper[7620]: I0318 08:49:11.906589 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/616f5762-d98a-4d54-9390-5201a2c94ba2-trusted-ca-bundle\") pod \"apiserver-6b9bdc7688-sdd9g\" (UID: \"616f5762-d98a-4d54-9390-5201a2c94ba2\") " pod="openshift-apiserver/apiserver-6b9bdc7688-sdd9g" Mar 18 08:49:11.906764 master-0 kubenswrapper[7620]: I0318 08:49:11.906638 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/616f5762-d98a-4d54-9390-5201a2c94ba2-audit\") pod \"apiserver-6b9bdc7688-sdd9g\" (UID: \"616f5762-d98a-4d54-9390-5201a2c94ba2\") " pod="openshift-apiserver/apiserver-6b9bdc7688-sdd9g" Mar 18 08:49:11.906764 master-0 kubenswrapper[7620]: I0318 08:49:11.906695 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bmtf2\" (UniqueName: \"kubernetes.io/projected/616f5762-d98a-4d54-9390-5201a2c94ba2-kube-api-access-bmtf2\") pod \"apiserver-6b9bdc7688-sdd9g\" (UID: \"616f5762-d98a-4d54-9390-5201a2c94ba2\") " pod="openshift-apiserver/apiserver-6b9bdc7688-sdd9g" Mar 18 08:49:11.907026 master-0 kubenswrapper[7620]: I0318 08:49:11.906965 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"encryption-config\" (UniqueName: \"kubernetes.io/secret/616f5762-d98a-4d54-9390-5201a2c94ba2-encryption-config\") pod \"apiserver-6b9bdc7688-sdd9g\" (UID: \"616f5762-d98a-4d54-9390-5201a2c94ba2\") " pod="openshift-apiserver/apiserver-6b9bdc7688-sdd9g" Mar 18 08:49:11.907078 master-0 kubenswrapper[7620]: I0318 08:49:11.907053 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/616f5762-d98a-4d54-9390-5201a2c94ba2-config\") pod \"apiserver-6b9bdc7688-sdd9g\" (UID: \"616f5762-d98a-4d54-9390-5201a2c94ba2\") " pod="openshift-apiserver/apiserver-6b9bdc7688-sdd9g" Mar 18 08:49:11.907137 master-0 kubenswrapper[7620]: I0318 08:49:11.907109 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/616f5762-d98a-4d54-9390-5201a2c94ba2-etcd-serving-ca\") pod \"apiserver-6b9bdc7688-sdd9g\" (UID: \"616f5762-d98a-4d54-9390-5201a2c94ba2\") " pod="openshift-apiserver/apiserver-6b9bdc7688-sdd9g" Mar 18 08:49:11.907177 master-0 kubenswrapper[7620]: I0318 08:49:11.907158 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/616f5762-d98a-4d54-9390-5201a2c94ba2-etcd-client\") pod \"apiserver-6b9bdc7688-sdd9g\" (UID: \"616f5762-d98a-4d54-9390-5201a2c94ba2\") " pod="openshift-apiserver/apiserver-6b9bdc7688-sdd9g" Mar 18 08:49:11.907251 master-0 kubenswrapper[7620]: I0318 08:49:11.907224 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/616f5762-d98a-4d54-9390-5201a2c94ba2-serving-cert\") pod \"apiserver-6b9bdc7688-sdd9g\" (UID: \"616f5762-d98a-4d54-9390-5201a2c94ba2\") " pod="openshift-apiserver/apiserver-6b9bdc7688-sdd9g" Mar 18 08:49:11.907294 master-0 kubenswrapper[7620]: I0318 
08:49:11.907273 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/616f5762-d98a-4d54-9390-5201a2c94ba2-node-pullsecrets\") pod \"apiserver-6b9bdc7688-sdd9g\" (UID: \"616f5762-d98a-4d54-9390-5201a2c94ba2\") " pod="openshift-apiserver/apiserver-6b9bdc7688-sdd9g" Mar 18 08:49:11.907364 master-0 kubenswrapper[7620]: I0318 08:49:11.907322 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/616f5762-d98a-4d54-9390-5201a2c94ba2-audit-dir\") pod \"apiserver-6b9bdc7688-sdd9g\" (UID: \"616f5762-d98a-4d54-9390-5201a2c94ba2\") " pod="openshift-apiserver/apiserver-6b9bdc7688-sdd9g" Mar 18 08:49:12.008249 master-0 kubenswrapper[7620]: I0318 08:49:12.008205 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/616f5762-d98a-4d54-9390-5201a2c94ba2-encryption-config\") pod \"apiserver-6b9bdc7688-sdd9g\" (UID: \"616f5762-d98a-4d54-9390-5201a2c94ba2\") " pod="openshift-apiserver/apiserver-6b9bdc7688-sdd9g" Mar 18 08:49:12.008581 master-0 kubenswrapper[7620]: I0318 08:49:12.008561 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/616f5762-d98a-4d54-9390-5201a2c94ba2-config\") pod \"apiserver-6b9bdc7688-sdd9g\" (UID: \"616f5762-d98a-4d54-9390-5201a2c94ba2\") " pod="openshift-apiserver/apiserver-6b9bdc7688-sdd9g" Mar 18 08:49:12.008705 master-0 kubenswrapper[7620]: I0318 08:49:12.008689 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/616f5762-d98a-4d54-9390-5201a2c94ba2-etcd-serving-ca\") pod \"apiserver-6b9bdc7688-sdd9g\" (UID: \"616f5762-d98a-4d54-9390-5201a2c94ba2\") " pod="openshift-apiserver/apiserver-6b9bdc7688-sdd9g" 
Mar 18 08:49:12.008827 master-0 kubenswrapper[7620]: I0318 08:49:12.008811 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/616f5762-d98a-4d54-9390-5201a2c94ba2-etcd-client\") pod \"apiserver-6b9bdc7688-sdd9g\" (UID: \"616f5762-d98a-4d54-9390-5201a2c94ba2\") " pod="openshift-apiserver/apiserver-6b9bdc7688-sdd9g" Mar 18 08:49:12.008981 master-0 kubenswrapper[7620]: I0318 08:49:12.008963 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/616f5762-d98a-4d54-9390-5201a2c94ba2-serving-cert\") pod \"apiserver-6b9bdc7688-sdd9g\" (UID: \"616f5762-d98a-4d54-9390-5201a2c94ba2\") " pod="openshift-apiserver/apiserver-6b9bdc7688-sdd9g" Mar 18 08:49:12.009097 master-0 kubenswrapper[7620]: I0318 08:49:12.009081 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/616f5762-d98a-4d54-9390-5201a2c94ba2-node-pullsecrets\") pod \"apiserver-6b9bdc7688-sdd9g\" (UID: \"616f5762-d98a-4d54-9390-5201a2c94ba2\") " pod="openshift-apiserver/apiserver-6b9bdc7688-sdd9g" Mar 18 08:49:12.009206 master-0 kubenswrapper[7620]: I0318 08:49:12.009191 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/616f5762-d98a-4d54-9390-5201a2c94ba2-audit-dir\") pod \"apiserver-6b9bdc7688-sdd9g\" (UID: \"616f5762-d98a-4d54-9390-5201a2c94ba2\") " pod="openshift-apiserver/apiserver-6b9bdc7688-sdd9g" Mar 18 08:49:12.009331 master-0 kubenswrapper[7620]: I0318 08:49:12.009314 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/616f5762-d98a-4d54-9390-5201a2c94ba2-image-import-ca\") pod \"apiserver-6b9bdc7688-sdd9g\" (UID: \"616f5762-d98a-4d54-9390-5201a2c94ba2\") " 
pod="openshift-apiserver/apiserver-6b9bdc7688-sdd9g" Mar 18 08:49:12.009487 master-0 kubenswrapper[7620]: I0318 08:49:12.009469 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/616f5762-d98a-4d54-9390-5201a2c94ba2-trusted-ca-bundle\") pod \"apiserver-6b9bdc7688-sdd9g\" (UID: \"616f5762-d98a-4d54-9390-5201a2c94ba2\") " pod="openshift-apiserver/apiserver-6b9bdc7688-sdd9g" Mar 18 08:49:12.009593 master-0 kubenswrapper[7620]: I0318 08:49:12.009577 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/616f5762-d98a-4d54-9390-5201a2c94ba2-audit\") pod \"apiserver-6b9bdc7688-sdd9g\" (UID: \"616f5762-d98a-4d54-9390-5201a2c94ba2\") " pod="openshift-apiserver/apiserver-6b9bdc7688-sdd9g" Mar 18 08:49:12.009701 master-0 kubenswrapper[7620]: I0318 08:49:12.009684 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bmtf2\" (UniqueName: \"kubernetes.io/projected/616f5762-d98a-4d54-9390-5201a2c94ba2-kube-api-access-bmtf2\") pod \"apiserver-6b9bdc7688-sdd9g\" (UID: \"616f5762-d98a-4d54-9390-5201a2c94ba2\") " pod="openshift-apiserver/apiserver-6b9bdc7688-sdd9g" Mar 18 08:49:12.009880 master-0 kubenswrapper[7620]: I0318 08:49:12.009724 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/616f5762-d98a-4d54-9390-5201a2c94ba2-config\") pod \"apiserver-6b9bdc7688-sdd9g\" (UID: \"616f5762-d98a-4d54-9390-5201a2c94ba2\") " pod="openshift-apiserver/apiserver-6b9bdc7688-sdd9g" Mar 18 08:49:12.010364 master-0 kubenswrapper[7620]: I0318 08:49:12.010324 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/616f5762-d98a-4d54-9390-5201a2c94ba2-image-import-ca\") pod \"apiserver-6b9bdc7688-sdd9g\" (UID: 
\"616f5762-d98a-4d54-9390-5201a2c94ba2\") " pod="openshift-apiserver/apiserver-6b9bdc7688-sdd9g" Mar 18 08:49:12.010448 master-0 kubenswrapper[7620]: I0318 08:49:12.010423 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/616f5762-d98a-4d54-9390-5201a2c94ba2-node-pullsecrets\") pod \"apiserver-6b9bdc7688-sdd9g\" (UID: \"616f5762-d98a-4d54-9390-5201a2c94ba2\") " pod="openshift-apiserver/apiserver-6b9bdc7688-sdd9g" Mar 18 08:49:12.010503 master-0 kubenswrapper[7620]: I0318 08:49:12.010479 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/616f5762-d98a-4d54-9390-5201a2c94ba2-audit-dir\") pod \"apiserver-6b9bdc7688-sdd9g\" (UID: \"616f5762-d98a-4d54-9390-5201a2c94ba2\") " pod="openshift-apiserver/apiserver-6b9bdc7688-sdd9g" Mar 18 08:49:12.010568 master-0 kubenswrapper[7620]: E0318 08:49:12.010545 7620 configmap.go:193] Couldn't get configMap openshift-apiserver/audit-0: configmap "audit-0" not found Mar 18 08:49:12.010622 master-0 kubenswrapper[7620]: E0318 08:49:12.010610 7620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/616f5762-d98a-4d54-9390-5201a2c94ba2-audit podName:616f5762-d98a-4d54-9390-5201a2c94ba2 nodeName:}" failed. No retries permitted until 2026-03-18 08:49:12.51058983 +0000 UTC m=+16.505371602 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "audit" (UniqueName: "kubernetes.io/configmap/616f5762-d98a-4d54-9390-5201a2c94ba2-audit") pod "apiserver-6b9bdc7688-sdd9g" (UID: "616f5762-d98a-4d54-9390-5201a2c94ba2") : configmap "audit-0" not found Mar 18 08:49:12.011484 master-0 kubenswrapper[7620]: I0318 08:49:12.011463 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/616f5762-d98a-4d54-9390-5201a2c94ba2-trusted-ca-bundle\") pod \"apiserver-6b9bdc7688-sdd9g\" (UID: \"616f5762-d98a-4d54-9390-5201a2c94ba2\") " pod="openshift-apiserver/apiserver-6b9bdc7688-sdd9g" Mar 18 08:49:12.011898 master-0 kubenswrapper[7620]: I0318 08:49:12.011835 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/616f5762-d98a-4d54-9390-5201a2c94ba2-etcd-serving-ca\") pod \"apiserver-6b9bdc7688-sdd9g\" (UID: \"616f5762-d98a-4d54-9390-5201a2c94ba2\") " pod="openshift-apiserver/apiserver-6b9bdc7688-sdd9g" Mar 18 08:49:12.020605 master-0 kubenswrapper[7620]: I0318 08:49:12.015693 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/616f5762-d98a-4d54-9390-5201a2c94ba2-encryption-config\") pod \"apiserver-6b9bdc7688-sdd9g\" (UID: \"616f5762-d98a-4d54-9390-5201a2c94ba2\") " pod="openshift-apiserver/apiserver-6b9bdc7688-sdd9g" Mar 18 08:49:12.026940 master-0 kubenswrapper[7620]: I0318 08:49:12.026716 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/616f5762-d98a-4d54-9390-5201a2c94ba2-serving-cert\") pod \"apiserver-6b9bdc7688-sdd9g\" (UID: \"616f5762-d98a-4d54-9390-5201a2c94ba2\") " pod="openshift-apiserver/apiserver-6b9bdc7688-sdd9g" Mar 18 08:49:12.030739 master-0 kubenswrapper[7620]: I0318 08:49:12.029431 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"etcd-client\" (UniqueName: \"kubernetes.io/secret/616f5762-d98a-4d54-9390-5201a2c94ba2-etcd-client\") pod \"apiserver-6b9bdc7688-sdd9g\" (UID: \"616f5762-d98a-4d54-9390-5201a2c94ba2\") " pod="openshift-apiserver/apiserver-6b9bdc7688-sdd9g" Mar 18 08:49:12.047057 master-0 kubenswrapper[7620]: I0318 08:49:12.047002 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bmtf2\" (UniqueName: \"kubernetes.io/projected/616f5762-d98a-4d54-9390-5201a2c94ba2-kube-api-access-bmtf2\") pod \"apiserver-6b9bdc7688-sdd9g\" (UID: \"616f5762-d98a-4d54-9390-5201a2c94ba2\") " pod="openshift-apiserver/apiserver-6b9bdc7688-sdd9g" Mar 18 08:49:12.517377 master-0 kubenswrapper[7620]: I0318 08:49:12.517261 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/616f5762-d98a-4d54-9390-5201a2c94ba2-audit\") pod \"apiserver-6b9bdc7688-sdd9g\" (UID: \"616f5762-d98a-4d54-9390-5201a2c94ba2\") " pod="openshift-apiserver/apiserver-6b9bdc7688-sdd9g" Mar 18 08:49:12.517843 master-0 kubenswrapper[7620]: E0318 08:49:12.517460 7620 configmap.go:193] Couldn't get configMap openshift-apiserver/audit-0: configmap "audit-0" not found Mar 18 08:49:12.517843 master-0 kubenswrapper[7620]: E0318 08:49:12.517554 7620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/616f5762-d98a-4d54-9390-5201a2c94ba2-audit podName:616f5762-d98a-4d54-9390-5201a2c94ba2 nodeName:}" failed. No retries permitted until 2026-03-18 08:49:13.517531317 +0000 UTC m=+17.512313079 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "audit" (UniqueName: "kubernetes.io/configmap/616f5762-d98a-4d54-9390-5201a2c94ba2-audit") pod "apiserver-6b9bdc7688-sdd9g" (UID: "616f5762-d98a-4d54-9390-5201a2c94ba2") : configmap "audit-0" not found Mar 18 08:49:13.025416 master-0 kubenswrapper[7620]: I0318 08:49:13.024693 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d858bfd6-e69b-4c93-a6d7-95cc0fc3ca29-metrics-certs\") pod \"network-metrics-daemon-6x85n\" (UID: \"d858bfd6-e69b-4c93-a6d7-95cc0fc3ca29\") " pod="openshift-multus/network-metrics-daemon-6x85n" Mar 18 08:49:13.025416 master-0 kubenswrapper[7620]: I0318 08:49:13.025409 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/34f2ec5e-cb68-415b-a9f2-5b7f10fa9bbe-marketplace-operator-metrics\") pod \"marketplace-operator-89ccd998f-bcwsv\" (UID: \"34f2ec5e-cb68-415b-a9f2-5b7f10fa9bbe\") " pod="openshift-marketplace/marketplace-operator-89ccd998f-bcwsv" Mar 18 08:49:13.026798 master-0 kubenswrapper[7620]: I0318 08:49:13.025487 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/7962fb40-1170-4c00-b1bf-92966aeae807-image-registry-operator-tls\") pod \"cluster-image-registry-operator-5549dc66cb-vxsth\" (UID: \"7962fb40-1170-4c00-b1bf-92966aeae807\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-vxsth" Mar 18 08:49:13.026798 master-0 kubenswrapper[7620]: E0318 08:49:13.025181 7620 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: secret "metrics-daemon-secret" not found Mar 18 08:49:13.026798 master-0 kubenswrapper[7620]: I0318 08:49:13.025576 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: 
\"kubernetes.io/secret/3d9fe248-ba87-47e3-911a-1b2b112b5683-srv-cert\") pod \"olm-operator-5c9796789-sl5kr\" (UID: \"3d9fe248-ba87-47e3-911a-1b2b112b5683\") " pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-sl5kr" Mar 18 08:49:13.026798 master-0 kubenswrapper[7620]: E0318 08:49:13.025658 7620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d858bfd6-e69b-4c93-a6d7-95cc0fc3ca29-metrics-certs podName:d858bfd6-e69b-4c93-a6d7-95cc0fc3ca29 nodeName:}" failed. No retries permitted until 2026-03-18 08:49:29.025623319 +0000 UTC m=+33.020405261 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/d858bfd6-e69b-4c93-a6d7-95cc0fc3ca29-metrics-certs") pod "network-metrics-daemon-6x85n" (UID: "d858bfd6-e69b-4c93-a6d7-95cc0fc3ca29") : secret "metrics-daemon-secret" not found Mar 18 08:49:13.026798 master-0 kubenswrapper[7620]: I0318 08:49:13.025749 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/e025d334-20e7-491f-8027-194251398747-metrics-tls\") pod \"dns-operator-9c5679d8f-b9pn7\" (UID: \"e025d334-20e7-491f-8027-194251398747\") " pod="openshift-dns-operator/dns-operator-9c5679d8f-b9pn7" Mar 18 08:49:13.026798 master-0 kubenswrapper[7620]: I0318 08:49:13.025781 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3d0b7f60-c32e-48a6-b9e9-87c8f018367d-serving-cert\") pod \"cluster-version-operator-56d8475767-2xjqg\" (UID: \"3d0b7f60-c32e-48a6-b9e9-87c8f018367d\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-2xjqg" Mar 18 08:49:13.026798 master-0 kubenswrapper[7620]: E0318 08:49:13.025822 7620 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found Mar 18 08:49:13.026798 master-0 
kubenswrapper[7620]: E0318 08:49:13.025913 7620 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found Mar 18 08:49:13.026798 master-0 kubenswrapper[7620]: I0318 08:49:13.025826 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/59d50dd5-6793-4f96-a769-31e086ecc7e4-package-server-manager-serving-cert\") pod \"package-server-manager-7b95f86987-q8ff6\" (UID: \"59d50dd5-6793-4f96-a769-31e086ecc7e4\") " pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-q8ff6" Mar 18 08:49:13.026798 master-0 kubenswrapper[7620]: E0318 08:49:13.025948 7620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/59d50dd5-6793-4f96-a769-31e086ecc7e4-package-server-manager-serving-cert podName:59d50dd5-6793-4f96-a769-31e086ecc7e4 nodeName:}" failed. No retries permitted until 2026-03-18 08:49:29.025936518 +0000 UTC m=+33.020718280 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/59d50dd5-6793-4f96-a769-31e086ecc7e4-package-server-manager-serving-cert") pod "package-server-manager-7b95f86987-q8ff6" (UID: "59d50dd5-6793-4f96-a769-31e086ecc7e4") : secret "package-server-manager-serving-cert" not found Mar 18 08:49:13.026798 master-0 kubenswrapper[7620]: E0318 08:49:13.026052 7620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3d9fe248-ba87-47e3-911a-1b2b112b5683-srv-cert podName:3d9fe248-ba87-47e3-911a-1b2b112b5683 nodeName:}" failed. No retries permitted until 2026-03-18 08:49:29.026013961 +0000 UTC m=+33.020795983 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/3d9fe248-ba87-47e3-911a-1b2b112b5683-srv-cert") pod "olm-operator-5c9796789-sl5kr" (UID: "3d9fe248-ba87-47e3-911a-1b2b112b5683") : secret "olm-operator-serving-cert" not found Mar 18 08:49:13.026798 master-0 kubenswrapper[7620]: I0318 08:49:13.026091 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/bd8ee9ae-4765-46bc-84d7-b5857fc3fb4a-apiservice-cert\") pod \"cluster-node-tuning-operator-598fbc5f8f-tj9b9\" (UID: \"bd8ee9ae-4765-46bc-84d7-b5857fc3fb4a\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-tj9b9" Mar 18 08:49:13.026798 master-0 kubenswrapper[7620]: I0318 08:49:13.026340 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/94e9e4fc-bfaa-4ba5-ad6d-fe76b91932e9-metrics-tls\") pod \"ingress-operator-66b84d69b-7h94d\" (UID: \"94e9e4fc-bfaa-4ba5-ad6d-fe76b91932e9\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-7h94d" Mar 18 08:49:13.026798 master-0 kubenswrapper[7620]: I0318 08:49:13.026398 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/e7b72267-fc08-41ed-a92b-9fca7372aba6-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-58845fbb57-nc7hf\" (UID: \"e7b72267-fc08-41ed-a92b-9fca7372aba6\") " pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-nc7hf" Mar 18 08:49:13.026798 master-0 kubenswrapper[7620]: I0318 08:49:13.026458 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/159a26f5-3cfc-4db2-88e9-bff5d8a613fc-webhook-certs\") pod \"multus-admission-controller-5dbbb8b86f-2cf64\" (UID: \"159a26f5-3cfc-4db2-88e9-bff5d8a613fc\") " 
pod="openshift-multus/multus-admission-controller-5dbbb8b86f-2cf64" Mar 18 08:49:13.026798 master-0 kubenswrapper[7620]: E0318 08:49:13.025538 7620 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found Mar 18 08:49:13.026798 master-0 kubenswrapper[7620]: E0318 08:49:13.026556 7620 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found Mar 18 08:49:13.026798 master-0 kubenswrapper[7620]: E0318 08:49:13.026592 7620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e025d334-20e7-491f-8027-194251398747-metrics-tls podName:e025d334-20e7-491f-8027-194251398747 nodeName:}" failed. No retries permitted until 2026-03-18 08:49:29.026581488 +0000 UTC m=+33.021363250 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/e025d334-20e7-491f-8027-194251398747-metrics-tls") pod "dns-operator-9c5679d8f-b9pn7" (UID: "e025d334-20e7-491f-8027-194251398747") : secret "metrics-tls" not found Mar 18 08:49:13.026798 master-0 kubenswrapper[7620]: E0318 08:49:13.026614 7620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/34f2ec5e-cb68-415b-a9f2-5b7f10fa9bbe-marketplace-operator-metrics podName:34f2ec5e-cb68-415b-a9f2-5b7f10fa9bbe nodeName:}" failed. No retries permitted until 2026-03-18 08:49:29.026602008 +0000 UTC m=+33.021383770 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/34f2ec5e-cb68-415b-a9f2-5b7f10fa9bbe-marketplace-operator-metrics") pod "marketplace-operator-89ccd998f-bcwsv" (UID: "34f2ec5e-cb68-415b-a9f2-5b7f10fa9bbe") : secret "marketplace-operator-metrics" not found Mar 18 08:49:13.026798 master-0 kubenswrapper[7620]: E0318 08:49:13.026709 7620 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found Mar 18 08:49:13.026798 master-0 kubenswrapper[7620]: E0318 08:49:13.026727 7620 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found Mar 18 08:49:13.026798 master-0 kubenswrapper[7620]: E0318 08:49:13.026779 7620 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found Mar 18 08:49:13.026798 master-0 kubenswrapper[7620]: E0318 08:49:13.026734 7620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/94e9e4fc-bfaa-4ba5-ad6d-fe76b91932e9-metrics-tls podName:94e9e4fc-bfaa-4ba5-ad6d-fe76b91932e9 nodeName:}" failed. No retries permitted until 2026-03-18 08:49:29.026726672 +0000 UTC m=+33.021508444 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/94e9e4fc-bfaa-4ba5-ad6d-fe76b91932e9-metrics-tls") pod "ingress-operator-66b84d69b-7h94d" (UID: "94e9e4fc-bfaa-4ba5-ad6d-fe76b91932e9") : secret "metrics-tls" not found Mar 18 08:49:13.026798 master-0 kubenswrapper[7620]: E0318 08:49:13.026825 7620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/159a26f5-3cfc-4db2-88e9-bff5d8a613fc-webhook-certs podName:159a26f5-3cfc-4db2-88e9-bff5d8a613fc nodeName:}" failed. No retries permitted until 2026-03-18 08:49:29.026814985 +0000 UTC m=+33.021596747 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/159a26f5-3cfc-4db2-88e9-bff5d8a613fc-webhook-certs") pod "multus-admission-controller-5dbbb8b86f-2cf64" (UID: "159a26f5-3cfc-4db2-88e9-bff5d8a613fc") : secret "multus-admission-controller-secret" not found Mar 18 08:49:13.026798 master-0 kubenswrapper[7620]: E0318 08:49:13.026839 7620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e7b72267-fc08-41ed-a92b-9fca7372aba6-cluster-monitoring-operator-tls podName:e7b72267-fc08-41ed-a92b-9fca7372aba6 nodeName:}" failed. No retries permitted until 2026-03-18 08:49:29.026832195 +0000 UTC m=+33.021613957 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/e7b72267-fc08-41ed-a92b-9fca7372aba6-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-58845fbb57-nc7hf" (UID: "e7b72267-fc08-41ed-a92b-9fca7372aba6") : secret "cluster-monitoring-operator-tls" not found Mar 18 08:49:13.029103 master-0 kubenswrapper[7620]: E0318 08:49:13.027065 7620 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found Mar 18 08:49:13.029103 master-0 kubenswrapper[7620]: E0318 08:49:13.027099 7620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7962fb40-1170-4c00-b1bf-92966aeae807-image-registry-operator-tls podName:7962fb40-1170-4c00-b1bf-92966aeae807 nodeName:}" failed. No retries permitted until 2026-03-18 08:49:29.027089933 +0000 UTC m=+33.021871705 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/7962fb40-1170-4c00-b1bf-92966aeae807-image-registry-operator-tls") pod "cluster-image-registry-operator-5549dc66cb-vxsth" (UID: "7962fb40-1170-4c00-b1bf-92966aeae807") : secret "image-registry-operator-tls" not found Mar 18 08:49:13.029103 master-0 kubenswrapper[7620]: I0318 08:49:13.027998 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b065df33-7911-456e-b3a2-1f8c8d53e053-srv-cert\") pod \"catalog-operator-68f85b4d6c-swdsh\" (UID: \"b065df33-7911-456e-b3a2-1f8c8d53e053\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-swdsh" Mar 18 08:49:13.029103 master-0 kubenswrapper[7620]: I0318 08:49:13.028082 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/bd8ee9ae-4765-46bc-84d7-b5857fc3fb4a-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-598fbc5f8f-tj9b9\" (UID: \"bd8ee9ae-4765-46bc-84d7-b5857fc3fb4a\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-tj9b9" Mar 18 08:49:13.030679 master-0 kubenswrapper[7620]: E0318 08:49:13.030638 7620 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: secret "catalog-operator-serving-cert" not found Mar 18 08:49:13.030965 master-0 kubenswrapper[7620]: E0318 08:49:13.030935 7620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b065df33-7911-456e-b3a2-1f8c8d53e053-srv-cert podName:b065df33-7911-456e-b3a2-1f8c8d53e053 nodeName:}" failed. No retries permitted until 2026-03-18 08:49:29.030906407 +0000 UTC m=+33.025688289 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/b065df33-7911-456e-b3a2-1f8c8d53e053-srv-cert") pod "catalog-operator-68f85b4d6c-swdsh" (UID: "b065df33-7911-456e-b3a2-1f8c8d53e053") : secret "catalog-operator-serving-cert" not found Mar 18 08:49:13.033950 master-0 kubenswrapper[7620]: I0318 08:49:13.033886 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/bd8ee9ae-4765-46bc-84d7-b5857fc3fb4a-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-598fbc5f8f-tj9b9\" (UID: \"bd8ee9ae-4765-46bc-84d7-b5857fc3fb4a\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-tj9b9" Mar 18 08:49:13.036126 master-0 kubenswrapper[7620]: I0318 08:49:13.035292 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3d0b7f60-c32e-48a6-b9e9-87c8f018367d-serving-cert\") pod \"cluster-version-operator-56d8475767-2xjqg\" (UID: \"3d0b7f60-c32e-48a6-b9e9-87c8f018367d\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-2xjqg" Mar 18 08:49:13.036126 master-0 kubenswrapper[7620]: I0318 08:49:13.035586 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/bd8ee9ae-4765-46bc-84d7-b5857fc3fb4a-apiservice-cert\") pod \"cluster-node-tuning-operator-598fbc5f8f-tj9b9\" (UID: \"bd8ee9ae-4765-46bc-84d7-b5857fc3fb4a\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-tj9b9" Mar 18 08:49:13.070946 master-0 kubenswrapper[7620]: I0318 08:49:13.070835 7620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-tj9b9" Mar 18 08:49:13.080918 master-0 kubenswrapper[7620]: I0318 08:49:13.080774 7620 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-56d8475767-2xjqg" Mar 18 08:49:13.319605 master-0 kubenswrapper[7620]: I0318 08:49:13.319313 7620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-tj9b9"] Mar 18 08:49:13.432591 master-0 kubenswrapper[7620]: I0318 08:49:13.432470 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cb9b74f8-6ea7-40cd-8b69-342972ab8889-serving-cert\") pod \"controller-manager-74ff5587d8-4g47k\" (UID: \"cb9b74f8-6ea7-40cd-8b69-342972ab8889\") " pod="openshift-controller-manager/controller-manager-74ff5587d8-4g47k" Mar 18 08:49:13.433154 master-0 kubenswrapper[7620]: I0318 08:49:13.433096 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cb9b74f8-6ea7-40cd-8b69-342972ab8889-client-ca\") pod \"controller-manager-74ff5587d8-4g47k\" (UID: \"cb9b74f8-6ea7-40cd-8b69-342972ab8889\") " pod="openshift-controller-manager/controller-manager-74ff5587d8-4g47k" Mar 18 08:49:13.433302 master-0 kubenswrapper[7620]: E0318 08:49:13.433257 7620 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found Mar 18 08:49:13.433376 master-0 kubenswrapper[7620]: E0318 08:49:13.433349 7620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/cb9b74f8-6ea7-40cd-8b69-342972ab8889-client-ca podName:cb9b74f8-6ea7-40cd-8b69-342972ab8889 nodeName:}" failed. No retries permitted until 2026-03-18 08:49:21.433324677 +0000 UTC m=+25.428106439 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/cb9b74f8-6ea7-40cd-8b69-342972ab8889-client-ca") pod "controller-manager-74ff5587d8-4g47k" (UID: "cb9b74f8-6ea7-40cd-8b69-342972ab8889") : configmap "client-ca" not found Mar 18 08:49:13.440820 master-0 kubenswrapper[7620]: I0318 08:49:13.440777 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cb9b74f8-6ea7-40cd-8b69-342972ab8889-serving-cert\") pod \"controller-manager-74ff5587d8-4g47k\" (UID: \"cb9b74f8-6ea7-40cd-8b69-342972ab8889\") " pod="openshift-controller-manager/controller-manager-74ff5587d8-4g47k" Mar 18 08:49:13.534608 master-0 kubenswrapper[7620]: I0318 08:49:13.534508 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/616f5762-d98a-4d54-9390-5201a2c94ba2-audit\") pod \"apiserver-6b9bdc7688-sdd9g\" (UID: \"616f5762-d98a-4d54-9390-5201a2c94ba2\") " pod="openshift-apiserver/apiserver-6b9bdc7688-sdd9g" Mar 18 08:49:13.534956 master-0 kubenswrapper[7620]: E0318 08:49:13.534888 7620 configmap.go:193] Couldn't get configMap openshift-apiserver/audit-0: configmap "audit-0" not found Mar 18 08:49:13.535077 master-0 kubenswrapper[7620]: E0318 08:49:13.535042 7620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/616f5762-d98a-4d54-9390-5201a2c94ba2-audit podName:616f5762-d98a-4d54-9390-5201a2c94ba2 nodeName:}" failed. No retries permitted until 2026-03-18 08:49:15.535010289 +0000 UTC m=+19.529792081 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "audit" (UniqueName: "kubernetes.io/configmap/616f5762-d98a-4d54-9390-5201a2c94ba2-audit") pod "apiserver-6b9bdc7688-sdd9g" (UID: "616f5762-d98a-4d54-9390-5201a2c94ba2") : configmap "audit-0" not found Mar 18 08:49:13.555066 master-0 kubenswrapper[7620]: I0318 08:49:13.554935 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-tj9b9" event={"ID":"bd8ee9ae-4765-46bc-84d7-b5857fc3fb4a","Type":"ContainerStarted","Data":"2f2e86c1c0e64c2e65cdc84455f83de896f426c03295ce65094d278bb54d2594"} Mar 18 08:49:13.556784 master-0 kubenswrapper[7620]: I0318 08:49:13.556722 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-56d8475767-2xjqg" event={"ID":"3d0b7f60-c32e-48a6-b9e9-87c8f018367d","Type":"ContainerStarted","Data":"ac096d70d81e7801442d61c8ffa707b3be42916eaae60f62fcab780efe8be51f"} Mar 18 08:49:15.573000 master-0 kubenswrapper[7620]: I0318 08:49:15.572589 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/616f5762-d98a-4d54-9390-5201a2c94ba2-audit\") pod \"apiserver-6b9bdc7688-sdd9g\" (UID: \"616f5762-d98a-4d54-9390-5201a2c94ba2\") " pod="openshift-apiserver/apiserver-6b9bdc7688-sdd9g" Mar 18 08:49:15.574446 master-0 kubenswrapper[7620]: E0318 08:49:15.572769 7620 configmap.go:193] Couldn't get configMap openshift-apiserver/audit-0: configmap "audit-0" not found Mar 18 08:49:15.574446 master-0 kubenswrapper[7620]: E0318 08:49:15.573163 7620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/616f5762-d98a-4d54-9390-5201a2c94ba2-audit podName:616f5762-d98a-4d54-9390-5201a2c94ba2 nodeName:}" failed. No retries permitted until 2026-03-18 08:49:19.573140198 +0000 UTC m=+23.567921960 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "audit" (UniqueName: "kubernetes.io/configmap/616f5762-d98a-4d54-9390-5201a2c94ba2-audit") pod "apiserver-6b9bdc7688-sdd9g" (UID: "616f5762-d98a-4d54-9390-5201a2c94ba2") : configmap "audit-0" not found Mar 18 08:49:15.637641 master-0 kubenswrapper[7620]: I0318 08:49:15.637569 7620 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-apiserver/apiserver-6b9bdc7688-sdd9g"] Mar 18 08:49:15.638116 master-0 kubenswrapper[7620]: E0318 08:49:15.638069 7620 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[audit], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-apiserver/apiserver-6b9bdc7688-sdd9g" podUID="616f5762-d98a-4d54-9390-5201a2c94ba2" Mar 18 08:49:16.603343 master-0 kubenswrapper[7620]: I0318 08:49:16.602781 7620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-6b9bdc7688-sdd9g" Mar 18 08:49:16.609599 master-0 kubenswrapper[7620]: I0318 08:49:16.609516 7620 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-6b9bdc7688-sdd9g" Mar 18 08:49:16.726980 master-0 kubenswrapper[7620]: I0318 08:49:16.726903 7620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/616f5762-d98a-4d54-9390-5201a2c94ba2-config\") pod \"616f5762-d98a-4d54-9390-5201a2c94ba2\" (UID: \"616f5762-d98a-4d54-9390-5201a2c94ba2\") " Mar 18 08:49:16.726980 master-0 kubenswrapper[7620]: I0318 08:49:16.726958 7620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/616f5762-d98a-4d54-9390-5201a2c94ba2-trusted-ca-bundle\") pod \"616f5762-d98a-4d54-9390-5201a2c94ba2\" (UID: \"616f5762-d98a-4d54-9390-5201a2c94ba2\") " Mar 18 08:49:16.726980 master-0 kubenswrapper[7620]: I0318 08:49:16.726978 7620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/616f5762-d98a-4d54-9390-5201a2c94ba2-node-pullsecrets\") pod \"616f5762-d98a-4d54-9390-5201a2c94ba2\" (UID: \"616f5762-d98a-4d54-9390-5201a2c94ba2\") " Mar 18 08:49:16.726980 master-0 kubenswrapper[7620]: I0318 08:49:16.727006 7620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/616f5762-d98a-4d54-9390-5201a2c94ba2-encryption-config\") pod \"616f5762-d98a-4d54-9390-5201a2c94ba2\" (UID: \"616f5762-d98a-4d54-9390-5201a2c94ba2\") " Mar 18 08:49:16.727386 master-0 kubenswrapper[7620]: I0318 08:49:16.727023 7620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/616f5762-d98a-4d54-9390-5201a2c94ba2-etcd-client\") pod \"616f5762-d98a-4d54-9390-5201a2c94ba2\" (UID: \"616f5762-d98a-4d54-9390-5201a2c94ba2\") " Mar 18 08:49:16.727386 master-0 kubenswrapper[7620]: I0318 08:49:16.727128 7620 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/616f5762-d98a-4d54-9390-5201a2c94ba2-etcd-serving-ca\") pod \"616f5762-d98a-4d54-9390-5201a2c94ba2\" (UID: \"616f5762-d98a-4d54-9390-5201a2c94ba2\") " Mar 18 08:49:16.727386 master-0 kubenswrapper[7620]: I0318 08:49:16.727168 7620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/616f5762-d98a-4d54-9390-5201a2c94ba2-audit-dir\") pod \"616f5762-d98a-4d54-9390-5201a2c94ba2\" (UID: \"616f5762-d98a-4d54-9390-5201a2c94ba2\") " Mar 18 08:49:16.727386 master-0 kubenswrapper[7620]: I0318 08:49:16.727191 7620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/616f5762-d98a-4d54-9390-5201a2c94ba2-serving-cert\") pod \"616f5762-d98a-4d54-9390-5201a2c94ba2\" (UID: \"616f5762-d98a-4d54-9390-5201a2c94ba2\") " Mar 18 08:49:16.727386 master-0 kubenswrapper[7620]: I0318 08:49:16.727222 7620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/616f5762-d98a-4d54-9390-5201a2c94ba2-image-import-ca\") pod \"616f5762-d98a-4d54-9390-5201a2c94ba2\" (UID: \"616f5762-d98a-4d54-9390-5201a2c94ba2\") " Mar 18 08:49:16.727386 master-0 kubenswrapper[7620]: I0318 08:49:16.727245 7620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bmtf2\" (UniqueName: \"kubernetes.io/projected/616f5762-d98a-4d54-9390-5201a2c94ba2-kube-api-access-bmtf2\") pod \"616f5762-d98a-4d54-9390-5201a2c94ba2\" (UID: \"616f5762-d98a-4d54-9390-5201a2c94ba2\") " Mar 18 08:49:16.729012 master-0 kubenswrapper[7620]: I0318 08:49:16.728964 7620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/616f5762-d98a-4d54-9390-5201a2c94ba2-audit-dir" (OuterVolumeSpecName: 
"audit-dir") pod "616f5762-d98a-4d54-9390-5201a2c94ba2" (UID: "616f5762-d98a-4d54-9390-5201a2c94ba2"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 08:49:16.729072 master-0 kubenswrapper[7620]: I0318 08:49:16.729029 7620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/616f5762-d98a-4d54-9390-5201a2c94ba2-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "616f5762-d98a-4d54-9390-5201a2c94ba2" (UID: "616f5762-d98a-4d54-9390-5201a2c94ba2"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 08:49:16.729683 master-0 kubenswrapper[7620]: I0318 08:49:16.729625 7620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/616f5762-d98a-4d54-9390-5201a2c94ba2-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "616f5762-d98a-4d54-9390-5201a2c94ba2" (UID: "616f5762-d98a-4d54-9390-5201a2c94ba2"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 08:49:16.729683 master-0 kubenswrapper[7620]: I0318 08:49:16.729674 7620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/616f5762-d98a-4d54-9390-5201a2c94ba2-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "616f5762-d98a-4d54-9390-5201a2c94ba2" (UID: "616f5762-d98a-4d54-9390-5201a2c94ba2"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 08:49:16.729824 master-0 kubenswrapper[7620]: I0318 08:49:16.729753 7620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/616f5762-d98a-4d54-9390-5201a2c94ba2-config" (OuterVolumeSpecName: "config") pod "616f5762-d98a-4d54-9390-5201a2c94ba2" (UID: "616f5762-d98a-4d54-9390-5201a2c94ba2"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 08:49:16.729883 master-0 kubenswrapper[7620]: I0318 08:49:16.729838 7620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/616f5762-d98a-4d54-9390-5201a2c94ba2-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "616f5762-d98a-4d54-9390-5201a2c94ba2" (UID: "616f5762-d98a-4d54-9390-5201a2c94ba2"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 08:49:16.741889 master-0 kubenswrapper[7620]: I0318 08:49:16.741711 7620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/616f5762-d98a-4d54-9390-5201a2c94ba2-kube-api-access-bmtf2" (OuterVolumeSpecName: "kube-api-access-bmtf2") pod "616f5762-d98a-4d54-9390-5201a2c94ba2" (UID: "616f5762-d98a-4d54-9390-5201a2c94ba2"). InnerVolumeSpecName "kube-api-access-bmtf2". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 08:49:16.742040 master-0 kubenswrapper[7620]: I0318 08:49:16.741977 7620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/616f5762-d98a-4d54-9390-5201a2c94ba2-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "616f5762-d98a-4d54-9390-5201a2c94ba2" (UID: "616f5762-d98a-4d54-9390-5201a2c94ba2"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 08:49:16.742197 master-0 kubenswrapper[7620]: I0318 08:49:16.742121 7620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/616f5762-d98a-4d54-9390-5201a2c94ba2-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "616f5762-d98a-4d54-9390-5201a2c94ba2" (UID: "616f5762-d98a-4d54-9390-5201a2c94ba2"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 08:49:16.743060 master-0 kubenswrapper[7620]: I0318 08:49:16.742462 7620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/616f5762-d98a-4d54-9390-5201a2c94ba2-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "616f5762-d98a-4d54-9390-5201a2c94ba2" (UID: "616f5762-d98a-4d54-9390-5201a2c94ba2"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 08:49:16.829458 master-0 kubenswrapper[7620]: I0318 08:49:16.829124 7620 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bmtf2\" (UniqueName: \"kubernetes.io/projected/616f5762-d98a-4d54-9390-5201a2c94ba2-kube-api-access-bmtf2\") on node \"master-0\" DevicePath \"\"" Mar 18 08:49:16.829458 master-0 kubenswrapper[7620]: I0318 08:49:16.829198 7620 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/616f5762-d98a-4d54-9390-5201a2c94ba2-config\") on node \"master-0\" DevicePath \"\"" Mar 18 08:49:16.829458 master-0 kubenswrapper[7620]: I0318 08:49:16.829217 7620 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/616f5762-d98a-4d54-9390-5201a2c94ba2-trusted-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 18 08:49:16.829458 master-0 kubenswrapper[7620]: I0318 08:49:16.829235 7620 reconciler_common.go:293] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/616f5762-d98a-4d54-9390-5201a2c94ba2-node-pullsecrets\") on node \"master-0\" DevicePath \"\"" Mar 18 08:49:16.829458 master-0 kubenswrapper[7620]: I0318 08:49:16.829254 7620 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/616f5762-d98a-4d54-9390-5201a2c94ba2-encryption-config\") on node \"master-0\" DevicePath \"\"" Mar 18 08:49:16.829458 master-0 kubenswrapper[7620]: I0318 
08:49:16.829272 7620 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/616f5762-d98a-4d54-9390-5201a2c94ba2-etcd-client\") on node \"master-0\" DevicePath \"\"" Mar 18 08:49:16.829458 master-0 kubenswrapper[7620]: I0318 08:49:16.829289 7620 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/616f5762-d98a-4d54-9390-5201a2c94ba2-etcd-serving-ca\") on node \"master-0\" DevicePath \"\"" Mar 18 08:49:16.829458 master-0 kubenswrapper[7620]: I0318 08:49:16.829308 7620 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/616f5762-d98a-4d54-9390-5201a2c94ba2-audit-dir\") on node \"master-0\" DevicePath \"\"" Mar 18 08:49:16.829458 master-0 kubenswrapper[7620]: I0318 08:49:16.829325 7620 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/616f5762-d98a-4d54-9390-5201a2c94ba2-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 18 08:49:16.829458 master-0 kubenswrapper[7620]: I0318 08:49:16.829343 7620 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/616f5762-d98a-4d54-9390-5201a2c94ba2-image-import-ca\") on node \"master-0\" DevicePath \"\"" Mar 18 08:49:17.768619 master-0 kubenswrapper[7620]: I0318 08:49:17.608827 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-9mkgd" event={"ID":"866c259c-7661-4a80-873b-6fd625218665","Type":"ContainerStarted","Data":"bc63f5fb6239e758834d1d9ebca8496b41a59bcf219ee1ebc76fce1c1358a9c7"} Mar 18 08:49:17.768619 master-0 kubenswrapper[7620]: I0318 08:49:17.611198 7620 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-6b9bdc7688-sdd9g" Mar 18 08:49:17.768619 master-0 kubenswrapper[7620]: I0318 08:49:17.611255 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-56d8475767-2xjqg" event={"ID":"3d0b7f60-c32e-48a6-b9e9-87c8f018367d","Type":"ContainerStarted","Data":"15e3021cb2dfbdd3656c892ad4c9383f0fbdf22535a1b291b4706db5c93981e8"} Mar 18 08:49:17.884885 master-0 kubenswrapper[7620]: I0318 08:49:17.881077 7620 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-7bb69b5c5c-djsr9"] Mar 18 08:49:17.884885 master-0 kubenswrapper[7620]: I0318 08:49:17.883920 7620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-7bb69b5c5c-djsr9" Mar 18 08:49:17.889445 master-0 kubenswrapper[7620]: I0318 08:49:17.889403 7620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Mar 18 08:49:17.889783 master-0 kubenswrapper[7620]: I0318 08:49:17.889485 7620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Mar 18 08:49:17.889783 master-0 kubenswrapper[7620]: I0318 08:49:17.889503 7620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Mar 18 08:49:17.889928 master-0 kubenswrapper[7620]: I0318 08:49:17.889908 7620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Mar 18 08:49:17.893884 master-0 kubenswrapper[7620]: I0318 08:49:17.893812 7620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Mar 18 08:49:17.898148 master-0 kubenswrapper[7620]: I0318 08:49:17.898105 7620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Mar 18 08:49:17.901756 master-0 kubenswrapper[7620]: I0318 08:49:17.901521 7620 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Mar 18 08:49:17.902139 master-0 kubenswrapper[7620]: I0318 08:49:17.902116 7620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Mar 18 08:49:17.902284 master-0 kubenswrapper[7620]: I0318 08:49:17.902263 7620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Mar 18 08:49:17.903764 master-0 kubenswrapper[7620]: I0318 08:49:17.903736 7620 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-apiserver/apiserver-6b9bdc7688-sdd9g"] Mar 18 08:49:17.905930 master-0 kubenswrapper[7620]: I0318 08:49:17.905904 7620 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-apiserver/apiserver-6b9bdc7688-sdd9g"] Mar 18 08:49:17.906119 master-0 kubenswrapper[7620]: I0318 08:49:17.906087 7620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Mar 18 08:49:17.906231 master-0 kubenswrapper[7620]: I0318 08:49:17.906188 7620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-7bb69b5c5c-djsr9"] Mar 18 08:49:17.953640 master-0 kubenswrapper[7620]: I0318 08:49:17.953556 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b5f9f50b-e7b4-4b81-864b-349303f21447-serving-cert\") pod \"apiserver-7bb69b5c5c-djsr9\" (UID: \"b5f9f50b-e7b4-4b81-864b-349303f21447\") " pod="openshift-apiserver/apiserver-7bb69b5c5c-djsr9" Mar 18 08:49:17.953640 master-0 kubenswrapper[7620]: I0318 08:49:17.953612 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b5f9f50b-e7b4-4b81-864b-349303f21447-trusted-ca-bundle\") pod \"apiserver-7bb69b5c5c-djsr9\" (UID: 
\"b5f9f50b-e7b4-4b81-864b-349303f21447\") " pod="openshift-apiserver/apiserver-7bb69b5c5c-djsr9" Mar 18 08:49:17.953934 master-0 kubenswrapper[7620]: I0318 08:49:17.953659 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/b5f9f50b-e7b4-4b81-864b-349303f21447-etcd-serving-ca\") pod \"apiserver-7bb69b5c5c-djsr9\" (UID: \"b5f9f50b-e7b4-4b81-864b-349303f21447\") " pod="openshift-apiserver/apiserver-7bb69b5c5c-djsr9" Mar 18 08:49:17.953934 master-0 kubenswrapper[7620]: I0318 08:49:17.953674 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/b5f9f50b-e7b4-4b81-864b-349303f21447-encryption-config\") pod \"apiserver-7bb69b5c5c-djsr9\" (UID: \"b5f9f50b-e7b4-4b81-864b-349303f21447\") " pod="openshift-apiserver/apiserver-7bb69b5c5c-djsr9" Mar 18 08:49:17.953934 master-0 kubenswrapper[7620]: I0318 08:49:17.953750 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bpj79\" (UniqueName: \"kubernetes.io/projected/b5f9f50b-e7b4-4b81-864b-349303f21447-kube-api-access-bpj79\") pod \"apiserver-7bb69b5c5c-djsr9\" (UID: \"b5f9f50b-e7b4-4b81-864b-349303f21447\") " pod="openshift-apiserver/apiserver-7bb69b5c5c-djsr9" Mar 18 08:49:17.953934 master-0 kubenswrapper[7620]: I0318 08:49:17.953765 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/b5f9f50b-e7b4-4b81-864b-349303f21447-audit\") pod \"apiserver-7bb69b5c5c-djsr9\" (UID: \"b5f9f50b-e7b4-4b81-864b-349303f21447\") " pod="openshift-apiserver/apiserver-7bb69b5c5c-djsr9" Mar 18 08:49:17.953934 master-0 kubenswrapper[7620]: I0318 08:49:17.953787 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" 
(UniqueName: \"kubernetes.io/host-path/b5f9f50b-e7b4-4b81-864b-349303f21447-audit-dir\") pod \"apiserver-7bb69b5c5c-djsr9\" (UID: \"b5f9f50b-e7b4-4b81-864b-349303f21447\") " pod="openshift-apiserver/apiserver-7bb69b5c5c-djsr9" Mar 18 08:49:17.953934 master-0 kubenswrapper[7620]: I0318 08:49:17.953804 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/b5f9f50b-e7b4-4b81-864b-349303f21447-node-pullsecrets\") pod \"apiserver-7bb69b5c5c-djsr9\" (UID: \"b5f9f50b-e7b4-4b81-864b-349303f21447\") " pod="openshift-apiserver/apiserver-7bb69b5c5c-djsr9" Mar 18 08:49:17.953934 master-0 kubenswrapper[7620]: I0318 08:49:17.953827 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b5f9f50b-e7b4-4b81-864b-349303f21447-config\") pod \"apiserver-7bb69b5c5c-djsr9\" (UID: \"b5f9f50b-e7b4-4b81-864b-349303f21447\") " pod="openshift-apiserver/apiserver-7bb69b5c5c-djsr9" Mar 18 08:49:17.953934 master-0 kubenswrapper[7620]: I0318 08:49:17.953894 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/b5f9f50b-e7b4-4b81-864b-349303f21447-image-import-ca\") pod \"apiserver-7bb69b5c5c-djsr9\" (UID: \"b5f9f50b-e7b4-4b81-864b-349303f21447\") " pod="openshift-apiserver/apiserver-7bb69b5c5c-djsr9" Mar 18 08:49:17.953934 master-0 kubenswrapper[7620]: I0318 08:49:17.953912 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/b5f9f50b-e7b4-4b81-864b-349303f21447-etcd-client\") pod \"apiserver-7bb69b5c5c-djsr9\" (UID: \"b5f9f50b-e7b4-4b81-864b-349303f21447\") " pod="openshift-apiserver/apiserver-7bb69b5c5c-djsr9" Mar 18 08:49:17.954206 master-0 kubenswrapper[7620]: I0318 08:49:17.953943 7620 
reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/616f5762-d98a-4d54-9390-5201a2c94ba2-audit\") on node \"master-0\" DevicePath \"\"" Mar 18 08:49:18.054714 master-0 kubenswrapper[7620]: I0318 08:49:18.054542 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bpj79\" (UniqueName: \"kubernetes.io/projected/b5f9f50b-e7b4-4b81-864b-349303f21447-kube-api-access-bpj79\") pod \"apiserver-7bb69b5c5c-djsr9\" (UID: \"b5f9f50b-e7b4-4b81-864b-349303f21447\") " pod="openshift-apiserver/apiserver-7bb69b5c5c-djsr9" Mar 18 08:49:18.054714 master-0 kubenswrapper[7620]: I0318 08:49:18.054594 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/b5f9f50b-e7b4-4b81-864b-349303f21447-audit\") pod \"apiserver-7bb69b5c5c-djsr9\" (UID: \"b5f9f50b-e7b4-4b81-864b-349303f21447\") " pod="openshift-apiserver/apiserver-7bb69b5c5c-djsr9" Mar 18 08:49:18.054714 master-0 kubenswrapper[7620]: I0318 08:49:18.054656 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b5f9f50b-e7b4-4b81-864b-349303f21447-audit-dir\") pod \"apiserver-7bb69b5c5c-djsr9\" (UID: \"b5f9f50b-e7b4-4b81-864b-349303f21447\") " pod="openshift-apiserver/apiserver-7bb69b5c5c-djsr9" Mar 18 08:49:18.054714 master-0 kubenswrapper[7620]: I0318 08:49:18.054677 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/b5f9f50b-e7b4-4b81-864b-349303f21447-node-pullsecrets\") pod \"apiserver-7bb69b5c5c-djsr9\" (UID: \"b5f9f50b-e7b4-4b81-864b-349303f21447\") " pod="openshift-apiserver/apiserver-7bb69b5c5c-djsr9" Mar 18 08:49:18.054714 master-0 kubenswrapper[7620]: I0318 08:49:18.054698 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/b5f9f50b-e7b4-4b81-864b-349303f21447-config\") pod \"apiserver-7bb69b5c5c-djsr9\" (UID: \"b5f9f50b-e7b4-4b81-864b-349303f21447\") " pod="openshift-apiserver/apiserver-7bb69b5c5c-djsr9" Mar 18 08:49:18.055036 master-0 kubenswrapper[7620]: I0318 08:49:18.054769 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b5f9f50b-e7b4-4b81-864b-349303f21447-audit-dir\") pod \"apiserver-7bb69b5c5c-djsr9\" (UID: \"b5f9f50b-e7b4-4b81-864b-349303f21447\") " pod="openshift-apiserver/apiserver-7bb69b5c5c-djsr9" Mar 18 08:49:18.055036 master-0 kubenswrapper[7620]: I0318 08:49:18.054939 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/b5f9f50b-e7b4-4b81-864b-349303f21447-image-import-ca\") pod \"apiserver-7bb69b5c5c-djsr9\" (UID: \"b5f9f50b-e7b4-4b81-864b-349303f21447\") " pod="openshift-apiserver/apiserver-7bb69b5c5c-djsr9" Mar 18 08:49:18.055036 master-0 kubenswrapper[7620]: I0318 08:49:18.054965 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/b5f9f50b-e7b4-4b81-864b-349303f21447-etcd-client\") pod \"apiserver-7bb69b5c5c-djsr9\" (UID: \"b5f9f50b-e7b4-4b81-864b-349303f21447\") " pod="openshift-apiserver/apiserver-7bb69b5c5c-djsr9" Mar 18 08:49:18.055036 master-0 kubenswrapper[7620]: I0318 08:49:18.054985 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b5f9f50b-e7b4-4b81-864b-349303f21447-serving-cert\") pod \"apiserver-7bb69b5c5c-djsr9\" (UID: \"b5f9f50b-e7b4-4b81-864b-349303f21447\") " pod="openshift-apiserver/apiserver-7bb69b5c5c-djsr9" Mar 18 08:49:18.055036 master-0 kubenswrapper[7620]: I0318 08:49:18.055001 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/b5f9f50b-e7b4-4b81-864b-349303f21447-trusted-ca-bundle\") pod \"apiserver-7bb69b5c5c-djsr9\" (UID: \"b5f9f50b-e7b4-4b81-864b-349303f21447\") " pod="openshift-apiserver/apiserver-7bb69b5c5c-djsr9" Mar 18 08:49:18.055036 master-0 kubenswrapper[7620]: I0318 08:49:18.055034 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/b5f9f50b-e7b4-4b81-864b-349303f21447-etcd-serving-ca\") pod \"apiserver-7bb69b5c5c-djsr9\" (UID: \"b5f9f50b-e7b4-4b81-864b-349303f21447\") " pod="openshift-apiserver/apiserver-7bb69b5c5c-djsr9" Mar 18 08:49:18.055218 master-0 kubenswrapper[7620]: I0318 08:49:18.055054 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/b5f9f50b-e7b4-4b81-864b-349303f21447-encryption-config\") pod \"apiserver-7bb69b5c5c-djsr9\" (UID: \"b5f9f50b-e7b4-4b81-864b-349303f21447\") " pod="openshift-apiserver/apiserver-7bb69b5c5c-djsr9" Mar 18 08:49:18.055625 master-0 kubenswrapper[7620]: I0318 08:49:18.055292 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/b5f9f50b-e7b4-4b81-864b-349303f21447-audit\") pod \"apiserver-7bb69b5c5c-djsr9\" (UID: \"b5f9f50b-e7b4-4b81-864b-349303f21447\") " pod="openshift-apiserver/apiserver-7bb69b5c5c-djsr9" Mar 18 08:49:18.057363 master-0 kubenswrapper[7620]: I0318 08:49:18.056091 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b5f9f50b-e7b4-4b81-864b-349303f21447-config\") pod \"apiserver-7bb69b5c5c-djsr9\" (UID: \"b5f9f50b-e7b4-4b81-864b-349303f21447\") " pod="openshift-apiserver/apiserver-7bb69b5c5c-djsr9" Mar 18 08:49:18.057363 master-0 kubenswrapper[7620]: I0318 08:49:18.056585 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: 
\"kubernetes.io/configmap/b5f9f50b-e7b4-4b81-864b-349303f21447-etcd-serving-ca\") pod \"apiserver-7bb69b5c5c-djsr9\" (UID: \"b5f9f50b-e7b4-4b81-864b-349303f21447\") " pod="openshift-apiserver/apiserver-7bb69b5c5c-djsr9" Mar 18 08:49:18.057363 master-0 kubenswrapper[7620]: I0318 08:49:18.057002 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/b5f9f50b-e7b4-4b81-864b-349303f21447-node-pullsecrets\") pod \"apiserver-7bb69b5c5c-djsr9\" (UID: \"b5f9f50b-e7b4-4b81-864b-349303f21447\") " pod="openshift-apiserver/apiserver-7bb69b5c5c-djsr9" Mar 18 08:49:18.057363 master-0 kubenswrapper[7620]: I0318 08:49:18.057030 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/b5f9f50b-e7b4-4b81-864b-349303f21447-image-import-ca\") pod \"apiserver-7bb69b5c5c-djsr9\" (UID: \"b5f9f50b-e7b4-4b81-864b-349303f21447\") " pod="openshift-apiserver/apiserver-7bb69b5c5c-djsr9" Mar 18 08:49:18.057363 master-0 kubenswrapper[7620]: I0318 08:49:18.057288 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b5f9f50b-e7b4-4b81-864b-349303f21447-trusted-ca-bundle\") pod \"apiserver-7bb69b5c5c-djsr9\" (UID: \"b5f9f50b-e7b4-4b81-864b-349303f21447\") " pod="openshift-apiserver/apiserver-7bb69b5c5c-djsr9" Mar 18 08:49:18.067407 master-0 kubenswrapper[7620]: I0318 08:49:18.067374 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b5f9f50b-e7b4-4b81-864b-349303f21447-serving-cert\") pod \"apiserver-7bb69b5c5c-djsr9\" (UID: \"b5f9f50b-e7b4-4b81-864b-349303f21447\") " pod="openshift-apiserver/apiserver-7bb69b5c5c-djsr9" Mar 18 08:49:18.067487 master-0 kubenswrapper[7620]: I0318 08:49:18.067436 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" 
(UniqueName: \"kubernetes.io/secret/b5f9f50b-e7b4-4b81-864b-349303f21447-encryption-config\") pod \"apiserver-7bb69b5c5c-djsr9\" (UID: \"b5f9f50b-e7b4-4b81-864b-349303f21447\") " pod="openshift-apiserver/apiserver-7bb69b5c5c-djsr9" Mar 18 08:49:18.067613 master-0 kubenswrapper[7620]: I0318 08:49:18.067570 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/b5f9f50b-e7b4-4b81-864b-349303f21447-etcd-client\") pod \"apiserver-7bb69b5c5c-djsr9\" (UID: \"b5f9f50b-e7b4-4b81-864b-349303f21447\") " pod="openshift-apiserver/apiserver-7bb69b5c5c-djsr9" Mar 18 08:49:18.073687 master-0 kubenswrapper[7620]: I0318 08:49:18.073662 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bpj79\" (UniqueName: \"kubernetes.io/projected/b5f9f50b-e7b4-4b81-864b-349303f21447-kube-api-access-bpj79\") pod \"apiserver-7bb69b5c5c-djsr9\" (UID: \"b5f9f50b-e7b4-4b81-864b-349303f21447\") " pod="openshift-apiserver/apiserver-7bb69b5c5c-djsr9" Mar 18 08:49:18.168045 master-0 kubenswrapper[7620]: I0318 08:49:18.167960 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a491fb9d-b7c2-4086-8dd6-ba5a77dc446c-serving-cert\") pod \"route-controller-manager-78cdbfbbdd-j26k9\" (UID: \"a491fb9d-b7c2-4086-8dd6-ba5a77dc446c\") " pod="openshift-route-controller-manager/route-controller-manager-78cdbfbbdd-j26k9" Mar 18 08:49:18.168045 master-0 kubenswrapper[7620]: I0318 08:49:18.168032 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a491fb9d-b7c2-4086-8dd6-ba5a77dc446c-client-ca\") pod \"route-controller-manager-78cdbfbbdd-j26k9\" (UID: \"a491fb9d-b7c2-4086-8dd6-ba5a77dc446c\") " pod="openshift-route-controller-manager/route-controller-manager-78cdbfbbdd-j26k9" Mar 18 08:49:18.168311 master-0 kubenswrapper[7620]: E0318 08:49:18.168181 
7620 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: configmap "client-ca" not found Mar 18 08:49:18.168311 master-0 kubenswrapper[7620]: E0318 08:49:18.168230 7620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a491fb9d-b7c2-4086-8dd6-ba5a77dc446c-client-ca podName:a491fb9d-b7c2-4086-8dd6-ba5a77dc446c nodeName:}" failed. No retries permitted until 2026-03-18 08:49:34.168214124 +0000 UTC m=+38.162995876 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/a491fb9d-b7c2-4086-8dd6-ba5a77dc446c-client-ca") pod "route-controller-manager-78cdbfbbdd-j26k9" (UID: "a491fb9d-b7c2-4086-8dd6-ba5a77dc446c") : configmap "client-ca" not found Mar 18 08:49:18.168570 master-0 kubenswrapper[7620]: E0318 08:49:18.168542 7620 secret.go:189] Couldn't get secret openshift-route-controller-manager/serving-cert: secret "serving-cert" not found Mar 18 08:49:18.168612 master-0 kubenswrapper[7620]: E0318 08:49:18.168573 7620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a491fb9d-b7c2-4086-8dd6-ba5a77dc446c-serving-cert podName:a491fb9d-b7c2-4086-8dd6-ba5a77dc446c nodeName:}" failed. No retries permitted until 2026-03-18 08:49:34.168566775 +0000 UTC m=+38.163348527 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/a491fb9d-b7c2-4086-8dd6-ba5a77dc446c-serving-cert") pod "route-controller-manager-78cdbfbbdd-j26k9" (UID: "a491fb9d-b7c2-4086-8dd6-ba5a77dc446c") : secret "serving-cert" not found Mar 18 08:49:18.215832 master-0 kubenswrapper[7620]: I0318 08:49:18.215769 7620 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-7bb69b5c5c-djsr9" Mar 18 08:49:18.232315 master-0 kubenswrapper[7620]: I0318 08:49:18.232279 7620 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="616f5762-d98a-4d54-9390-5201a2c94ba2" path="/var/lib/kubelet/pods/616f5762-d98a-4d54-9390-5201a2c94ba2/volumes" Mar 18 08:49:18.414837 master-0 kubenswrapper[7620]: I0318 08:49:18.414786 7620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-7bb69b5c5c-djsr9"] Mar 18 08:49:18.455520 master-0 kubenswrapper[7620]: I0318 08:49:18.455464 7620 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/installer-1-master-0"] Mar 18 08:49:18.456452 master-0 kubenswrapper[7620]: I0318 08:49:18.456422 7620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-1-master-0" Mar 18 08:49:18.462584 master-0 kubenswrapper[7620]: I0318 08:49:18.459521 7620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler"/"kube-root-ca.crt" Mar 18 08:49:18.465391 master-0 kubenswrapper[7620]: I0318 08:49:18.465335 7620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-1-master-0"] Mar 18 08:49:18.572422 master-0 kubenswrapper[7620]: I0318 08:49:18.572298 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e9a3f4dd-913d-4707-84c5-d64ead736f0f-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"e9a3f4dd-913d-4707-84c5-d64ead736f0f\") " pod="openshift-kube-scheduler/installer-1-master-0" Mar 18 08:49:18.572422 master-0 kubenswrapper[7620]: I0318 08:49:18.572358 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/e9a3f4dd-913d-4707-84c5-d64ead736f0f-var-lock\") pod \"installer-1-master-0\" 
(UID: \"e9a3f4dd-913d-4707-84c5-d64ead736f0f\") " pod="openshift-kube-scheduler/installer-1-master-0" Mar 18 08:49:18.572655 master-0 kubenswrapper[7620]: I0318 08:49:18.572453 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e9a3f4dd-913d-4707-84c5-d64ead736f0f-kube-api-access\") pod \"installer-1-master-0\" (UID: \"e9a3f4dd-913d-4707-84c5-d64ead736f0f\") " pod="openshift-kube-scheduler/installer-1-master-0" Mar 18 08:49:18.674885 master-0 kubenswrapper[7620]: I0318 08:49:18.674274 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e9a3f4dd-913d-4707-84c5-d64ead736f0f-kube-api-access\") pod \"installer-1-master-0\" (UID: \"e9a3f4dd-913d-4707-84c5-d64ead736f0f\") " pod="openshift-kube-scheduler/installer-1-master-0" Mar 18 08:49:18.675126 master-0 kubenswrapper[7620]: I0318 08:49:18.674894 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e9a3f4dd-913d-4707-84c5-d64ead736f0f-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"e9a3f4dd-913d-4707-84c5-d64ead736f0f\") " pod="openshift-kube-scheduler/installer-1-master-0" Mar 18 08:49:18.675126 master-0 kubenswrapper[7620]: I0318 08:49:18.675062 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e9a3f4dd-913d-4707-84c5-d64ead736f0f-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"e9a3f4dd-913d-4707-84c5-d64ead736f0f\") " pod="openshift-kube-scheduler/installer-1-master-0" Mar 18 08:49:18.675316 master-0 kubenswrapper[7620]: I0318 08:49:18.675122 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/e9a3f4dd-913d-4707-84c5-d64ead736f0f-var-lock\") pod \"installer-1-master-0\" (UID: 
\"e9a3f4dd-913d-4707-84c5-d64ead736f0f\") " pod="openshift-kube-scheduler/installer-1-master-0" Mar 18 08:49:18.675316 master-0 kubenswrapper[7620]: I0318 08:49:18.675188 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/e9a3f4dd-913d-4707-84c5-d64ead736f0f-var-lock\") pod \"installer-1-master-0\" (UID: \"e9a3f4dd-913d-4707-84c5-d64ead736f0f\") " pod="openshift-kube-scheduler/installer-1-master-0" Mar 18 08:49:18.696955 master-0 kubenswrapper[7620]: I0318 08:49:18.695578 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e9a3f4dd-913d-4707-84c5-d64ead736f0f-kube-api-access\") pod \"installer-1-master-0\" (UID: \"e9a3f4dd-913d-4707-84c5-d64ead736f0f\") " pod="openshift-kube-scheduler/installer-1-master-0" Mar 18 08:49:18.787539 master-0 kubenswrapper[7620]: I0318 08:49:18.787356 7620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-1-master-0" Mar 18 08:49:21.452189 master-0 kubenswrapper[7620]: I0318 08:49:21.450580 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cb9b74f8-6ea7-40cd-8b69-342972ab8889-client-ca\") pod \"controller-manager-74ff5587d8-4g47k\" (UID: \"cb9b74f8-6ea7-40cd-8b69-342972ab8889\") " pod="openshift-controller-manager/controller-manager-74ff5587d8-4g47k" Mar 18 08:49:21.452189 master-0 kubenswrapper[7620]: E0318 08:49:21.450935 7620 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found Mar 18 08:49:21.452189 master-0 kubenswrapper[7620]: E0318 08:49:21.451010 7620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/cb9b74f8-6ea7-40cd-8b69-342972ab8889-client-ca podName:cb9b74f8-6ea7-40cd-8b69-342972ab8889 nodeName:}" failed. 
No retries permitted until 2026-03-18 08:49:37.450985819 +0000 UTC m=+41.445767581 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/cb9b74f8-6ea7-40cd-8b69-342972ab8889-client-ca") pod "controller-manager-74ff5587d8-4g47k" (UID: "cb9b74f8-6ea7-40cd-8b69-342972ab8889") : configmap "client-ca" not found Mar 18 08:49:21.632304 master-0 kubenswrapper[7620]: I0318 08:49:21.632012 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-7bb69b5c5c-djsr9" event={"ID":"b5f9f50b-e7b4-4b81-864b-349303f21447","Type":"ContainerStarted","Data":"7b6fb81fa9b3775db2a9d43b8034ee4a9a2939e8e74ced3195abe4a7116a137d"} Mar 18 08:49:22.793594 master-0 kubenswrapper[7620]: I0318 08:49:22.793078 7620 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/installer-1-master-0"] Mar 18 08:49:22.794726 master-0 kubenswrapper[7620]: I0318 08:49:22.794667 7620 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/installer-1-master-0" Mar 18 08:49:22.808959 master-0 kubenswrapper[7620]: I0318 08:49:22.808880 7620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd"/"kube-root-ca.crt" Mar 18 08:49:22.873106 master-0 kubenswrapper[7620]: I0318 08:49:22.873038 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1ecff6b2-dbd4-4366-873b-2170d0b76c0f-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"1ecff6b2-dbd4-4366-873b-2170d0b76c0f\") " pod="openshift-etcd/installer-1-master-0" Mar 18 08:49:22.873335 master-0 kubenswrapper[7620]: I0318 08:49:22.873124 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/1ecff6b2-dbd4-4366-873b-2170d0b76c0f-var-lock\") pod \"installer-1-master-0\" (UID: \"1ecff6b2-dbd4-4366-873b-2170d0b76c0f\") " pod="openshift-etcd/installer-1-master-0" Mar 18 08:49:22.873335 master-0 kubenswrapper[7620]: I0318 08:49:22.873168 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1ecff6b2-dbd4-4366-873b-2170d0b76c0f-kube-api-access\") pod \"installer-1-master-0\" (UID: \"1ecff6b2-dbd4-4366-873b-2170d0b76c0f\") " pod="openshift-etcd/installer-1-master-0" Mar 18 08:49:22.974041 master-0 kubenswrapper[7620]: I0318 08:49:22.973918 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1ecff6b2-dbd4-4366-873b-2170d0b76c0f-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"1ecff6b2-dbd4-4366-873b-2170d0b76c0f\") " pod="openshift-etcd/installer-1-master-0" Mar 18 08:49:22.974041 master-0 kubenswrapper[7620]: I0318 08:49:22.974046 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"var-lock\" (UniqueName: \"kubernetes.io/host-path/1ecff6b2-dbd4-4366-873b-2170d0b76c0f-var-lock\") pod \"installer-1-master-0\" (UID: \"1ecff6b2-dbd4-4366-873b-2170d0b76c0f\") " pod="openshift-etcd/installer-1-master-0" Mar 18 08:49:22.974433 master-0 kubenswrapper[7620]: I0318 08:49:22.974093 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1ecff6b2-dbd4-4366-873b-2170d0b76c0f-kube-api-access\") pod \"installer-1-master-0\" (UID: \"1ecff6b2-dbd4-4366-873b-2170d0b76c0f\") " pod="openshift-etcd/installer-1-master-0" Mar 18 08:49:22.974692 master-0 kubenswrapper[7620]: I0318 08:49:22.974647 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1ecff6b2-dbd4-4366-873b-2170d0b76c0f-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"1ecff6b2-dbd4-4366-873b-2170d0b76c0f\") " pod="openshift-etcd/installer-1-master-0" Mar 18 08:49:22.974773 master-0 kubenswrapper[7620]: I0318 08:49:22.974702 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/1ecff6b2-dbd4-4366-873b-2170d0b76c0f-var-lock\") pod \"installer-1-master-0\" (UID: \"1ecff6b2-dbd4-4366-873b-2170d0b76c0f\") " pod="openshift-etcd/installer-1-master-0" Mar 18 08:49:23.254936 master-0 kubenswrapper[7620]: I0318 08:49:23.254803 7620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd/installer-1-master-0"] Mar 18 08:49:23.257118 master-0 kubenswrapper[7620]: I0318 08:49:23.257033 7620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-1-master-0"] Mar 18 08:49:23.943741 master-0 kubenswrapper[7620]: I0318 08:49:23.943699 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1ecff6b2-dbd4-4366-873b-2170d0b76c0f-kube-api-access\") pod 
\"installer-1-master-0\" (UID: \"1ecff6b2-dbd4-4366-873b-2170d0b76c0f\") " pod="openshift-etcd/installer-1-master-0" Mar 18 08:49:24.050183 master-0 kubenswrapper[7620]: I0318 08:49:24.050073 7620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/installer-1-master-0" Mar 18 08:49:24.674199 master-0 kubenswrapper[7620]: I0318 08:49:24.674096 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-1-master-0" event={"ID":"e9a3f4dd-913d-4707-84c5-d64ead736f0f","Type":"ContainerStarted","Data":"8db5165e7230354d49e216b22d1bddbbd6c0d777cfe8d00574e23d3656b914f1"} Mar 18 08:49:25.020934 master-0 kubenswrapper[7620]: I0318 08:49:25.019314 7620 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-catalogd/catalogd-controller-manager-6864dc98f7-phjp8"] Mar 18 08:49:25.020934 master-0 kubenswrapper[7620]: I0318 08:49:25.020476 7620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-phjp8" Mar 18 08:49:25.032001 master-0 kubenswrapper[7620]: I0318 08:49:25.031815 7620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"kube-root-ca.crt" Mar 18 08:49:25.032001 master-0 kubenswrapper[7620]: I0318 08:49:25.031928 7620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"openshift-service-ca.crt" Mar 18 08:49:25.032781 master-0 kubenswrapper[7620]: I0318 08:49:25.032740 7620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-catalogd/catalogd-controller-manager-6864dc98f7-phjp8"] Mar 18 08:49:25.034068 master-0 kubenswrapper[7620]: I0318 08:49:25.034029 7620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-catalogd"/"catalogserver-cert" Mar 18 08:49:25.050786 master-0 kubenswrapper[7620]: I0318 08:49:25.050101 7620 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-catalogd"/"catalogd-trusted-ca-bundle" Mar 18 08:49:25.116935 master-0 kubenswrapper[7620]: I0318 08:49:25.091261 7620 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-controller/operator-controller-controller-manager-57777556ff-chjqr"] Mar 18 08:49:25.116935 master-0 kubenswrapper[7620]: I0318 08:49:25.097925 7620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-chjqr" Mar 18 08:49:25.116935 master-0 kubenswrapper[7620]: I0318 08:49:25.102537 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/43fbd379-dd1e-4287-bd76-fd3ec51cde43-etc-containers\") pod \"catalogd-controller-manager-6864dc98f7-phjp8\" (UID: \"43fbd379-dd1e-4287-bd76-fd3ec51cde43\") " pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-phjp8" Mar 18 08:49:25.116935 master-0 kubenswrapper[7620]: I0318 08:49:25.102584 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/43fbd379-dd1e-4287-bd76-fd3ec51cde43-cache\") pod \"catalogd-controller-manager-6864dc98f7-phjp8\" (UID: \"43fbd379-dd1e-4287-bd76-fd3ec51cde43\") " pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-phjp8" Mar 18 08:49:25.116935 master-0 kubenswrapper[7620]: I0318 08:49:25.102716 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalogserver-certs\" (UniqueName: \"kubernetes.io/secret/43fbd379-dd1e-4287-bd76-fd3ec51cde43-catalogserver-certs\") pod \"catalogd-controller-manager-6864dc98f7-phjp8\" (UID: \"43fbd379-dd1e-4287-bd76-fd3ec51cde43\") " pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-phjp8" Mar 18 08:49:25.116935 master-0 kubenswrapper[7620]: I0318 08:49:25.102789 7620 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c52pj\" (UniqueName: \"kubernetes.io/projected/43fbd379-dd1e-4287-bd76-fd3ec51cde43-kube-api-access-c52pj\") pod \"catalogd-controller-manager-6864dc98f7-phjp8\" (UID: \"43fbd379-dd1e-4287-bd76-fd3ec51cde43\") " pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-phjp8" Mar 18 08:49:25.116935 master-0 kubenswrapper[7620]: I0318 08:49:25.102900 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/43fbd379-dd1e-4287-bd76-fd3ec51cde43-etc-docker\") pod \"catalogd-controller-manager-6864dc98f7-phjp8\" (UID: \"43fbd379-dd1e-4287-bd76-fd3ec51cde43\") " pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-phjp8" Mar 18 08:49:25.116935 master-0 kubenswrapper[7620]: I0318 08:49:25.102957 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/43fbd379-dd1e-4287-bd76-fd3ec51cde43-ca-certs\") pod \"catalogd-controller-manager-6864dc98f7-phjp8\" (UID: \"43fbd379-dd1e-4287-bd76-fd3ec51cde43\") " pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-phjp8" Mar 18 08:49:25.146901 master-0 kubenswrapper[7620]: I0318 08:49:25.138614 7620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"openshift-service-ca.crt" Mar 18 08:49:25.146901 master-0 kubenswrapper[7620]: I0318 08:49:25.139052 7620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"kube-root-ca.crt" Mar 18 08:49:25.167876 master-0 kubenswrapper[7620]: I0318 08:49:25.162993 7620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-controller/operator-controller-controller-manager-57777556ff-chjqr"] Mar 18 08:49:25.189312 master-0 kubenswrapper[7620]: I0318 08:49:25.181014 7620 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-operator-controller"/"operator-controller-trusted-ca-bundle" Mar 18 08:49:25.215895 master-0 kubenswrapper[7620]: I0318 08:49:25.211978 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/43fbd379-dd1e-4287-bd76-fd3ec51cde43-cache\") pod \"catalogd-controller-manager-6864dc98f7-phjp8\" (UID: \"43fbd379-dd1e-4287-bd76-fd3ec51cde43\") " pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-phjp8" Mar 18 08:49:25.215895 master-0 kubenswrapper[7620]: I0318 08:49:25.212273 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/33a5c021-23c3-4a97-b5f3-77fd6dcba1ab-ca-certs\") pod \"operator-controller-controller-manager-57777556ff-chjqr\" (UID: \"33a5c021-23c3-4a97-b5f3-77fd6dcba1ab\") " pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-chjqr" Mar 18 08:49:25.215895 master-0 kubenswrapper[7620]: I0318 08:49:25.212302 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/33a5c021-23c3-4a97-b5f3-77fd6dcba1ab-etc-containers\") pod \"operator-controller-controller-manager-57777556ff-chjqr\" (UID: \"33a5c021-23c3-4a97-b5f3-77fd6dcba1ab\") " pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-chjqr" Mar 18 08:49:25.215895 master-0 kubenswrapper[7620]: I0318 08:49:25.212335 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fbsgx\" (UniqueName: \"kubernetes.io/projected/33a5c021-23c3-4a97-b5f3-77fd6dcba1ab-kube-api-access-fbsgx\") pod \"operator-controller-controller-manager-57777556ff-chjqr\" (UID: \"33a5c021-23c3-4a97-b5f3-77fd6dcba1ab\") " 
pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-chjqr" Mar 18 08:49:25.215895 master-0 kubenswrapper[7620]: I0318 08:49:25.212391 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalogserver-certs\" (UniqueName: \"kubernetes.io/secret/43fbd379-dd1e-4287-bd76-fd3ec51cde43-catalogserver-certs\") pod \"catalogd-controller-manager-6864dc98f7-phjp8\" (UID: \"43fbd379-dd1e-4287-bd76-fd3ec51cde43\") " pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-phjp8" Mar 18 08:49:25.215895 master-0 kubenswrapper[7620]: I0318 08:49:25.212438 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/33a5c021-23c3-4a97-b5f3-77fd6dcba1ab-etc-docker\") pod \"operator-controller-controller-manager-57777556ff-chjqr\" (UID: \"33a5c021-23c3-4a97-b5f3-77fd6dcba1ab\") " pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-chjqr" Mar 18 08:49:25.215895 master-0 kubenswrapper[7620]: I0318 08:49:25.212463 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c52pj\" (UniqueName: \"kubernetes.io/projected/43fbd379-dd1e-4287-bd76-fd3ec51cde43-kube-api-access-c52pj\") pod \"catalogd-controller-manager-6864dc98f7-phjp8\" (UID: \"43fbd379-dd1e-4287-bd76-fd3ec51cde43\") " pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-phjp8" Mar 18 08:49:25.215895 master-0 kubenswrapper[7620]: I0318 08:49:25.212495 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/43fbd379-dd1e-4287-bd76-fd3ec51cde43-etc-docker\") pod \"catalogd-controller-manager-6864dc98f7-phjp8\" (UID: \"43fbd379-dd1e-4287-bd76-fd3ec51cde43\") " pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-phjp8" Mar 18 08:49:25.215895 master-0 kubenswrapper[7620]: I0318 08:49:25.212521 7620 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/43fbd379-dd1e-4287-bd76-fd3ec51cde43-ca-certs\") pod \"catalogd-controller-manager-6864dc98f7-phjp8\" (UID: \"43fbd379-dd1e-4287-bd76-fd3ec51cde43\") " pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-phjp8" Mar 18 08:49:25.215895 master-0 kubenswrapper[7620]: I0318 08:49:25.212541 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/33a5c021-23c3-4a97-b5f3-77fd6dcba1ab-cache\") pod \"operator-controller-controller-manager-57777556ff-chjqr\" (UID: \"33a5c021-23c3-4a97-b5f3-77fd6dcba1ab\") " pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-chjqr" Mar 18 08:49:25.215895 master-0 kubenswrapper[7620]: I0318 08:49:25.212566 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/43fbd379-dd1e-4287-bd76-fd3ec51cde43-etc-containers\") pod \"catalogd-controller-manager-6864dc98f7-phjp8\" (UID: \"43fbd379-dd1e-4287-bd76-fd3ec51cde43\") " pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-phjp8" Mar 18 08:49:25.215895 master-0 kubenswrapper[7620]: I0318 08:49:25.212634 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/43fbd379-dd1e-4287-bd76-fd3ec51cde43-etc-containers\") pod \"catalogd-controller-manager-6864dc98f7-phjp8\" (UID: \"43fbd379-dd1e-4287-bd76-fd3ec51cde43\") " pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-phjp8" Mar 18 08:49:25.215895 master-0 kubenswrapper[7620]: I0318 08:49:25.212918 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/43fbd379-dd1e-4287-bd76-fd3ec51cde43-cache\") pod \"catalogd-controller-manager-6864dc98f7-phjp8\" 
(UID: \"43fbd379-dd1e-4287-bd76-fd3ec51cde43\") " pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-phjp8" Mar 18 08:49:25.215895 master-0 kubenswrapper[7620]: E0318 08:49:25.213039 7620 secret.go:189] Couldn't get secret openshift-catalogd/catalogserver-cert: secret "catalogserver-cert" not found Mar 18 08:49:25.215895 master-0 kubenswrapper[7620]: E0318 08:49:25.213072 7620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/43fbd379-dd1e-4287-bd76-fd3ec51cde43-catalogserver-certs podName:43fbd379-dd1e-4287-bd76-fd3ec51cde43 nodeName:}" failed. No retries permitted until 2026-03-18 08:49:25.713059327 +0000 UTC m=+29.707841079 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "catalogserver-certs" (UniqueName: "kubernetes.io/secret/43fbd379-dd1e-4287-bd76-fd3ec51cde43-catalogserver-certs") pod "catalogd-controller-manager-6864dc98f7-phjp8" (UID: "43fbd379-dd1e-4287-bd76-fd3ec51cde43") : secret "catalogserver-cert" not found Mar 18 08:49:25.215895 master-0 kubenswrapper[7620]: I0318 08:49:25.213298 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/43fbd379-dd1e-4287-bd76-fd3ec51cde43-etc-docker\") pod \"catalogd-controller-manager-6864dc98f7-phjp8\" (UID: \"43fbd379-dd1e-4287-bd76-fd3ec51cde43\") " pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-phjp8" Mar 18 08:49:25.236877 master-0 kubenswrapper[7620]: I0318 08:49:25.230995 7620 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-556c8fbcff-5shs8"] Mar 18 08:49:25.236877 master-0 kubenswrapper[7620]: I0318 08:49:25.231743 7620 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-556c8fbcff-5shs8" Mar 18 08:49:25.236877 master-0 kubenswrapper[7620]: I0318 08:49:25.234506 7620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Mar 18 08:49:25.236877 master-0 kubenswrapper[7620]: I0318 08:49:25.234677 7620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Mar 18 08:49:25.236877 master-0 kubenswrapper[7620]: I0318 08:49:25.234912 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/43fbd379-dd1e-4287-bd76-fd3ec51cde43-ca-certs\") pod \"catalogd-controller-manager-6864dc98f7-phjp8\" (UID: \"43fbd379-dd1e-4287-bd76-fd3ec51cde43\") " pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-phjp8" Mar 18 08:49:25.237230 master-0 kubenswrapper[7620]: I0318 08:49:25.236989 7620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Mar 18 08:49:25.237230 master-0 kubenswrapper[7620]: I0318 08:49:25.237211 7620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Mar 18 08:49:25.237412 master-0 kubenswrapper[7620]: I0318 08:49:25.237386 7620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Mar 18 08:49:25.237549 master-0 kubenswrapper[7620]: I0318 08:49:25.237528 7620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Mar 18 08:49:25.237669 master-0 kubenswrapper[7620]: I0318 08:49:25.237647 7620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Mar 18 08:49:25.238144 master-0 kubenswrapper[7620]: I0318 08:49:25.238120 7620 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Mar 18 08:49:25.269877 master-0 kubenswrapper[7620]: I0318 08:49:25.267809 7620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-556c8fbcff-5shs8"] Mar 18 08:49:25.282873 master-0 kubenswrapper[7620]: I0318 08:49:25.273239 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c52pj\" (UniqueName: \"kubernetes.io/projected/43fbd379-dd1e-4287-bd76-fd3ec51cde43-kube-api-access-c52pj\") pod \"catalogd-controller-manager-6864dc98f7-phjp8\" (UID: \"43fbd379-dd1e-4287-bd76-fd3ec51cde43\") " pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-phjp8" Mar 18 08:49:25.292900 master-0 kubenswrapper[7620]: I0318 08:49:25.288216 7620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd/installer-1-master-0"] Mar 18 08:49:25.320534 master-0 kubenswrapper[7620]: I0318 08:49:25.320484 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dqldd\" (UniqueName: \"kubernetes.io/projected/2700f537-8f31-4380-a527-3e697a8122cc-kube-api-access-dqldd\") pod \"apiserver-556c8fbcff-5shs8\" (UID: \"2700f537-8f31-4380-a527-3e697a8122cc\") " pod="openshift-oauth-apiserver/apiserver-556c8fbcff-5shs8" Mar 18 08:49:25.320667 master-0 kubenswrapper[7620]: I0318 08:49:25.320558 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/33a5c021-23c3-4a97-b5f3-77fd6dcba1ab-etc-containers\") pod \"operator-controller-controller-manager-57777556ff-chjqr\" (UID: \"33a5c021-23c3-4a97-b5f3-77fd6dcba1ab\") " pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-chjqr" Mar 18 08:49:25.320667 master-0 kubenswrapper[7620]: I0318 08:49:25.320601 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fbsgx\" (UniqueName: 
\"kubernetes.io/projected/33a5c021-23c3-4a97-b5f3-77fd6dcba1ab-kube-api-access-fbsgx\") pod \"operator-controller-controller-manager-57777556ff-chjqr\" (UID: \"33a5c021-23c3-4a97-b5f3-77fd6dcba1ab\") " pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-chjqr" Mar 18 08:49:25.320667 master-0 kubenswrapper[7620]: I0318 08:49:25.320632 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/2700f537-8f31-4380-a527-3e697a8122cc-audit-dir\") pod \"apiserver-556c8fbcff-5shs8\" (UID: \"2700f537-8f31-4380-a527-3e697a8122cc\") " pod="openshift-oauth-apiserver/apiserver-556c8fbcff-5shs8" Mar 18 08:49:25.320667 master-0 kubenswrapper[7620]: I0318 08:49:25.320665 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2700f537-8f31-4380-a527-3e697a8122cc-trusted-ca-bundle\") pod \"apiserver-556c8fbcff-5shs8\" (UID: \"2700f537-8f31-4380-a527-3e697a8122cc\") " pod="openshift-oauth-apiserver/apiserver-556c8fbcff-5shs8" Mar 18 08:49:25.320885 master-0 kubenswrapper[7620]: I0318 08:49:25.320732 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2700f537-8f31-4380-a527-3e697a8122cc-serving-cert\") pod \"apiserver-556c8fbcff-5shs8\" (UID: \"2700f537-8f31-4380-a527-3e697a8122cc\") " pod="openshift-oauth-apiserver/apiserver-556c8fbcff-5shs8" Mar 18 08:49:25.320885 master-0 kubenswrapper[7620]: I0318 08:49:25.320761 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/33a5c021-23c3-4a97-b5f3-77fd6dcba1ab-etc-docker\") pod \"operator-controller-controller-manager-57777556ff-chjqr\" (UID: \"33a5c021-23c3-4a97-b5f3-77fd6dcba1ab\") " 
pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-chjqr" Mar 18 08:49:25.320885 master-0 kubenswrapper[7620]: I0318 08:49:25.320806 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/2700f537-8f31-4380-a527-3e697a8122cc-etcd-client\") pod \"apiserver-556c8fbcff-5shs8\" (UID: \"2700f537-8f31-4380-a527-3e697a8122cc\") " pod="openshift-oauth-apiserver/apiserver-556c8fbcff-5shs8" Mar 18 08:49:25.320885 master-0 kubenswrapper[7620]: I0318 08:49:25.320840 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/33a5c021-23c3-4a97-b5f3-77fd6dcba1ab-cache\") pod \"operator-controller-controller-manager-57777556ff-chjqr\" (UID: \"33a5c021-23c3-4a97-b5f3-77fd6dcba1ab\") " pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-chjqr" Mar 18 08:49:25.320885 master-0 kubenswrapper[7620]: I0318 08:49:25.320877 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/2700f537-8f31-4380-a527-3e697a8122cc-encryption-config\") pod \"apiserver-556c8fbcff-5shs8\" (UID: \"2700f537-8f31-4380-a527-3e697a8122cc\") " pod="openshift-oauth-apiserver/apiserver-556c8fbcff-5shs8" Mar 18 08:49:25.321078 master-0 kubenswrapper[7620]: I0318 08:49:25.320897 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/2700f537-8f31-4380-a527-3e697a8122cc-etcd-serving-ca\") pod \"apiserver-556c8fbcff-5shs8\" (UID: \"2700f537-8f31-4380-a527-3e697a8122cc\") " pod="openshift-oauth-apiserver/apiserver-556c8fbcff-5shs8" Mar 18 08:49:25.321078 master-0 kubenswrapper[7620]: I0318 08:49:25.320923 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/2700f537-8f31-4380-a527-3e697a8122cc-audit-policies\") pod \"apiserver-556c8fbcff-5shs8\" (UID: \"2700f537-8f31-4380-a527-3e697a8122cc\") " pod="openshift-oauth-apiserver/apiserver-556c8fbcff-5shs8" Mar 18 08:49:25.321078 master-0 kubenswrapper[7620]: I0318 08:49:25.320943 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/33a5c021-23c3-4a97-b5f3-77fd6dcba1ab-ca-certs\") pod \"operator-controller-controller-manager-57777556ff-chjqr\" (UID: \"33a5c021-23c3-4a97-b5f3-77fd6dcba1ab\") " pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-chjqr" Mar 18 08:49:25.338897 master-0 kubenswrapper[7620]: I0318 08:49:25.338841 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/33a5c021-23c3-4a97-b5f3-77fd6dcba1ab-ca-certs\") pod \"operator-controller-controller-manager-57777556ff-chjqr\" (UID: \"33a5c021-23c3-4a97-b5f3-77fd6dcba1ab\") " pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-chjqr" Mar 18 08:49:25.341198 master-0 kubenswrapper[7620]: I0318 08:49:25.341168 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/33a5c021-23c3-4a97-b5f3-77fd6dcba1ab-etc-containers\") pod \"operator-controller-controller-manager-57777556ff-chjqr\" (UID: \"33a5c021-23c3-4a97-b5f3-77fd6dcba1ab\") " pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-chjqr" Mar 18 08:49:25.350346 master-0 kubenswrapper[7620]: I0318 08:49:25.350317 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/33a5c021-23c3-4a97-b5f3-77fd6dcba1ab-cache\") pod \"operator-controller-controller-manager-57777556ff-chjqr\" (UID: 
\"33a5c021-23c3-4a97-b5f3-77fd6dcba1ab\") " pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-chjqr" Mar 18 08:49:25.350470 master-0 kubenswrapper[7620]: I0318 08:49:25.350387 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/33a5c021-23c3-4a97-b5f3-77fd6dcba1ab-etc-docker\") pod \"operator-controller-controller-manager-57777556ff-chjqr\" (UID: \"33a5c021-23c3-4a97-b5f3-77fd6dcba1ab\") " pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-chjqr" Mar 18 08:49:25.387955 master-0 kubenswrapper[7620]: I0318 08:49:25.380596 7620 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-74ff5587d8-4g47k"] Mar 18 08:49:25.387955 master-0 kubenswrapper[7620]: E0318 08:49:25.380848 7620 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[client-ca], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-controller-manager/controller-manager-74ff5587d8-4g47k" podUID="cb9b74f8-6ea7-40cd-8b69-342972ab8889" Mar 18 08:49:25.419026 master-0 kubenswrapper[7620]: I0318 08:49:25.416391 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fbsgx\" (UniqueName: \"kubernetes.io/projected/33a5c021-23c3-4a97-b5f3-77fd6dcba1ab-kube-api-access-fbsgx\") pod \"operator-controller-controller-manager-57777556ff-chjqr\" (UID: \"33a5c021-23c3-4a97-b5f3-77fd6dcba1ab\") " pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-chjqr" Mar 18 08:49:25.422601 master-0 kubenswrapper[7620]: I0318 08:49:25.422454 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/2700f537-8f31-4380-a527-3e697a8122cc-audit-dir\") pod \"apiserver-556c8fbcff-5shs8\" (UID: \"2700f537-8f31-4380-a527-3e697a8122cc\") " 
pod="openshift-oauth-apiserver/apiserver-556c8fbcff-5shs8" Mar 18 08:49:25.422601 master-0 kubenswrapper[7620]: I0318 08:49:25.422575 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/2700f537-8f31-4380-a527-3e697a8122cc-audit-dir\") pod \"apiserver-556c8fbcff-5shs8\" (UID: \"2700f537-8f31-4380-a527-3e697a8122cc\") " pod="openshift-oauth-apiserver/apiserver-556c8fbcff-5shs8" Mar 18 08:49:25.422724 master-0 kubenswrapper[7620]: I0318 08:49:25.422704 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2700f537-8f31-4380-a527-3e697a8122cc-trusted-ca-bundle\") pod \"apiserver-556c8fbcff-5shs8\" (UID: \"2700f537-8f31-4380-a527-3e697a8122cc\") " pod="openshift-oauth-apiserver/apiserver-556c8fbcff-5shs8" Mar 18 08:49:25.422860 master-0 kubenswrapper[7620]: I0318 08:49:25.422788 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2700f537-8f31-4380-a527-3e697a8122cc-serving-cert\") pod \"apiserver-556c8fbcff-5shs8\" (UID: \"2700f537-8f31-4380-a527-3e697a8122cc\") " pod="openshift-oauth-apiserver/apiserver-556c8fbcff-5shs8" Mar 18 08:49:25.423264 master-0 kubenswrapper[7620]: E0318 08:49:25.423242 7620 secret.go:189] Couldn't get secret openshift-oauth-apiserver/serving-cert: secret "serving-cert" not found Mar 18 08:49:25.423324 master-0 kubenswrapper[7620]: E0318 08:49:25.423298 7620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2700f537-8f31-4380-a527-3e697a8122cc-serving-cert podName:2700f537-8f31-4380-a527-3e697a8122cc nodeName:}" failed. No retries permitted until 2026-03-18 08:49:25.923283086 +0000 UTC m=+29.918064838 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/2700f537-8f31-4380-a527-3e697a8122cc-serving-cert") pod "apiserver-556c8fbcff-5shs8" (UID: "2700f537-8f31-4380-a527-3e697a8122cc") : secret "serving-cert" not found Mar 18 08:49:25.423603 master-0 kubenswrapper[7620]: I0318 08:49:25.423579 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/2700f537-8f31-4380-a527-3e697a8122cc-etcd-client\") pod \"apiserver-556c8fbcff-5shs8\" (UID: \"2700f537-8f31-4380-a527-3e697a8122cc\") " pod="openshift-oauth-apiserver/apiserver-556c8fbcff-5shs8" Mar 18 08:49:25.423667 master-0 kubenswrapper[7620]: I0318 08:49:25.423612 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/2700f537-8f31-4380-a527-3e697a8122cc-encryption-config\") pod \"apiserver-556c8fbcff-5shs8\" (UID: \"2700f537-8f31-4380-a527-3e697a8122cc\") " pod="openshift-oauth-apiserver/apiserver-556c8fbcff-5shs8" Mar 18 08:49:25.423667 master-0 kubenswrapper[7620]: I0318 08:49:25.423632 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/2700f537-8f31-4380-a527-3e697a8122cc-etcd-serving-ca\") pod \"apiserver-556c8fbcff-5shs8\" (UID: \"2700f537-8f31-4380-a527-3e697a8122cc\") " pod="openshift-oauth-apiserver/apiserver-556c8fbcff-5shs8" Mar 18 08:49:25.423667 master-0 kubenswrapper[7620]: I0318 08:49:25.423651 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/2700f537-8f31-4380-a527-3e697a8122cc-audit-policies\") pod \"apiserver-556c8fbcff-5shs8\" (UID: \"2700f537-8f31-4380-a527-3e697a8122cc\") " pod="openshift-oauth-apiserver/apiserver-556c8fbcff-5shs8" Mar 18 08:49:25.423749 master-0 kubenswrapper[7620]: I0318 08:49:25.423670 7620 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dqldd\" (UniqueName: \"kubernetes.io/projected/2700f537-8f31-4380-a527-3e697a8122cc-kube-api-access-dqldd\") pod \"apiserver-556c8fbcff-5shs8\" (UID: \"2700f537-8f31-4380-a527-3e697a8122cc\") " pod="openshift-oauth-apiserver/apiserver-556c8fbcff-5shs8" Mar 18 08:49:25.424165 master-0 kubenswrapper[7620]: I0318 08:49:25.424140 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2700f537-8f31-4380-a527-3e697a8122cc-trusted-ca-bundle\") pod \"apiserver-556c8fbcff-5shs8\" (UID: \"2700f537-8f31-4380-a527-3e697a8122cc\") " pod="openshift-oauth-apiserver/apiserver-556c8fbcff-5shs8" Mar 18 08:49:25.424981 master-0 kubenswrapper[7620]: I0318 08:49:25.424969 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/2700f537-8f31-4380-a527-3e697a8122cc-etcd-serving-ca\") pod \"apiserver-556c8fbcff-5shs8\" (UID: \"2700f537-8f31-4380-a527-3e697a8122cc\") " pod="openshift-oauth-apiserver/apiserver-556c8fbcff-5shs8" Mar 18 08:49:25.426194 master-0 kubenswrapper[7620]: I0318 08:49:25.426170 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/2700f537-8f31-4380-a527-3e697a8122cc-audit-policies\") pod \"apiserver-556c8fbcff-5shs8\" (UID: \"2700f537-8f31-4380-a527-3e697a8122cc\") " pod="openshift-oauth-apiserver/apiserver-556c8fbcff-5shs8" Mar 18 08:49:25.429704 master-0 kubenswrapper[7620]: I0318 08:49:25.429659 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/2700f537-8f31-4380-a527-3e697a8122cc-encryption-config\") pod \"apiserver-556c8fbcff-5shs8\" (UID: \"2700f537-8f31-4380-a527-3e697a8122cc\") " pod="openshift-oauth-apiserver/apiserver-556c8fbcff-5shs8" Mar 18 
08:49:25.435798 master-0 kubenswrapper[7620]: I0318 08:49:25.435764 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/2700f537-8f31-4380-a527-3e697a8122cc-etcd-client\") pod \"apiserver-556c8fbcff-5shs8\" (UID: \"2700f537-8f31-4380-a527-3e697a8122cc\") " pod="openshift-oauth-apiserver/apiserver-556c8fbcff-5shs8" Mar 18 08:49:25.466921 master-0 kubenswrapper[7620]: I0318 08:49:25.463640 7620 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-78cdbfbbdd-j26k9"] Mar 18 08:49:25.466921 master-0 kubenswrapper[7620]: E0318 08:49:25.464102 7620 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[client-ca serving-cert], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-route-controller-manager/route-controller-manager-78cdbfbbdd-j26k9" podUID="a491fb9d-b7c2-4086-8dd6-ba5a77dc446c" Mar 18 08:49:25.497529 master-0 kubenswrapper[7620]: I0318 08:49:25.497473 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dqldd\" (UniqueName: \"kubernetes.io/projected/2700f537-8f31-4380-a527-3e697a8122cc-kube-api-access-dqldd\") pod \"apiserver-556c8fbcff-5shs8\" (UID: \"2700f537-8f31-4380-a527-3e697a8122cc\") " pod="openshift-oauth-apiserver/apiserver-556c8fbcff-5shs8" Mar 18 08:49:25.500678 master-0 kubenswrapper[7620]: I0318 08:49:25.500246 7620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-chjqr" Mar 18 08:49:25.644700 master-0 kubenswrapper[7620]: I0318 08:49:25.644317 7620 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-node-tuning-operator/tuned-zzqc6"] Mar 18 08:49:25.645279 master-0 kubenswrapper[7620]: I0318 08:49:25.645260 7620 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-node-tuning-operator/tuned-zzqc6" Mar 18 08:49:25.684573 master-0 kubenswrapper[7620]: I0318 08:49:25.684095 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-tj9b9" event={"ID":"bd8ee9ae-4765-46bc-84d7-b5857fc3fb4a","Type":"ContainerStarted","Data":"1386a79cb00543c49a1948cd40fdfe98de1aaaca6d85668494cc7088d84ed830"} Mar 18 08:49:25.689583 master-0 kubenswrapper[7620]: I0318 08:49:25.689539 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-1-master-0" event={"ID":"e9a3f4dd-913d-4707-84c5-d64ead736f0f","Type":"ContainerStarted","Data":"5e0c3ea7554f76fe478ba87238a8f52a7e84e0ca4323bf58986273a5880e93c2"} Mar 18 08:49:25.701214 master-0 kubenswrapper[7620]: I0318 08:49:25.701156 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-1-master-0" event={"ID":"1ecff6b2-dbd4-4366-873b-2170d0b76c0f","Type":"ContainerStarted","Data":"cff5a62c6fe250b627c150b3ba60d6fe2a04d4b96c22543f1ae21c885d156295"} Mar 18 08:49:25.701214 master-0 kubenswrapper[7620]: I0318 08:49:25.701200 7620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-74ff5587d8-4g47k" Mar 18 08:49:25.701340 master-0 kubenswrapper[7620]: I0318 08:49:25.701197 7620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-78cdbfbbdd-j26k9" Mar 18 08:49:25.720562 master-0 kubenswrapper[7620]: I0318 08:49:25.720521 7620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-74ff5587d8-4g47k" Mar 18 08:49:25.725438 master-0 kubenswrapper[7620]: I0318 08:49:25.724609 7620 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-78cdbfbbdd-j26k9" Mar 18 08:49:25.729059 master-0 kubenswrapper[7620]: I0318 08:49:25.728206 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/f826efe0-60a1-4465-b8d0-d4069ed507a1-tmp\") pod \"tuned-zzqc6\" (UID: \"f826efe0-60a1-4465-b8d0-d4069ed507a1\") " pod="openshift-cluster-node-tuning-operator/tuned-zzqc6" Mar 18 08:49:25.729059 master-0 kubenswrapper[7620]: I0318 08:49:25.728262 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6bzxp\" (UniqueName: \"kubernetes.io/projected/f826efe0-60a1-4465-b8d0-d4069ed507a1-kube-api-access-6bzxp\") pod \"tuned-zzqc6\" (UID: \"f826efe0-60a1-4465-b8d0-d4069ed507a1\") " pod="openshift-cluster-node-tuning-operator/tuned-zzqc6" Mar 18 08:49:25.729059 master-0 kubenswrapper[7620]: I0318 08:49:25.728308 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalogserver-certs\" (UniqueName: \"kubernetes.io/secret/43fbd379-dd1e-4287-bd76-fd3ec51cde43-catalogserver-certs\") pod \"catalogd-controller-manager-6864dc98f7-phjp8\" (UID: \"43fbd379-dd1e-4287-bd76-fd3ec51cde43\") " pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-phjp8" Mar 18 08:49:25.729059 master-0 kubenswrapper[7620]: I0318 08:49:25.728435 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-sysctl-d\" (UniqueName: \"kubernetes.io/host-path/f826efe0-60a1-4465-b8d0-d4069ed507a1-etc-sysctl-d\") pod \"tuned-zzqc6\" (UID: \"f826efe0-60a1-4465-b8d0-d4069ed507a1\") " pod="openshift-cluster-node-tuning-operator/tuned-zzqc6" Mar 18 08:49:25.729059 master-0 kubenswrapper[7620]: I0318 08:49:25.728461 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: 
\"kubernetes.io/host-path/f826efe0-60a1-4465-b8d0-d4069ed507a1-sys\") pod \"tuned-zzqc6\" (UID: \"f826efe0-60a1-4465-b8d0-d4069ed507a1\") " pod="openshift-cluster-node-tuning-operator/tuned-zzqc6" Mar 18 08:49:25.729059 master-0 kubenswrapper[7620]: I0318 08:49:25.728495 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-sysctl-conf\" (UniqueName: \"kubernetes.io/host-path/f826efe0-60a1-4465-b8d0-d4069ed507a1-etc-sysctl-conf\") pod \"tuned-zzqc6\" (UID: \"f826efe0-60a1-4465-b8d0-d4069ed507a1\") " pod="openshift-cluster-node-tuning-operator/tuned-zzqc6" Mar 18 08:49:25.729059 master-0 kubenswrapper[7620]: I0318 08:49:25.728514 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-sysconfig\" (UniqueName: \"kubernetes.io/host-path/f826efe0-60a1-4465-b8d0-d4069ed507a1-etc-sysconfig\") pod \"tuned-zzqc6\" (UID: \"f826efe0-60a1-4465-b8d0-d4069ed507a1\") " pod="openshift-cluster-node-tuning-operator/tuned-zzqc6" Mar 18 08:49:25.729059 master-0 kubenswrapper[7620]: I0318 08:49:25.728545 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-tuned\" (UniqueName: \"kubernetes.io/empty-dir/f826efe0-60a1-4465-b8d0-d4069ed507a1-etc-tuned\") pod \"tuned-zzqc6\" (UID: \"f826efe0-60a1-4465-b8d0-d4069ed507a1\") " pod="openshift-cluster-node-tuning-operator/tuned-zzqc6" Mar 18 08:49:25.729059 master-0 kubenswrapper[7620]: I0318 08:49:25.728587 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/f826efe0-60a1-4465-b8d0-d4069ed507a1-etc-kubernetes\") pod \"tuned-zzqc6\" (UID: \"f826efe0-60a1-4465-b8d0-d4069ed507a1\") " pod="openshift-cluster-node-tuning-operator/tuned-zzqc6" Mar 18 08:49:25.729059 master-0 kubenswrapper[7620]: I0318 08:49:25.728606 7620 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-systemd\" (UniqueName: \"kubernetes.io/host-path/f826efe0-60a1-4465-b8d0-d4069ed507a1-etc-systemd\") pod \"tuned-zzqc6\" (UID: \"f826efe0-60a1-4465-b8d0-d4069ed507a1\") " pod="openshift-cluster-node-tuning-operator/tuned-zzqc6" Mar 18 08:49:25.729059 master-0 kubenswrapper[7620]: I0318 08:49:25.728623 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/f826efe0-60a1-4465-b8d0-d4069ed507a1-run\") pod \"tuned-zzqc6\" (UID: \"f826efe0-60a1-4465-b8d0-d4069ed507a1\") " pod="openshift-cluster-node-tuning-operator/tuned-zzqc6" Mar 18 08:49:25.729059 master-0 kubenswrapper[7620]: I0318 08:49:25.728671 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f826efe0-60a1-4465-b8d0-d4069ed507a1-lib-modules\") pod \"tuned-zzqc6\" (UID: \"f826efe0-60a1-4465-b8d0-d4069ed507a1\") " pod="openshift-cluster-node-tuning-operator/tuned-zzqc6" Mar 18 08:49:25.729059 master-0 kubenswrapper[7620]: I0318 08:49:25.728697 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/f826efe0-60a1-4465-b8d0-d4069ed507a1-var-lib-kubelet\") pod \"tuned-zzqc6\" (UID: \"f826efe0-60a1-4465-b8d0-d4069ed507a1\") " pod="openshift-cluster-node-tuning-operator/tuned-zzqc6" Mar 18 08:49:25.729059 master-0 kubenswrapper[7620]: I0318 08:49:25.728717 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f826efe0-60a1-4465-b8d0-d4069ed507a1-host\") pod \"tuned-zzqc6\" (UID: \"f826efe0-60a1-4465-b8d0-d4069ed507a1\") " pod="openshift-cluster-node-tuning-operator/tuned-zzqc6" Mar 18 08:49:25.729059 master-0 kubenswrapper[7620]: I0318 08:49:25.728743 7620 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-modprobe-d\" (UniqueName: \"kubernetes.io/host-path/f826efe0-60a1-4465-b8d0-d4069ed507a1-etc-modprobe-d\") pod \"tuned-zzqc6\" (UID: \"f826efe0-60a1-4465-b8d0-d4069ed507a1\") " pod="openshift-cluster-node-tuning-operator/tuned-zzqc6" Mar 18 08:49:25.729059 master-0 kubenswrapper[7620]: E0318 08:49:25.728945 7620 secret.go:189] Couldn't get secret openshift-catalogd/catalogserver-cert: secret "catalogserver-cert" not found Mar 18 08:49:25.729059 master-0 kubenswrapper[7620]: E0318 08:49:25.728991 7620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/43fbd379-dd1e-4287-bd76-fd3ec51cde43-catalogserver-certs podName:43fbd379-dd1e-4287-bd76-fd3ec51cde43 nodeName:}" failed. No retries permitted until 2026-03-18 08:49:26.728975882 +0000 UTC m=+30.723757634 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "catalogserver-certs" (UniqueName: "kubernetes.io/secret/43fbd379-dd1e-4287-bd76-fd3ec51cde43-catalogserver-certs") pod "catalogd-controller-manager-6864dc98f7-phjp8" (UID: "43fbd379-dd1e-4287-bd76-fd3ec51cde43") : secret "catalogserver-cert" not found Mar 18 08:49:25.777235 master-0 kubenswrapper[7620]: I0318 08:49:25.777149 7620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/installer-1-master-0" podStartSLOduration=7.777127578 podStartE2EDuration="7.777127578s" podCreationTimestamp="2026-03-18 08:49:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 08:49:25.754952596 +0000 UTC m=+29.749734348" watchObservedRunningTime="2026-03-18 08:49:25.777127578 +0000 UTC m=+29.771909330" Mar 18 08:49:25.778328 master-0 kubenswrapper[7620]: I0318 08:49:25.778296 7620 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-operator-controller/operator-controller-controller-manager-57777556ff-chjqr"] Mar 18 08:49:25.789612 master-0 kubenswrapper[7620]: W0318 08:49:25.789561 7620 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod33a5c021_23c3_4a97_b5f3_77fd6dcba1ab.slice/crio-01fc205ca60889e86b938272f49efc7613d39ee0f345e6249d36f7dbe33a148e WatchSource:0}: Error finding container 01fc205ca60889e86b938272f49efc7613d39ee0f345e6249d36f7dbe33a148e: Status 404 returned error can't find the container with id 01fc205ca60889e86b938272f49efc7613d39ee0f345e6249d36f7dbe33a148e Mar 18 08:49:25.829414 master-0 kubenswrapper[7620]: I0318 08:49:25.829378 7620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a491fb9d-b7c2-4086-8dd6-ba5a77dc446c-config\") pod \"a491fb9d-b7c2-4086-8dd6-ba5a77dc446c\" (UID: \"a491fb9d-b7c2-4086-8dd6-ba5a77dc446c\") " Mar 18 08:49:25.829569 master-0 kubenswrapper[7620]: I0318 08:49:25.829427 7620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4b656\" (UniqueName: \"kubernetes.io/projected/a491fb9d-b7c2-4086-8dd6-ba5a77dc446c-kube-api-access-4b656\") pod \"a491fb9d-b7c2-4086-8dd6-ba5a77dc446c\" (UID: \"a491fb9d-b7c2-4086-8dd6-ba5a77dc446c\") " Mar 18 08:49:25.829569 master-0 kubenswrapper[7620]: I0318 08:49:25.829459 7620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cb9b74f8-6ea7-40cd-8b69-342972ab8889-serving-cert\") pod \"cb9b74f8-6ea7-40cd-8b69-342972ab8889\" (UID: \"cb9b74f8-6ea7-40cd-8b69-342972ab8889\") " Mar 18 08:49:25.829569 master-0 kubenswrapper[7620]: I0318 08:49:25.829479 7620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cb9b74f8-6ea7-40cd-8b69-342972ab8889-config\") pod 
\"cb9b74f8-6ea7-40cd-8b69-342972ab8889\" (UID: \"cb9b74f8-6ea7-40cd-8b69-342972ab8889\") " Mar 18 08:49:25.829569 master-0 kubenswrapper[7620]: I0318 08:49:25.829498 7620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/cb9b74f8-6ea7-40cd-8b69-342972ab8889-proxy-ca-bundles\") pod \"cb9b74f8-6ea7-40cd-8b69-342972ab8889\" (UID: \"cb9b74f8-6ea7-40cd-8b69-342972ab8889\") " Mar 18 08:49:25.829715 master-0 kubenswrapper[7620]: I0318 08:49:25.829580 7620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4xtrw\" (UniqueName: \"kubernetes.io/projected/cb9b74f8-6ea7-40cd-8b69-342972ab8889-kube-api-access-4xtrw\") pod \"cb9b74f8-6ea7-40cd-8b69-342972ab8889\" (UID: \"cb9b74f8-6ea7-40cd-8b69-342972ab8889\") " Mar 18 08:49:25.829715 master-0 kubenswrapper[7620]: I0318 08:49:25.829685 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-sysconfig\" (UniqueName: \"kubernetes.io/host-path/f826efe0-60a1-4465-b8d0-d4069ed507a1-etc-sysconfig\") pod \"tuned-zzqc6\" (UID: \"f826efe0-60a1-4465-b8d0-d4069ed507a1\") " pod="openshift-cluster-node-tuning-operator/tuned-zzqc6" Mar 18 08:49:25.829772 master-0 kubenswrapper[7620]: I0318 08:49:25.829724 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-tuned\" (UniqueName: \"kubernetes.io/empty-dir/f826efe0-60a1-4465-b8d0-d4069ed507a1-etc-tuned\") pod \"tuned-zzqc6\" (UID: \"f826efe0-60a1-4465-b8d0-d4069ed507a1\") " pod="openshift-cluster-node-tuning-operator/tuned-zzqc6" Mar 18 08:49:25.829772 master-0 kubenswrapper[7620]: I0318 08:49:25.829763 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/f826efe0-60a1-4465-b8d0-d4069ed507a1-etc-kubernetes\") pod \"tuned-zzqc6\" (UID: \"f826efe0-60a1-4465-b8d0-d4069ed507a1\") " 
pod="openshift-cluster-node-tuning-operator/tuned-zzqc6" Mar 18 08:49:25.829828 master-0 kubenswrapper[7620]: I0318 08:49:25.829784 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-systemd\" (UniqueName: \"kubernetes.io/host-path/f826efe0-60a1-4465-b8d0-d4069ed507a1-etc-systemd\") pod \"tuned-zzqc6\" (UID: \"f826efe0-60a1-4465-b8d0-d4069ed507a1\") " pod="openshift-cluster-node-tuning-operator/tuned-zzqc6" Mar 18 08:49:25.829828 master-0 kubenswrapper[7620]: I0318 08:49:25.829803 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/f826efe0-60a1-4465-b8d0-d4069ed507a1-run\") pod \"tuned-zzqc6\" (UID: \"f826efe0-60a1-4465-b8d0-d4069ed507a1\") " pod="openshift-cluster-node-tuning-operator/tuned-zzqc6" Mar 18 08:49:25.829909 master-0 kubenswrapper[7620]: I0318 08:49:25.829834 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f826efe0-60a1-4465-b8d0-d4069ed507a1-lib-modules\") pod \"tuned-zzqc6\" (UID: \"f826efe0-60a1-4465-b8d0-d4069ed507a1\") " pod="openshift-cluster-node-tuning-operator/tuned-zzqc6" Mar 18 08:49:25.829909 master-0 kubenswrapper[7620]: I0318 08:49:25.829872 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/f826efe0-60a1-4465-b8d0-d4069ed507a1-var-lib-kubelet\") pod \"tuned-zzqc6\" (UID: \"f826efe0-60a1-4465-b8d0-d4069ed507a1\") " pod="openshift-cluster-node-tuning-operator/tuned-zzqc6" Mar 18 08:49:25.829909 master-0 kubenswrapper[7620]: I0318 08:49:25.829889 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f826efe0-60a1-4465-b8d0-d4069ed507a1-host\") pod \"tuned-zzqc6\" (UID: \"f826efe0-60a1-4465-b8d0-d4069ed507a1\") " pod="openshift-cluster-node-tuning-operator/tuned-zzqc6" Mar 18 
08:49:25.829909 master-0 kubenswrapper[7620]: I0318 08:49:25.829907 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-modprobe-d\" (UniqueName: \"kubernetes.io/host-path/f826efe0-60a1-4465-b8d0-d4069ed507a1-etc-modprobe-d\") pod \"tuned-zzqc6\" (UID: \"f826efe0-60a1-4465-b8d0-d4069ed507a1\") " pod="openshift-cluster-node-tuning-operator/tuned-zzqc6" Mar 18 08:49:25.830017 master-0 kubenswrapper[7620]: I0318 08:49:25.829947 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/f826efe0-60a1-4465-b8d0-d4069ed507a1-tmp\") pod \"tuned-zzqc6\" (UID: \"f826efe0-60a1-4465-b8d0-d4069ed507a1\") " pod="openshift-cluster-node-tuning-operator/tuned-zzqc6" Mar 18 08:49:25.830017 master-0 kubenswrapper[7620]: I0318 08:49:25.829967 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6bzxp\" (UniqueName: \"kubernetes.io/projected/f826efe0-60a1-4465-b8d0-d4069ed507a1-kube-api-access-6bzxp\") pod \"tuned-zzqc6\" (UID: \"f826efe0-60a1-4465-b8d0-d4069ed507a1\") " pod="openshift-cluster-node-tuning-operator/tuned-zzqc6" Mar 18 08:49:25.830103 master-0 kubenswrapper[7620]: I0318 08:49:25.830053 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-sysctl-d\" (UniqueName: \"kubernetes.io/host-path/f826efe0-60a1-4465-b8d0-d4069ed507a1-etc-sysctl-d\") pod \"tuned-zzqc6\" (UID: \"f826efe0-60a1-4465-b8d0-d4069ed507a1\") " pod="openshift-cluster-node-tuning-operator/tuned-zzqc6" Mar 18 08:49:25.830139 master-0 kubenswrapper[7620]: I0318 08:49:25.830110 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/f826efe0-60a1-4465-b8d0-d4069ed507a1-sys\") pod \"tuned-zzqc6\" (UID: \"f826efe0-60a1-4465-b8d0-d4069ed507a1\") " pod="openshift-cluster-node-tuning-operator/tuned-zzqc6" Mar 18 08:49:25.830168 master-0 kubenswrapper[7620]: I0318 
08:49:25.830137 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-sysctl-conf\" (UniqueName: \"kubernetes.io/host-path/f826efe0-60a1-4465-b8d0-d4069ed507a1-etc-sysctl-conf\") pod \"tuned-zzqc6\" (UID: \"f826efe0-60a1-4465-b8d0-d4069ed507a1\") " pod="openshift-cluster-node-tuning-operator/tuned-zzqc6" Mar 18 08:49:25.830368 master-0 kubenswrapper[7620]: I0318 08:49:25.830338 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-sysctl-conf\" (UniqueName: \"kubernetes.io/host-path/f826efe0-60a1-4465-b8d0-d4069ed507a1-etc-sysctl-conf\") pod \"tuned-zzqc6\" (UID: \"f826efe0-60a1-4465-b8d0-d4069ed507a1\") " pod="openshift-cluster-node-tuning-operator/tuned-zzqc6" Mar 18 08:49:25.830692 master-0 kubenswrapper[7620]: I0318 08:49:25.830667 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-systemd\" (UniqueName: \"kubernetes.io/host-path/f826efe0-60a1-4465-b8d0-d4069ed507a1-etc-systemd\") pod \"tuned-zzqc6\" (UID: \"f826efe0-60a1-4465-b8d0-d4069ed507a1\") " pod="openshift-cluster-node-tuning-operator/tuned-zzqc6" Mar 18 08:49:25.832132 master-0 kubenswrapper[7620]: I0318 08:49:25.831715 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/f826efe0-60a1-4465-b8d0-d4069ed507a1-var-lib-kubelet\") pod \"tuned-zzqc6\" (UID: \"f826efe0-60a1-4465-b8d0-d4069ed507a1\") " pod="openshift-cluster-node-tuning-operator/tuned-zzqc6" Mar 18 08:49:25.832132 master-0 kubenswrapper[7620]: I0318 08:49:25.832027 7620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cb9b74f8-6ea7-40cd-8b69-342972ab8889-config" (OuterVolumeSpecName: "config") pod "cb9b74f8-6ea7-40cd-8b69-342972ab8889" (UID: "cb9b74f8-6ea7-40cd-8b69-342972ab8889"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 08:49:25.832132 master-0 kubenswrapper[7620]: I0318 08:49:25.832034 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/f826efe0-60a1-4465-b8d0-d4069ed507a1-run\") pod \"tuned-zzqc6\" (UID: \"f826efe0-60a1-4465-b8d0-d4069ed507a1\") " pod="openshift-cluster-node-tuning-operator/tuned-zzqc6" Mar 18 08:49:25.832132 master-0 kubenswrapper[7620]: I0318 08:49:25.832060 7620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cb9b74f8-6ea7-40cd-8b69-342972ab8889-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "cb9b74f8-6ea7-40cd-8b69-342972ab8889" (UID: "cb9b74f8-6ea7-40cd-8b69-342972ab8889"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 08:49:25.832291 master-0 kubenswrapper[7620]: I0318 08:49:25.832137 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-modprobe-d\" (UniqueName: \"kubernetes.io/host-path/f826efe0-60a1-4465-b8d0-d4069ed507a1-etc-modprobe-d\") pod \"tuned-zzqc6\" (UID: \"f826efe0-60a1-4465-b8d0-d4069ed507a1\") " pod="openshift-cluster-node-tuning-operator/tuned-zzqc6" Mar 18 08:49:25.832291 master-0 kubenswrapper[7620]: I0318 08:49:25.832183 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/f826efe0-60a1-4465-b8d0-d4069ed507a1-sys\") pod \"tuned-zzqc6\" (UID: \"f826efe0-60a1-4465-b8d0-d4069ed507a1\") " pod="openshift-cluster-node-tuning-operator/tuned-zzqc6" Mar 18 08:49:25.832291 master-0 kubenswrapper[7620]: I0318 08:49:25.832215 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/f826efe0-60a1-4465-b8d0-d4069ed507a1-etc-kubernetes\") pod \"tuned-zzqc6\" (UID: \"f826efe0-60a1-4465-b8d0-d4069ed507a1\") " 
pod="openshift-cluster-node-tuning-operator/tuned-zzqc6"
Mar 18 08:49:25.832291 master-0 kubenswrapper[7620]: I0318 08:49:25.832213 7620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a491fb9d-b7c2-4086-8dd6-ba5a77dc446c-config" (OuterVolumeSpecName: "config") pod "a491fb9d-b7c2-4086-8dd6-ba5a77dc446c" (UID: "a491fb9d-b7c2-4086-8dd6-ba5a77dc446c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 18 08:49:25.832443 master-0 kubenswrapper[7620]: I0318 08:49:25.832294 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-sysctl-d\" (UniqueName: \"kubernetes.io/host-path/f826efe0-60a1-4465-b8d0-d4069ed507a1-etc-sysctl-d\") pod \"tuned-zzqc6\" (UID: \"f826efe0-60a1-4465-b8d0-d4069ed507a1\") " pod="openshift-cluster-node-tuning-operator/tuned-zzqc6"
Mar 18 08:49:25.832443 master-0 kubenswrapper[7620]: I0318 08:49:25.832408 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f826efe0-60a1-4465-b8d0-d4069ed507a1-lib-modules\") pod \"tuned-zzqc6\" (UID: \"f826efe0-60a1-4465-b8d0-d4069ed507a1\") " pod="openshift-cluster-node-tuning-operator/tuned-zzqc6"
Mar 18 08:49:25.832615 master-0 kubenswrapper[7620]: I0318 08:49:25.832454 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f826efe0-60a1-4465-b8d0-d4069ed507a1-host\") pod \"tuned-zzqc6\" (UID: \"f826efe0-60a1-4465-b8d0-d4069ed507a1\") " pod="openshift-cluster-node-tuning-operator/tuned-zzqc6"
Mar 18 08:49:25.832615 master-0 kubenswrapper[7620]: I0318 08:49:25.832426 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-sysconfig\" (UniqueName: \"kubernetes.io/host-path/f826efe0-60a1-4465-b8d0-d4069ed507a1-etc-sysconfig\") pod \"tuned-zzqc6\" (UID: \"f826efe0-60a1-4465-b8d0-d4069ed507a1\") " pod="openshift-cluster-node-tuning-operator/tuned-zzqc6"
Mar 18 08:49:25.836261 master-0 kubenswrapper[7620]: I0318 08:49:25.836167 7620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cb9b74f8-6ea7-40cd-8b69-342972ab8889-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "cb9b74f8-6ea7-40cd-8b69-342972ab8889" (UID: "cb9b74f8-6ea7-40cd-8b69-342972ab8889"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 18 08:49:25.836568 master-0 kubenswrapper[7620]: I0318 08:49:25.836540 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/f826efe0-60a1-4465-b8d0-d4069ed507a1-tmp\") pod \"tuned-zzqc6\" (UID: \"f826efe0-60a1-4465-b8d0-d4069ed507a1\") " pod="openshift-cluster-node-tuning-operator/tuned-zzqc6"
Mar 18 08:49:25.837040 master-0 kubenswrapper[7620]: I0318 08:49:25.837009 7620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cb9b74f8-6ea7-40cd-8b69-342972ab8889-kube-api-access-4xtrw" (OuterVolumeSpecName: "kube-api-access-4xtrw") pod "cb9b74f8-6ea7-40cd-8b69-342972ab8889" (UID: "cb9b74f8-6ea7-40cd-8b69-342972ab8889"). InnerVolumeSpecName "kube-api-access-4xtrw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 18 08:49:25.837313 master-0 kubenswrapper[7620]: I0318 08:49:25.837291 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-tuned\" (UniqueName: \"kubernetes.io/empty-dir/f826efe0-60a1-4465-b8d0-d4069ed507a1-etc-tuned\") pod \"tuned-zzqc6\" (UID: \"f826efe0-60a1-4465-b8d0-d4069ed507a1\") " pod="openshift-cluster-node-tuning-operator/tuned-zzqc6"
Mar 18 08:49:25.840023 master-0 kubenswrapper[7620]: I0318 08:49:25.839973 7620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a491fb9d-b7c2-4086-8dd6-ba5a77dc446c-kube-api-access-4b656" (OuterVolumeSpecName: "kube-api-access-4b656") pod "a491fb9d-b7c2-4086-8dd6-ba5a77dc446c" (UID: "a491fb9d-b7c2-4086-8dd6-ba5a77dc446c"). InnerVolumeSpecName "kube-api-access-4b656". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 18 08:49:25.848577 master-0 kubenswrapper[7620]: I0318 08:49:25.848534 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6bzxp\" (UniqueName: \"kubernetes.io/projected/f826efe0-60a1-4465-b8d0-d4069ed507a1-kube-api-access-6bzxp\") pod \"tuned-zzqc6\" (UID: \"f826efe0-60a1-4465-b8d0-d4069ed507a1\") " pod="openshift-cluster-node-tuning-operator/tuned-zzqc6"
Mar 18 08:49:25.931758 master-0 kubenswrapper[7620]: I0318 08:49:25.931525 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2700f537-8f31-4380-a527-3e697a8122cc-serving-cert\") pod \"apiserver-556c8fbcff-5shs8\" (UID: \"2700f537-8f31-4380-a527-3e697a8122cc\") " pod="openshift-oauth-apiserver/apiserver-556c8fbcff-5shs8"
Mar 18 08:49:25.932228 master-0 kubenswrapper[7620]: I0318 08:49:25.932175 7620 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a491fb9d-b7c2-4086-8dd6-ba5a77dc446c-config\") on node \"master-0\" DevicePath \"\""
Mar 18 08:49:25.932228 master-0 kubenswrapper[7620]: I0318 08:49:25.932200 7620 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4b656\" (UniqueName: \"kubernetes.io/projected/a491fb9d-b7c2-4086-8dd6-ba5a77dc446c-kube-api-access-4b656\") on node \"master-0\" DevicePath \"\""
Mar 18 08:49:25.932228 master-0 kubenswrapper[7620]: I0318 08:49:25.932224 7620 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cb9b74f8-6ea7-40cd-8b69-342972ab8889-serving-cert\") on node \"master-0\" DevicePath \"\""
Mar 18 08:49:25.932439 master-0 kubenswrapper[7620]: I0318 08:49:25.932236 7620 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cb9b74f8-6ea7-40cd-8b69-342972ab8889-config\") on node \"master-0\" DevicePath \"\""
Mar 18 08:49:25.932439 master-0 kubenswrapper[7620]: I0318 08:49:25.932247 7620 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/cb9b74f8-6ea7-40cd-8b69-342972ab8889-proxy-ca-bundles\") on node \"master-0\" DevicePath \"\""
Mar 18 08:49:25.932439 master-0 kubenswrapper[7620]: I0318 08:49:25.932258 7620 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4xtrw\" (UniqueName: \"kubernetes.io/projected/cb9b74f8-6ea7-40cd-8b69-342972ab8889-kube-api-access-4xtrw\") on node \"master-0\" DevicePath \"\""
Mar 18 08:49:25.935192 master-0 kubenswrapper[7620]: I0318 08:49:25.935072 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2700f537-8f31-4380-a527-3e697a8122cc-serving-cert\") pod \"apiserver-556c8fbcff-5shs8\" (UID: \"2700f537-8f31-4380-a527-3e697a8122cc\") " pod="openshift-oauth-apiserver/apiserver-556c8fbcff-5shs8"
Mar 18 08:49:25.971313 master-0 kubenswrapper[7620]: I0318 08:49:25.971204 7620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-556c8fbcff-5shs8"
Mar 18 08:49:26.019583 master-0 kubenswrapper[7620]: I0318 08:49:26.019535 7620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-node-tuning-operator/tuned-zzqc6"
Mar 18 08:49:26.710761 master-0 kubenswrapper[7620]: I0318 08:49:26.710219 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-1-master-0" event={"ID":"1ecff6b2-dbd4-4366-873b-2170d0b76c0f","Type":"ContainerStarted","Data":"010b44e43896597007413d73633a4236214230adb7cc7835885b7a52a1e627ab"}
Mar 18 08:49:26.717772 master-0 kubenswrapper[7620]: I0318 08:49:26.717427 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-chjqr" event={"ID":"33a5c021-23c3-4a97-b5f3-77fd6dcba1ab","Type":"ContainerStarted","Data":"90143bd188df252a12ebaece10ff43bd805ca65e0b3a851506a5ecef442477c4"}
Mar 18 08:49:26.717772 master-0 kubenswrapper[7620]: I0318 08:49:26.717489 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-chjqr" event={"ID":"33a5c021-23c3-4a97-b5f3-77fd6dcba1ab","Type":"ContainerStarted","Data":"01fc205ca60889e86b938272f49efc7613d39ee0f345e6249d36f7dbe33a148e"}
Mar 18 08:49:26.717772 master-0 kubenswrapper[7620]: I0318 08:49:26.717541 7620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-74ff5587d8-4g47k"
Mar 18 08:49:26.718183 master-0 kubenswrapper[7620]: I0318 08:49:26.718088 7620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-78cdbfbbdd-j26k9"
Mar 18 08:49:26.729446 master-0 kubenswrapper[7620]: I0318 08:49:26.729372 7620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/installer-1-master-0" podStartSLOduration=4.729350714 podStartE2EDuration="4.729350714s" podCreationTimestamp="2026-03-18 08:49:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 08:49:26.728246661 +0000 UTC m=+30.723028413" watchObservedRunningTime="2026-03-18 08:49:26.729350714 +0000 UTC m=+30.724132466"
Mar 18 08:49:26.744022 master-0 kubenswrapper[7620]: I0318 08:49:26.743976 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalogserver-certs\" (UniqueName: \"kubernetes.io/secret/43fbd379-dd1e-4287-bd76-fd3ec51cde43-catalogserver-certs\") pod \"catalogd-controller-manager-6864dc98f7-phjp8\" (UID: \"43fbd379-dd1e-4287-bd76-fd3ec51cde43\") " pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-phjp8"
Mar 18 08:49:26.752690 master-0 kubenswrapper[7620]: I0318 08:49:26.752660 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalogserver-certs\" (UniqueName: \"kubernetes.io/secret/43fbd379-dd1e-4287-bd76-fd3ec51cde43-catalogserver-certs\") pod \"catalogd-controller-manager-6864dc98f7-phjp8\" (UID: \"43fbd379-dd1e-4287-bd76-fd3ec51cde43\") " pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-phjp8"
Mar 18 08:49:26.772884 master-0 kubenswrapper[7620]: I0318 08:49:26.772535 7620 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-d8d8dd479-7jj4k"]
Mar 18 08:49:26.773615 master-0 kubenswrapper[7620]: I0318 08:49:26.773587 7620 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-78cdbfbbdd-j26k9"]
Mar 18 08:49:26.773875 master-0 kubenswrapper[7620]: I0318 08:49:26.773839 7620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-d8d8dd479-7jj4k"
Mar 18 08:49:26.773997 master-0 kubenswrapper[7620]: I0318 08:49:26.773958 7620 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-78cdbfbbdd-j26k9"]
Mar 18 08:49:26.776525 master-0 kubenswrapper[7620]: I0318 08:49:26.776454 7620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-d8d8dd479-7jj4k"]
Mar 18 08:49:26.777340 master-0 kubenswrapper[7620]: I0318 08:49:26.777302 7620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Mar 18 08:49:26.778624 master-0 kubenswrapper[7620]: I0318 08:49:26.778524 7620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Mar 18 08:49:26.780271 master-0 kubenswrapper[7620]: I0318 08:49:26.780217 7620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Mar 18 08:49:26.780714 master-0 kubenswrapper[7620]: I0318 08:49:26.780678 7620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Mar 18 08:49:26.782947 master-0 kubenswrapper[7620]: I0318 08:49:26.782931 7620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Mar 18 08:49:26.796704 master-0 kubenswrapper[7620]: I0318 08:49:26.796662 7620 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-74ff5587d8-4g47k"]
Mar 18 08:49:26.808867 master-0 kubenswrapper[7620]: I0318 08:49:26.808811 7620 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-74ff5587d8-4g47k"]
Mar 18 08:49:26.847065 master-0 kubenswrapper[7620]: I0318 08:49:26.846746 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c228d525-5f89-4e64-bfb4-d4e837adc914-config\") pod \"route-controller-manager-d8d8dd479-7jj4k\" (UID: \"c228d525-5f89-4e64-bfb4-d4e837adc914\") " pod="openshift-route-controller-manager/route-controller-manager-d8d8dd479-7jj4k"
Mar 18 08:49:26.847209 master-0 kubenswrapper[7620]: I0318 08:49:26.847077 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c228d525-5f89-4e64-bfb4-d4e837adc914-serving-cert\") pod \"route-controller-manager-d8d8dd479-7jj4k\" (UID: \"c228d525-5f89-4e64-bfb4-d4e837adc914\") " pod="openshift-route-controller-manager/route-controller-manager-d8d8dd479-7jj4k"
Mar 18 08:49:26.847209 master-0 kubenswrapper[7620]: I0318 08:49:26.847132 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4g44q\" (UniqueName: \"kubernetes.io/projected/c228d525-5f89-4e64-bfb4-d4e837adc914-kube-api-access-4g44q\") pod \"route-controller-manager-d8d8dd479-7jj4k\" (UID: \"c228d525-5f89-4e64-bfb4-d4e837adc914\") " pod="openshift-route-controller-manager/route-controller-manager-d8d8dd479-7jj4k"
Mar 18 08:49:26.847209 master-0 kubenswrapper[7620]: I0318 08:49:26.847189 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c228d525-5f89-4e64-bfb4-d4e837adc914-client-ca\") pod \"route-controller-manager-d8d8dd479-7jj4k\" (UID: \"c228d525-5f89-4e64-bfb4-d4e837adc914\") " pod="openshift-route-controller-manager/route-controller-manager-d8d8dd479-7jj4k"
Mar 18 08:49:26.847315 master-0 kubenswrapper[7620]: I0318 08:49:26.847220 7620 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a491fb9d-b7c2-4086-8dd6-ba5a77dc446c-client-ca\") on node \"master-0\" DevicePath \"\""
Mar 18 08:49:26.847315 master-0 kubenswrapper[7620]: I0318 08:49:26.847232 7620 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cb9b74f8-6ea7-40cd-8b69-342972ab8889-client-ca\") on node \"master-0\" DevicePath \"\""
Mar 18 08:49:26.847315 master-0 kubenswrapper[7620]: I0318 08:49:26.847242 7620 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a491fb9d-b7c2-4086-8dd6-ba5a77dc446c-serving-cert\") on node \"master-0\" DevicePath \"\""
Mar 18 08:49:26.874811 master-0 kubenswrapper[7620]: I0318 08:49:26.874776 7620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-phjp8"
Mar 18 08:49:26.948718 master-0 kubenswrapper[7620]: I0318 08:49:26.948669 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c228d525-5f89-4e64-bfb4-d4e837adc914-serving-cert\") pod \"route-controller-manager-d8d8dd479-7jj4k\" (UID: \"c228d525-5f89-4e64-bfb4-d4e837adc914\") " pod="openshift-route-controller-manager/route-controller-manager-d8d8dd479-7jj4k"
Mar 18 08:49:26.949085 master-0 kubenswrapper[7620]: I0318 08:49:26.949070 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4g44q\" (UniqueName: \"kubernetes.io/projected/c228d525-5f89-4e64-bfb4-d4e837adc914-kube-api-access-4g44q\") pod \"route-controller-manager-d8d8dd479-7jj4k\" (UID: \"c228d525-5f89-4e64-bfb4-d4e837adc914\") " pod="openshift-route-controller-manager/route-controller-manager-d8d8dd479-7jj4k"
Mar 18 08:49:26.949217 master-0 kubenswrapper[7620]: I0318 08:49:26.949204 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c228d525-5f89-4e64-bfb4-d4e837adc914-client-ca\") pod \"route-controller-manager-d8d8dd479-7jj4k\" (UID: \"c228d525-5f89-4e64-bfb4-d4e837adc914\") " pod="openshift-route-controller-manager/route-controller-manager-d8d8dd479-7jj4k"
Mar 18 08:49:26.949326 master-0 kubenswrapper[7620]: I0318 08:49:26.949314 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c228d525-5f89-4e64-bfb4-d4e837adc914-config\") pod \"route-controller-manager-d8d8dd479-7jj4k\" (UID: \"c228d525-5f89-4e64-bfb4-d4e837adc914\") " pod="openshift-route-controller-manager/route-controller-manager-d8d8dd479-7jj4k"
Mar 18 08:49:26.950436 master-0 kubenswrapper[7620]: I0318 08:49:26.950422 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c228d525-5f89-4e64-bfb4-d4e837adc914-config\") pod \"route-controller-manager-d8d8dd479-7jj4k\" (UID: \"c228d525-5f89-4e64-bfb4-d4e837adc914\") " pod="openshift-route-controller-manager/route-controller-manager-d8d8dd479-7jj4k"
Mar 18 08:49:26.954501 master-0 kubenswrapper[7620]: I0318 08:49:26.952072 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c228d525-5f89-4e64-bfb4-d4e837adc914-client-ca\") pod \"route-controller-manager-d8d8dd479-7jj4k\" (UID: \"c228d525-5f89-4e64-bfb4-d4e837adc914\") " pod="openshift-route-controller-manager/route-controller-manager-d8d8dd479-7jj4k"
Mar 18 08:49:26.958197 master-0 kubenswrapper[7620]: I0318 08:49:26.958154 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c228d525-5f89-4e64-bfb4-d4e837adc914-serving-cert\") pod \"route-controller-manager-d8d8dd479-7jj4k\" (UID: \"c228d525-5f89-4e64-bfb4-d4e837adc914\") " pod="openshift-route-controller-manager/route-controller-manager-d8d8dd479-7jj4k"
Mar 18 08:49:26.967518 master-0 kubenswrapper[7620]: I0318 08:49:26.967447 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4g44q\" (UniqueName: \"kubernetes.io/projected/c228d525-5f89-4e64-bfb4-d4e837adc914-kube-api-access-4g44q\") pod \"route-controller-manager-d8d8dd479-7jj4k\" (UID: \"c228d525-5f89-4e64-bfb4-d4e837adc914\") " pod="openshift-route-controller-manager/route-controller-manager-d8d8dd479-7jj4k"
Mar 18 08:49:27.108791 master-0 kubenswrapper[7620]: I0318 08:49:27.108576 7620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-d8d8dd479-7jj4k"
Mar 18 08:49:27.646064 master-0 kubenswrapper[7620]: I0318 08:49:27.644889 7620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-catalogd/catalogd-controller-manager-6864dc98f7-phjp8"]
Mar 18 08:49:27.653985 master-0 kubenswrapper[7620]: I0318 08:49:27.652214 7620 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-scheduler/installer-1-master-0"]
Mar 18 08:49:27.721996 master-0 kubenswrapper[7620]: I0318 08:49:27.721642 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-phjp8" event={"ID":"43fbd379-dd1e-4287-bd76-fd3ec51cde43","Type":"ContainerStarted","Data":"6f40c8c2653002ea6e916a625294f3f884745ae3fd33ab733118256908cbb925"}
Mar 18 08:49:27.733141 master-0 kubenswrapper[7620]: I0318 08:49:27.724455 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/tuned-zzqc6" event={"ID":"f826efe0-60a1-4465-b8d0-d4069ed507a1","Type":"ContainerStarted","Data":"bd8df09b3d40d8724a2c10984cab0e740b4c3bee24eff013a5b0f567303ea479"}
Mar 18 08:49:27.733141 master-0 kubenswrapper[7620]: I0318 08:49:27.724510 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/tuned-zzqc6" event={"ID":"f826efe0-60a1-4465-b8d0-d4069ed507a1","Type":"ContainerStarted","Data":"a4e62715769fe059f202ebd8f45a7d9a9cadff1b54a7a67c61a5164329d5818f"}
Mar 18 08:49:27.733141 master-0 kubenswrapper[7620]: I0318 08:49:27.724763 7620 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-scheduler/installer-1-master-0" podUID="e9a3f4dd-913d-4707-84c5-d64ead736f0f" containerName="installer" containerID="cri-o://5e0c3ea7554f76fe478ba87238a8f52a7e84e0ca4323bf58986273a5880e93c2" gracePeriod=30
Mar 18 08:49:27.744822 master-0 kubenswrapper[7620]: I0318 08:49:27.744750 7620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-node-tuning-operator/tuned-zzqc6" podStartSLOduration=2.7447209040000002 podStartE2EDuration="2.744720904s" podCreationTimestamp="2026-03-18 08:49:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 08:49:27.744367043 +0000 UTC m=+31.739148795" watchObservedRunningTime="2026-03-18 08:49:27.744720904 +0000 UTC m=+31.739502656"
Mar 18 08:49:27.816046 master-0 kubenswrapper[7620]: I0318 08:49:27.815958 7620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-d8d8dd479-7jj4k"]
Mar 18 08:49:27.819973 master-0 kubenswrapper[7620]: I0318 08:49:27.819109 7620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-556c8fbcff-5shs8"]
Mar 18 08:49:27.889418 master-0 kubenswrapper[7620]: W0318 08:49:27.886482 7620 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2700f537_8f31_4380_a527_3e697a8122cc.slice/crio-26ecaeebed65d3cea64cdc63150668e13ecd2fef68a18e11955a52673f9e9975 WatchSource:0}: Error finding container 26ecaeebed65d3cea64cdc63150668e13ecd2fef68a18e11955a52673f9e9975: Status 404 returned error can't find the container with id 26ecaeebed65d3cea64cdc63150668e13ecd2fef68a18e11955a52673f9e9975
Mar 18 08:49:28.233106 master-0 kubenswrapper[7620]: I0318 08:49:28.233064 7620 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a491fb9d-b7c2-4086-8dd6-ba5a77dc446c" path="/var/lib/kubelet/pods/a491fb9d-b7c2-4086-8dd6-ba5a77dc446c/volumes"
Mar 18 08:49:28.233814 master-0 kubenswrapper[7620]: I0318 08:49:28.233795 7620 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cb9b74f8-6ea7-40cd-8b69-342972ab8889" path="/var/lib/kubelet/pods/cb9b74f8-6ea7-40cd-8b69-342972ab8889/volumes"
Mar 18 08:49:28.733540 master-0 kubenswrapper[7620]: I0318 08:49:28.733266 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-d8d8dd479-7jj4k" event={"ID":"c228d525-5f89-4e64-bfb4-d4e837adc914","Type":"ContainerStarted","Data":"204acba76d27fe2916538e0022ca82c52cb428de76a6d66e0ad5f9b686ea78aa"}
Mar 18 08:49:28.735900 master-0 kubenswrapper[7620]: I0318 08:49:28.734455 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-556c8fbcff-5shs8" event={"ID":"2700f537-8f31-4380-a527-3e697a8122cc","Type":"ContainerStarted","Data":"26ecaeebed65d3cea64cdc63150668e13ecd2fef68a18e11955a52673f9e9975"}
Mar 18 08:49:28.736131 master-0 kubenswrapper[7620]: I0318 08:49:28.736085 7620 generic.go:334] "Generic (PLEG): container finished" podID="b5f9f50b-e7b4-4b81-864b-349303f21447" containerID="589683df05fefda7629bb4e428ec6a4f619c8b88cea31f43af821234a93ed5bc" exitCode=0
Mar 18 08:49:28.736265 master-0 kubenswrapper[7620]: I0318 08:49:28.736197 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-7bb69b5c5c-djsr9" event={"ID":"b5f9f50b-e7b4-4b81-864b-349303f21447","Type":"ContainerDied","Data":"589683df05fefda7629bb4e428ec6a4f619c8b88cea31f43af821234a93ed5bc"}
Mar 18 08:49:28.758649 master-0 kubenswrapper[7620]: I0318 08:49:28.754554 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-chjqr" event={"ID":"33a5c021-23c3-4a97-b5f3-77fd6dcba1ab","Type":"ContainerStarted","Data":"e2fcc7a4d6dbfd86662e13aa47175b7098ad878c5514863b94465fb37fba3859"}
Mar 18 08:49:28.758649 master-0 kubenswrapper[7620]: I0318 08:49:28.754830 7620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-chjqr"
Mar 18 08:49:28.766443 master-0 kubenswrapper[7620]: I0318 08:49:28.764531 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-phjp8" event={"ID":"43fbd379-dd1e-4287-bd76-fd3ec51cde43","Type":"ContainerStarted","Data":"c87e465727f96804a91f8100c6f9f30efed35b12da82808b53f4872a9351ab90"}
Mar 18 08:49:28.766443 master-0 kubenswrapper[7620]: I0318 08:49:28.764610 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-phjp8" event={"ID":"43fbd379-dd1e-4287-bd76-fd3ec51cde43","Type":"ContainerStarted","Data":"d50a3f82b012286c5b3297047eb757c06794954c4d488429e7b15f6d773db1e4"}
Mar 18 08:49:28.766443 master-0 kubenswrapper[7620]: I0318 08:49:28.764822 7620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-phjp8"
Mar 18 08:49:28.804306 master-0 kubenswrapper[7620]: I0318 08:49:28.804127 7620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-chjqr" podStartSLOduration=3.804100814 podStartE2EDuration="3.804100814s" podCreationTimestamp="2026-03-18 08:49:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 08:49:28.780358636 +0000 UTC m=+32.775140398" watchObservedRunningTime="2026-03-18 08:49:28.804100814 +0000 UTC m=+32.798882576"
Mar 18 08:49:28.805364 master-0 kubenswrapper[7620]: I0318 08:49:28.805275 7620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-phjp8" podStartSLOduration=4.805267849 podStartE2EDuration="4.805267849s" podCreationTimestamp="2026-03-18 08:49:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 08:49:28.803325341 +0000 UTC m=+32.798107093" watchObservedRunningTime="2026-03-18 08:49:28.805267849 +0000 UTC m=+32.800049611"
Mar 18 08:49:29.092369 master-0 kubenswrapper[7620]: I0318 08:49:29.092009 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d858bfd6-e69b-4c93-a6d7-95cc0fc3ca29-metrics-certs\") pod \"network-metrics-daemon-6x85n\" (UID: \"d858bfd6-e69b-4c93-a6d7-95cc0fc3ca29\") " pod="openshift-multus/network-metrics-daemon-6x85n"
Mar 18 08:49:29.092369 master-0 kubenswrapper[7620]: I0318 08:49:29.092376 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b065df33-7911-456e-b3a2-1f8c8d53e053-srv-cert\") pod \"catalog-operator-68f85b4d6c-swdsh\" (UID: \"b065df33-7911-456e-b3a2-1f8c8d53e053\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-swdsh"
Mar 18 08:49:29.092605 master-0 kubenswrapper[7620]: I0318 08:49:29.092415 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/34f2ec5e-cb68-415b-a9f2-5b7f10fa9bbe-marketplace-operator-metrics\") pod \"marketplace-operator-89ccd998f-bcwsv\" (UID: \"34f2ec5e-cb68-415b-a9f2-5b7f10fa9bbe\") " pod="openshift-marketplace/marketplace-operator-89ccd998f-bcwsv"
Mar 18 08:49:29.092605 master-0 kubenswrapper[7620]: I0318 08:49:29.092436 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/7962fb40-1170-4c00-b1bf-92966aeae807-image-registry-operator-tls\") pod \"cluster-image-registry-operator-5549dc66cb-vxsth\" (UID: \"7962fb40-1170-4c00-b1bf-92966aeae807\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-vxsth"
Mar 18 08:49:29.092605 master-0 kubenswrapper[7620]: I0318 08:49:29.092459 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/3d9fe248-ba87-47e3-911a-1b2b112b5683-srv-cert\") pod \"olm-operator-5c9796789-sl5kr\" (UID: \"3d9fe248-ba87-47e3-911a-1b2b112b5683\") " pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-sl5kr"
Mar 18 08:49:29.092605 master-0 kubenswrapper[7620]: I0318 08:49:29.092478 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/e025d334-20e7-491f-8027-194251398747-metrics-tls\") pod \"dns-operator-9c5679d8f-b9pn7\" (UID: \"e025d334-20e7-491f-8027-194251398747\") " pod="openshift-dns-operator/dns-operator-9c5679d8f-b9pn7"
Mar 18 08:49:29.092605 master-0 kubenswrapper[7620]: I0318 08:49:29.092498 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/59d50dd5-6793-4f96-a769-31e086ecc7e4-package-server-manager-serving-cert\") pod \"package-server-manager-7b95f86987-q8ff6\" (UID: \"59d50dd5-6793-4f96-a769-31e086ecc7e4\") " pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-q8ff6"
Mar 18 08:49:29.092605 master-0 kubenswrapper[7620]: I0318 08:49:29.092522 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/94e9e4fc-bfaa-4ba5-ad6d-fe76b91932e9-metrics-tls\") pod \"ingress-operator-66b84d69b-7h94d\" (UID: \"94e9e4fc-bfaa-4ba5-ad6d-fe76b91932e9\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-7h94d"
Mar 18 08:49:29.092605 master-0 kubenswrapper[7620]: I0318 08:49:29.092540 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/e7b72267-fc08-41ed-a92b-9fca7372aba6-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-58845fbb57-nc7hf\" (UID: \"e7b72267-fc08-41ed-a92b-9fca7372aba6\") " pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-nc7hf"
Mar 18 08:49:29.092605 master-0 kubenswrapper[7620]: I0318 08:49:29.092559 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/159a26f5-3cfc-4db2-88e9-bff5d8a613fc-webhook-certs\") pod \"multus-admission-controller-5dbbb8b86f-2cf64\" (UID: \"159a26f5-3cfc-4db2-88e9-bff5d8a613fc\") " pod="openshift-multus/multus-admission-controller-5dbbb8b86f-2cf64"
Mar 18 08:49:29.102110 master-0 kubenswrapper[7620]: I0318 08:49:29.101328 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/e7b72267-fc08-41ed-a92b-9fca7372aba6-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-58845fbb57-nc7hf\" (UID: \"e7b72267-fc08-41ed-a92b-9fca7372aba6\") " pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-nc7hf"
Mar 18 08:49:29.102110 master-0 kubenswrapper[7620]: I0318 08:49:29.101446 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/34f2ec5e-cb68-415b-a9f2-5b7f10fa9bbe-marketplace-operator-metrics\") pod \"marketplace-operator-89ccd998f-bcwsv\" (UID: \"34f2ec5e-cb68-415b-a9f2-5b7f10fa9bbe\") " pod="openshift-marketplace/marketplace-operator-89ccd998f-bcwsv"
Mar 18 08:49:29.102110 master-0 kubenswrapper[7620]: I0318 08:49:29.101764 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/159a26f5-3cfc-4db2-88e9-bff5d8a613fc-webhook-certs\") pod \"multus-admission-controller-5dbbb8b86f-2cf64\" (UID: \"159a26f5-3cfc-4db2-88e9-bff5d8a613fc\") " pod="openshift-multus/multus-admission-controller-5dbbb8b86f-2cf64"
Mar 18 08:49:29.103030 master-0 kubenswrapper[7620]: I0318 08:49:29.102994 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/7962fb40-1170-4c00-b1bf-92966aeae807-image-registry-operator-tls\") pod \"cluster-image-registry-operator-5549dc66cb-vxsth\" (UID: \"7962fb40-1170-4c00-b1bf-92966aeae807\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-vxsth"
Mar 18 08:49:29.103651 master-0 kubenswrapper[7620]: I0318 08:49:29.103611 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/59d50dd5-6793-4f96-a769-31e086ecc7e4-package-server-manager-serving-cert\") pod \"package-server-manager-7b95f86987-q8ff6\" (UID: \"59d50dd5-6793-4f96-a769-31e086ecc7e4\") " pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-q8ff6"
Mar 18 08:49:29.103810 master-0 kubenswrapper[7620]: I0318 08:49:29.103775 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/3d9fe248-ba87-47e3-911a-1b2b112b5683-srv-cert\") pod \"olm-operator-5c9796789-sl5kr\" (UID: \"3d9fe248-ba87-47e3-911a-1b2b112b5683\") " pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-sl5kr"
Mar 18 08:49:29.108974 master-0 kubenswrapper[7620]: I0318 08:49:29.108933 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/94e9e4fc-bfaa-4ba5-ad6d-fe76b91932e9-metrics-tls\") pod \"ingress-operator-66b84d69b-7h94d\" (UID: \"94e9e4fc-bfaa-4ba5-ad6d-fe76b91932e9\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-7h94d"
Mar 18 08:49:29.124931 master-0 kubenswrapper[7620]: I0318 08:49:29.109495 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d858bfd6-e69b-4c93-a6d7-95cc0fc3ca29-metrics-certs\") pod \"network-metrics-daemon-6x85n\" (UID: \"d858bfd6-e69b-4c93-a6d7-95cc0fc3ca29\") " pod="openshift-multus/network-metrics-daemon-6x85n"
Mar 18 08:49:29.124931 master-0 kubenswrapper[7620]: I0318 08:49:29.109954 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/e025d334-20e7-491f-8027-194251398747-metrics-tls\") pod \"dns-operator-9c5679d8f-b9pn7\" (UID: \"e025d334-20e7-491f-8027-194251398747\") " pod="openshift-dns-operator/dns-operator-9c5679d8f-b9pn7"
Mar 18 08:49:29.132418 master-0 kubenswrapper[7620]: I0318 08:49:29.125916 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b065df33-7911-456e-b3a2-1f8c8d53e053-srv-cert\") pod \"catalog-operator-68f85b4d6c-swdsh\" (UID: \"b065df33-7911-456e-b3a2-1f8c8d53e053\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-swdsh"
Mar 18 08:49:29.269149 master-0 kubenswrapper[7620]: I0318 08:49:29.269038 7620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-vxsth"
Mar 18 08:49:29.269889 master-0 kubenswrapper[7620]: I0318 08:49:29.269488 7620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-sl5kr"
Mar 18 08:49:29.269889 master-0 kubenswrapper[7620]: I0318 08:49:29.269761 7620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-nc7hf"
Mar 18 08:49:29.272871 master-0 kubenswrapper[7620]: I0318 08:49:29.272370 7620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-66b84d69b-7h94d"
Mar 18 08:49:29.279150 master-0 kubenswrapper[7620]: I0318 08:49:29.279112 7620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-swdsh"
Mar 18 08:49:29.282180 master-0 kubenswrapper[7620]: I0318 08:49:29.279179 7620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6x85n"
Mar 18 08:49:29.282180 master-0 kubenswrapper[7620]: I0318 08:49:29.279519 7620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-q8ff6"
Mar 18 08:49:29.282180 master-0 kubenswrapper[7620]: I0318 08:49:29.281362 7620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-89ccd998f-bcwsv"
Mar 18 08:49:29.284246 master-0 kubenswrapper[7620]: I0318 08:49:29.284206 7620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-5dbbb8b86f-2cf64"
Mar 18 08:49:29.285738 master-0 kubenswrapper[7620]: I0318 08:49:29.285638 7620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-9c5679d8f-b9pn7"
Mar 18 08:49:29.673437 master-0 kubenswrapper[7620]: I0318 08:49:29.673381 7620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-5549dc66cb-vxsth"]
Mar 18 08:49:29.757835 master-0 kubenswrapper[7620]: I0318 08:49:29.756010 7620 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-856b445d89-8cfpd"]
Mar 18 08:49:29.757835 master-0 kubenswrapper[7620]: I0318 08:49:29.756574 7620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-856b445d89-8cfpd"
Mar 18 08:49:29.767688 master-0 kubenswrapper[7620]: I0318 08:49:29.765871 7620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Mar 18 08:49:29.767688 master-0 kubenswrapper[7620]: I0318 08:49:29.766093 7620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Mar 18 08:49:29.767688 master-0 kubenswrapper[7620]: I0318 08:49:29.766368 7620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Mar 18 08:49:29.767688 master-0 kubenswrapper[7620]: I0318 08:49:29.766467 7620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Mar 18 08:49:29.767688 master-0 kubenswrapper[7620]: I0318 08:49:29.766707 7620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Mar 18 08:49:29.771352 master-0 kubenswrapper[7620]: I0318 08:49:29.771237 7620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Mar 18 08:49:29.778950 master-0 kubenswrapper[7620]: I0318 08:49:29.776121 7620 kubelet.go:2428] "SyncLoop UPDATE" source="api"
pods=["openshift-controller-manager/controller-manager-856b445d89-8cfpd"] Mar 18 08:49:29.792372 master-0 kubenswrapper[7620]: I0318 08:49:29.791112 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-vxsth" event={"ID":"7962fb40-1170-4c00-b1bf-92966aeae807","Type":"ContainerStarted","Data":"c28524ce9ebb8a89b175cc98bd1b1e9d4101033acc5d2f2a96632789a23b70d2"} Mar 18 08:49:29.809217 master-0 kubenswrapper[7620]: I0318 08:49:29.808051 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/56715c8c-c4dd-4912-b955-607a312bfcb6-config\") pod \"controller-manager-856b445d89-8cfpd\" (UID: \"56715c8c-c4dd-4912-b955-607a312bfcb6\") " pod="openshift-controller-manager/controller-manager-856b445d89-8cfpd" Mar 18 08:49:29.809217 master-0 kubenswrapper[7620]: I0318 08:49:29.808128 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xbkgl\" (UniqueName: \"kubernetes.io/projected/56715c8c-c4dd-4912-b955-607a312bfcb6-kube-api-access-xbkgl\") pod \"controller-manager-856b445d89-8cfpd\" (UID: \"56715c8c-c4dd-4912-b955-607a312bfcb6\") " pod="openshift-controller-manager/controller-manager-856b445d89-8cfpd" Mar 18 08:49:29.809217 master-0 kubenswrapper[7620]: I0318 08:49:29.808161 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/56715c8c-c4dd-4912-b955-607a312bfcb6-serving-cert\") pod \"controller-manager-856b445d89-8cfpd\" (UID: \"56715c8c-c4dd-4912-b955-607a312bfcb6\") " pod="openshift-controller-manager/controller-manager-856b445d89-8cfpd" Mar 18 08:49:29.809217 master-0 kubenswrapper[7620]: I0318 08:49:29.808199 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/56715c8c-c4dd-4912-b955-607a312bfcb6-proxy-ca-bundles\") pod \"controller-manager-856b445d89-8cfpd\" (UID: \"56715c8c-c4dd-4912-b955-607a312bfcb6\") " pod="openshift-controller-manager/controller-manager-856b445d89-8cfpd" Mar 18 08:49:29.809217 master-0 kubenswrapper[7620]: I0318 08:49:29.808232 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/56715c8c-c4dd-4912-b955-607a312bfcb6-client-ca\") pod \"controller-manager-856b445d89-8cfpd\" (UID: \"56715c8c-c4dd-4912-b955-607a312bfcb6\") " pod="openshift-controller-manager/controller-manager-856b445d89-8cfpd" Mar 18 08:49:29.809217 master-0 kubenswrapper[7620]: I0318 08:49:29.808520 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-7bb69b5c5c-djsr9" event={"ID":"b5f9f50b-e7b4-4b81-864b-349303f21447","Type":"ContainerStarted","Data":"34094c1fbf914db0579a3a49ab1bfdd690044d344da2e4bd457e36a79c6cb34a"} Mar 18 08:49:29.809217 master-0 kubenswrapper[7620]: I0318 08:49:29.808569 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-7bb69b5c5c-djsr9" event={"ID":"b5f9f50b-e7b4-4b81-864b-349303f21447","Type":"ContainerStarted","Data":"442b4a934be7c5383cd969644afa0555c85321a075c8d2a1950cea8b78d202a4"} Mar 18 08:49:29.839542 master-0 kubenswrapper[7620]: I0318 08:49:29.838783 7620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-7bb69b5c5c-djsr9" podStartSLOduration=8.647596613 podStartE2EDuration="14.838755719s" podCreationTimestamp="2026-03-18 08:49:15 +0000 UTC" firstStartedPulling="2026-03-18 08:49:21.298322376 +0000 UTC m=+25.293104168" lastFinishedPulling="2026-03-18 08:49:27.489481522 +0000 UTC m=+31.484263274" observedRunningTime="2026-03-18 08:49:29.838529012 +0000 UTC m=+33.833310764" watchObservedRunningTime="2026-03-18 08:49:29.838755719 +0000 UTC m=+33.833537481" 
Mar 18 08:49:29.909650 master-0 kubenswrapper[7620]: I0318 08:49:29.909519 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/56715c8c-c4dd-4912-b955-607a312bfcb6-config\") pod \"controller-manager-856b445d89-8cfpd\" (UID: \"56715c8c-c4dd-4912-b955-607a312bfcb6\") " pod="openshift-controller-manager/controller-manager-856b445d89-8cfpd" Mar 18 08:49:29.910482 master-0 kubenswrapper[7620]: I0318 08:49:29.910201 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xbkgl\" (UniqueName: \"kubernetes.io/projected/56715c8c-c4dd-4912-b955-607a312bfcb6-kube-api-access-xbkgl\") pod \"controller-manager-856b445d89-8cfpd\" (UID: \"56715c8c-c4dd-4912-b955-607a312bfcb6\") " pod="openshift-controller-manager/controller-manager-856b445d89-8cfpd" Mar 18 08:49:29.910482 master-0 kubenswrapper[7620]: I0318 08:49:29.910254 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/56715c8c-c4dd-4912-b955-607a312bfcb6-serving-cert\") pod \"controller-manager-856b445d89-8cfpd\" (UID: \"56715c8c-c4dd-4912-b955-607a312bfcb6\") " pod="openshift-controller-manager/controller-manager-856b445d89-8cfpd" Mar 18 08:49:29.910482 master-0 kubenswrapper[7620]: I0318 08:49:29.910337 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/56715c8c-c4dd-4912-b955-607a312bfcb6-proxy-ca-bundles\") pod \"controller-manager-856b445d89-8cfpd\" (UID: \"56715c8c-c4dd-4912-b955-607a312bfcb6\") " pod="openshift-controller-manager/controller-manager-856b445d89-8cfpd" Mar 18 08:49:29.910482 master-0 kubenswrapper[7620]: I0318 08:49:29.910465 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/56715c8c-c4dd-4912-b955-607a312bfcb6-client-ca\") pod 
\"controller-manager-856b445d89-8cfpd\" (UID: \"56715c8c-c4dd-4912-b955-607a312bfcb6\") " pod="openshift-controller-manager/controller-manager-856b445d89-8cfpd" Mar 18 08:49:29.910955 master-0 kubenswrapper[7620]: I0318 08:49:29.910896 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/56715c8c-c4dd-4912-b955-607a312bfcb6-config\") pod \"controller-manager-856b445d89-8cfpd\" (UID: \"56715c8c-c4dd-4912-b955-607a312bfcb6\") " pod="openshift-controller-manager/controller-manager-856b445d89-8cfpd" Mar 18 08:49:29.911486 master-0 kubenswrapper[7620]: I0318 08:49:29.911439 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/56715c8c-c4dd-4912-b955-607a312bfcb6-client-ca\") pod \"controller-manager-856b445d89-8cfpd\" (UID: \"56715c8c-c4dd-4912-b955-607a312bfcb6\") " pod="openshift-controller-manager/controller-manager-856b445d89-8cfpd" Mar 18 08:49:29.913153 master-0 kubenswrapper[7620]: I0318 08:49:29.913117 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/56715c8c-c4dd-4912-b955-607a312bfcb6-proxy-ca-bundles\") pod \"controller-manager-856b445d89-8cfpd\" (UID: \"56715c8c-c4dd-4912-b955-607a312bfcb6\") " pod="openshift-controller-manager/controller-manager-856b445d89-8cfpd" Mar 18 08:49:29.919905 master-0 kubenswrapper[7620]: I0318 08:49:29.919831 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/56715c8c-c4dd-4912-b955-607a312bfcb6-serving-cert\") pod \"controller-manager-856b445d89-8cfpd\" (UID: \"56715c8c-c4dd-4912-b955-607a312bfcb6\") " pod="openshift-controller-manager/controller-manager-856b445d89-8cfpd" Mar 18 08:49:29.927141 master-0 kubenswrapper[7620]: I0318 08:49:29.926954 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-xbkgl\" (UniqueName: \"kubernetes.io/projected/56715c8c-c4dd-4912-b955-607a312bfcb6-kube-api-access-xbkgl\") pod \"controller-manager-856b445d89-8cfpd\" (UID: \"56715c8c-c4dd-4912-b955-607a312bfcb6\") " pod="openshift-controller-manager/controller-manager-856b445d89-8cfpd" Mar 18 08:49:30.088985 master-0 kubenswrapper[7620]: I0318 08:49:30.088817 7620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/cluster-monitoring-operator-58845fbb57-nc7hf"] Mar 18 08:49:30.088985 master-0 kubenswrapper[7620]: I0318 08:49:30.088905 7620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-89ccd998f-bcwsv"] Mar 18 08:49:30.100943 master-0 kubenswrapper[7620]: I0318 08:49:30.100896 7620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-66b84d69b-7h94d"] Mar 18 08:49:30.112254 master-0 kubenswrapper[7620]: I0318 08:49:30.112195 7620 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-856b445d89-8cfpd" Mar 18 08:49:30.164478 master-0 kubenswrapper[7620]: I0318 08:49:30.164406 7620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-q8ff6"] Mar 18 08:49:30.267589 master-0 kubenswrapper[7620]: I0318 08:49:30.265534 7620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-swdsh"] Mar 18 08:49:30.269153 master-0 kubenswrapper[7620]: I0318 08:49:30.269123 7620 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/installer-2-master-0"] Mar 18 08:49:30.270010 master-0 kubenswrapper[7620]: I0318 08:49:30.269882 7620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-5c9796789-sl5kr"] Mar 18 08:49:30.270073 master-0 kubenswrapper[7620]: I0318 08:49:30.270016 7620 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/installer-2-master-0" Mar 18 08:49:30.278868 master-0 kubenswrapper[7620]: I0318 08:49:30.277931 7620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-5dbbb8b86f-2cf64"] Mar 18 08:49:30.289931 master-0 kubenswrapper[7620]: I0318 08:49:30.279977 7620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-2-master-0"] Mar 18 08:49:30.289931 master-0 kubenswrapper[7620]: I0318 08:49:30.281467 7620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-6x85n"] Mar 18 08:49:30.298919 master-0 kubenswrapper[7620]: I0318 08:49:30.294545 7620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-9c5679d8f-b9pn7"] Mar 18 08:49:30.344940 master-0 kubenswrapper[7620]: I0318 08:49:30.344751 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/5496fa70-0f35-4034-a4bf-1479718a684a-var-lock\") pod \"installer-2-master-0\" (UID: \"5496fa70-0f35-4034-a4bf-1479718a684a\") " pod="openshift-kube-scheduler/installer-2-master-0" Mar 18 08:49:30.345236 master-0 kubenswrapper[7620]: I0318 08:49:30.345222 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5496fa70-0f35-4034-a4bf-1479718a684a-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"5496fa70-0f35-4034-a4bf-1479718a684a\") " pod="openshift-kube-scheduler/installer-2-master-0" Mar 18 08:49:30.345323 master-0 kubenswrapper[7620]: I0318 08:49:30.345309 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5496fa70-0f35-4034-a4bf-1479718a684a-kube-api-access\") pod \"installer-2-master-0\" (UID: 
\"5496fa70-0f35-4034-a4bf-1479718a684a\") " pod="openshift-kube-scheduler/installer-2-master-0" Mar 18 08:49:30.447847 master-0 kubenswrapper[7620]: I0318 08:49:30.447679 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/5496fa70-0f35-4034-a4bf-1479718a684a-var-lock\") pod \"installer-2-master-0\" (UID: \"5496fa70-0f35-4034-a4bf-1479718a684a\") " pod="openshift-kube-scheduler/installer-2-master-0" Mar 18 08:49:30.447847 master-0 kubenswrapper[7620]: I0318 08:49:30.447821 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5496fa70-0f35-4034-a4bf-1479718a684a-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"5496fa70-0f35-4034-a4bf-1479718a684a\") " pod="openshift-kube-scheduler/installer-2-master-0" Mar 18 08:49:30.447847 master-0 kubenswrapper[7620]: I0318 08:49:30.447879 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/5496fa70-0f35-4034-a4bf-1479718a684a-var-lock\") pod \"installer-2-master-0\" (UID: \"5496fa70-0f35-4034-a4bf-1479718a684a\") " pod="openshift-kube-scheduler/installer-2-master-0" Mar 18 08:49:30.448314 master-0 kubenswrapper[7620]: I0318 08:49:30.447976 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5496fa70-0f35-4034-a4bf-1479718a684a-kube-api-access\") pod \"installer-2-master-0\" (UID: \"5496fa70-0f35-4034-a4bf-1479718a684a\") " pod="openshift-kube-scheduler/installer-2-master-0" Mar 18 08:49:30.448314 master-0 kubenswrapper[7620]: I0318 08:49:30.447997 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5496fa70-0f35-4034-a4bf-1479718a684a-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"5496fa70-0f35-4034-a4bf-1479718a684a\") " 
pod="openshift-kube-scheduler/installer-2-master-0" Mar 18 08:49:30.465114 master-0 kubenswrapper[7620]: I0318 08:49:30.465066 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5496fa70-0f35-4034-a4bf-1479718a684a-kube-api-access\") pod \"installer-2-master-0\" (UID: \"5496fa70-0f35-4034-a4bf-1479718a684a\") " pod="openshift-kube-scheduler/installer-2-master-0" Mar 18 08:49:30.595400 master-0 kubenswrapper[7620]: I0318 08:49:30.594936 7620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-2-master-0" Mar 18 08:49:30.846723 master-0 kubenswrapper[7620]: W0318 08:49:30.846020 7620 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3d9fe248_ba87_47e3_911a_1b2b112b5683.slice/crio-d9f6591fd179f080128bbdecaa328db0f824489c21d34724dd9ae09d41418d2c WatchSource:0}: Error finding container d9f6591fd179f080128bbdecaa328db0f824489c21d34724dd9ae09d41418d2c: Status 404 returned error can't find the container with id d9f6591fd179f080128bbdecaa328db0f824489c21d34724dd9ae09d41418d2c Mar 18 08:49:31.764137 master-0 kubenswrapper[7620]: I0318 08:49:31.760541 7620 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-cluster-version/cluster-version-operator-56d8475767-2xjqg"] Mar 18 08:49:31.764137 master-0 kubenswrapper[7620]: I0318 08:49:31.760827 7620 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-cluster-version/cluster-version-operator-56d8475767-2xjqg" podUID="3d0b7f60-c32e-48a6-b9e9-87c8f018367d" containerName="cluster-version-operator" containerID="cri-o://15e3021cb2dfbdd3656c892ad4c9383f0fbdf22535a1b291b4706db5c93981e8" gracePeriod=130 Mar 18 08:49:31.837806 master-0 kubenswrapper[7620]: I0318 08:49:31.837724 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-multus/multus-admission-controller-5dbbb8b86f-2cf64" event={"ID":"159a26f5-3cfc-4db2-88e9-bff5d8a613fc","Type":"ContainerStarted","Data":"c7ad11be2f6e88d66c43f7a470d644f901fa421f8c0602a3500be8ddd4c38ee6"} Mar 18 08:49:31.843319 master-0 kubenswrapper[7620]: I0318 08:49:31.843257 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-swdsh" event={"ID":"b065df33-7911-456e-b3a2-1f8c8d53e053","Type":"ContainerStarted","Data":"7d69a2aa0453ffd9d52f608b0f589cc8cbacbdbc94e468d5326ece0a3282eddd"} Mar 18 08:49:31.845911 master-0 kubenswrapper[7620]: I0318 08:49:31.845826 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-6x85n" event={"ID":"d858bfd6-e69b-4c93-a6d7-95cc0fc3ca29","Type":"ContainerStarted","Data":"8f214df22b3108e2647e81c2065b29247bcd16b9d799cc094aa75352fed33b39"} Mar 18 08:49:31.847747 master-0 kubenswrapper[7620]: I0318 08:49:31.847639 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-sl5kr" event={"ID":"3d9fe248-ba87-47e3-911a-1b2b112b5683","Type":"ContainerStarted","Data":"d9f6591fd179f080128bbdecaa328db0f824489c21d34724dd9ae09d41418d2c"} Mar 18 08:49:31.849295 master-0 kubenswrapper[7620]: I0318 08:49:31.849247 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-89ccd998f-bcwsv" event={"ID":"34f2ec5e-cb68-415b-a9f2-5b7f10fa9bbe","Type":"ContainerStarted","Data":"55b41391fdb5cf271845bf26cd3e0f895b338fd5cf036e303350901534473728"} Mar 18 08:49:31.853794 master-0 kubenswrapper[7620]: I0318 08:49:31.852228 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-66b84d69b-7h94d" event={"ID":"94e9e4fc-bfaa-4ba5-ad6d-fe76b91932e9","Type":"ContainerStarted","Data":"fb93ae4071b146962466e96a3daecbc8c529d6e1a15ad1edfa1a28da5c544561"} Mar 18 08:49:31.859542 master-0 
kubenswrapper[7620]: I0318 08:49:31.859467 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-nc7hf" event={"ID":"e7b72267-fc08-41ed-a92b-9fca7372aba6","Type":"ContainerStarted","Data":"b273b68e51f7dadf9df698a73d4ce02f6814882dc729b2c52672e829413c2a75"} Mar 18 08:49:31.862796 master-0 kubenswrapper[7620]: I0318 08:49:31.862712 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-q8ff6" event={"ID":"59d50dd5-6793-4f96-a769-31e086ecc7e4","Type":"ContainerStarted","Data":"ea87280c188a798da95cc9ce18e125174ff632d343ee3e8d6a214207d7770e1e"} Mar 18 08:49:31.870372 master-0 kubenswrapper[7620]: I0318 08:49:31.870304 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-9c5679d8f-b9pn7" event={"ID":"e025d334-20e7-491f-8027-194251398747","Type":"ContainerStarted","Data":"176bf98298dce9ebeff9e6cf55f250f7b8583bdf4845838e239879972b0093f1"} Mar 18 08:49:32.046996 master-0 kubenswrapper[7620]: I0318 08:49:32.046944 7620 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-56d8475767-2xjqg" Mar 18 08:49:32.167045 master-0 kubenswrapper[7620]: I0318 08:49:32.166996 7620 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-1-master-0"] Mar 18 08:49:32.167397 master-0 kubenswrapper[7620]: E0318 08:49:32.167210 7620 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d0b7f60-c32e-48a6-b9e9-87c8f018367d" containerName="cluster-version-operator" Mar 18 08:49:32.167397 master-0 kubenswrapper[7620]: I0318 08:49:32.167228 7620 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d0b7f60-c32e-48a6-b9e9-87c8f018367d" containerName="cluster-version-operator" Mar 18 08:49:32.167397 master-0 kubenswrapper[7620]: I0318 08:49:32.167320 7620 memory_manager.go:354] "RemoveStaleState removing state" podUID="3d0b7f60-c32e-48a6-b9e9-87c8f018367d" containerName="cluster-version-operator" Mar 18 08:49:32.167772 master-0 kubenswrapper[7620]: I0318 08:49:32.167744 7620 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-1-master-0" Mar 18 08:49:32.172384 master-0 kubenswrapper[7620]: I0318 08:49:32.172342 7620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Mar 18 08:49:32.177245 master-0 kubenswrapper[7620]: I0318 08:49:32.177208 7620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3d0b7f60-c32e-48a6-b9e9-87c8f018367d-kube-api-access\") pod \"3d0b7f60-c32e-48a6-b9e9-87c8f018367d\" (UID: \"3d0b7f60-c32e-48a6-b9e9-87c8f018367d\") " Mar 18 08:49:32.177301 master-0 kubenswrapper[7620]: I0318 08:49:32.177264 7620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/3d0b7f60-c32e-48a6-b9e9-87c8f018367d-etc-cvo-updatepayloads\") pod \"3d0b7f60-c32e-48a6-b9e9-87c8f018367d\" (UID: \"3d0b7f60-c32e-48a6-b9e9-87c8f018367d\") " Mar 18 08:49:32.177301 master-0 kubenswrapper[7620]: I0318 08:49:32.177295 7620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/3d0b7f60-c32e-48a6-b9e9-87c8f018367d-service-ca\") pod \"3d0b7f60-c32e-48a6-b9e9-87c8f018367d\" (UID: \"3d0b7f60-c32e-48a6-b9e9-87c8f018367d\") " Mar 18 08:49:32.177370 master-0 kubenswrapper[7620]: I0318 08:49:32.177336 7620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/3d0b7f60-c32e-48a6-b9e9-87c8f018367d-etc-ssl-certs\") pod \"3d0b7f60-c32e-48a6-b9e9-87c8f018367d\" (UID: \"3d0b7f60-c32e-48a6-b9e9-87c8f018367d\") " Mar 18 08:49:32.177402 master-0 kubenswrapper[7620]: I0318 08:49:32.177385 7620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/3d0b7f60-c32e-48a6-b9e9-87c8f018367d-serving-cert\") pod \"3d0b7f60-c32e-48a6-b9e9-87c8f018367d\" (UID: \"3d0b7f60-c32e-48a6-b9e9-87c8f018367d\") " Mar 18 08:49:32.177434 master-0 kubenswrapper[7620]: I0318 08:49:32.177387 7620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3d0b7f60-c32e-48a6-b9e9-87c8f018367d-etc-cvo-updatepayloads" (OuterVolumeSpecName: "etc-cvo-updatepayloads") pod "3d0b7f60-c32e-48a6-b9e9-87c8f018367d" (UID: "3d0b7f60-c32e-48a6-b9e9-87c8f018367d"). InnerVolumeSpecName "etc-cvo-updatepayloads". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 08:49:32.177518 master-0 kubenswrapper[7620]: I0318 08:49:32.177478 7620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3d0b7f60-c32e-48a6-b9e9-87c8f018367d-etc-ssl-certs" (OuterVolumeSpecName: "etc-ssl-certs") pod "3d0b7f60-c32e-48a6-b9e9-87c8f018367d" (UID: "3d0b7f60-c32e-48a6-b9e9-87c8f018367d"). InnerVolumeSpecName "etc-ssl-certs". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 08:49:32.177644 master-0 kubenswrapper[7620]: I0318 08:49:32.177616 7620 reconciler_common.go:293] "Volume detached for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/3d0b7f60-c32e-48a6-b9e9-87c8f018367d-etc-cvo-updatepayloads\") on node \"master-0\" DevicePath \"\"" Mar 18 08:49:32.177644 master-0 kubenswrapper[7620]: I0318 08:49:32.177639 7620 reconciler_common.go:293] "Volume detached for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/3d0b7f60-c32e-48a6-b9e9-87c8f018367d-etc-ssl-certs\") on node \"master-0\" DevicePath \"\"" Mar 18 08:49:32.180709 master-0 kubenswrapper[7620]: I0318 08:49:32.178918 7620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3d0b7f60-c32e-48a6-b9e9-87c8f018367d-service-ca" (OuterVolumeSpecName: "service-ca") pod "3d0b7f60-c32e-48a6-b9e9-87c8f018367d" (UID: "3d0b7f60-c32e-48a6-b9e9-87c8f018367d"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 08:49:32.183707 master-0 kubenswrapper[7620]: I0318 08:49:32.183334 7620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-1-master-0"] Mar 18 08:49:32.189688 master-0 kubenswrapper[7620]: I0318 08:49:32.189620 7620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3d0b7f60-c32e-48a6-b9e9-87c8f018367d-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "3d0b7f60-c32e-48a6-b9e9-87c8f018367d" (UID: "3d0b7f60-c32e-48a6-b9e9-87c8f018367d"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 18 08:49:32.206705 master-0 kubenswrapper[7620]: I0318 08:49:32.206662 7620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3d0b7f60-c32e-48a6-b9e9-87c8f018367d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "3d0b7f60-c32e-48a6-b9e9-87c8f018367d" (UID: "3d0b7f60-c32e-48a6-b9e9-87c8f018367d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 18 08:49:32.280968 master-0 kubenswrapper[7620]: I0318 08:49:32.280295 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1edfa49b-d0e7-4324-aace-b115b41ddae0-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"1edfa49b-d0e7-4324-aace-b115b41ddae0\") " pod="openshift-kube-apiserver/installer-1-master-0"
Mar 18 08:49:32.280968 master-0 kubenswrapper[7620]: I0318 08:49:32.280684 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/1edfa49b-d0e7-4324-aace-b115b41ddae0-var-lock\") pod \"installer-1-master-0\" (UID: \"1edfa49b-d0e7-4324-aace-b115b41ddae0\") " pod="openshift-kube-apiserver/installer-1-master-0"
Mar 18 08:49:32.280968 master-0 kubenswrapper[7620]: I0318 08:49:32.280794 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1edfa49b-d0e7-4324-aace-b115b41ddae0-kube-api-access\") pod \"installer-1-master-0\" (UID: \"1edfa49b-d0e7-4324-aace-b115b41ddae0\") " pod="openshift-kube-apiserver/installer-1-master-0"
Mar 18 08:49:32.280968 master-0 kubenswrapper[7620]: I0318 08:49:32.280982 7620 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3d0b7f60-c32e-48a6-b9e9-87c8f018367d-kube-api-access\") on node \"master-0\" DevicePath \"\""
Mar 18 08:49:32.280968 master-0 kubenswrapper[7620]: I0318 08:49:32.280997 7620 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/3d0b7f60-c32e-48a6-b9e9-87c8f018367d-service-ca\") on node \"master-0\" DevicePath \"\""
Mar 18 08:49:32.280968 master-0 kubenswrapper[7620]: I0318 08:49:32.281008 7620 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3d0b7f60-c32e-48a6-b9e9-87c8f018367d-serving-cert\") on node \"master-0\" DevicePath \"\""
Mar 18 08:49:32.306049 master-0 kubenswrapper[7620]: I0318 08:49:32.305421 7620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-856b445d89-8cfpd"]
Mar 18 08:49:32.382001 master-0 kubenswrapper[7620]: I0318 08:49:32.381935 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1edfa49b-d0e7-4324-aace-b115b41ddae0-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"1edfa49b-d0e7-4324-aace-b115b41ddae0\") " pod="openshift-kube-apiserver/installer-1-master-0"
Mar 18 08:49:32.382261 master-0 kubenswrapper[7620]: I0318 08:49:32.382014 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/1edfa49b-d0e7-4324-aace-b115b41ddae0-var-lock\") pod \"installer-1-master-0\" (UID: \"1edfa49b-d0e7-4324-aace-b115b41ddae0\") " pod="openshift-kube-apiserver/installer-1-master-0"
Mar 18 08:49:32.382261 master-0 kubenswrapper[7620]: I0318 08:49:32.382048 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1edfa49b-d0e7-4324-aace-b115b41ddae0-kube-api-access\") pod \"installer-1-master-0\" (UID: \"1edfa49b-d0e7-4324-aace-b115b41ddae0\") " pod="openshift-kube-apiserver/installer-1-master-0"
Mar 18 08:49:32.391876 master-0 kubenswrapper[7620]: I0318 08:49:32.382473 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1edfa49b-d0e7-4324-aace-b115b41ddae0-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"1edfa49b-d0e7-4324-aace-b115b41ddae0\") " pod="openshift-kube-apiserver/installer-1-master-0"
Mar 18 08:49:32.391876 master-0 kubenswrapper[7620]: I0318 08:49:32.382516 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/1edfa49b-d0e7-4324-aace-b115b41ddae0-var-lock\") pod \"installer-1-master-0\" (UID: \"1edfa49b-d0e7-4324-aace-b115b41ddae0\") " pod="openshift-kube-apiserver/installer-1-master-0"
Mar 18 08:49:32.405726 master-0 kubenswrapper[7620]: I0318 08:49:32.405676 7620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-2-master-0"]
Mar 18 08:49:32.440689 master-0 kubenswrapper[7620]: I0318 08:49:32.440084 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1edfa49b-d0e7-4324-aace-b115b41ddae0-kube-api-access\") pod \"installer-1-master-0\" (UID: \"1edfa49b-d0e7-4324-aace-b115b41ddae0\") " pod="openshift-kube-apiserver/installer-1-master-0"
Mar 18 08:49:32.493013 master-0 kubenswrapper[7620]: I0318 08:49:32.492951 7620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-1-master-0"
Mar 18 08:49:32.970576 master-0 kubenswrapper[7620]: I0318 08:49:32.970501 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-856b445d89-8cfpd" event={"ID":"56715c8c-c4dd-4912-b955-607a312bfcb6","Type":"ContainerStarted","Data":"927ee5cc9486163ad344533e90c42ad0962670bace804e14c58b10d4a343dc45"}
Mar 18 08:49:32.972928 master-0 kubenswrapper[7620]: I0318 08:49:32.972823 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-q8ff6" event={"ID":"59d50dd5-6793-4f96-a769-31e086ecc7e4","Type":"ContainerStarted","Data":"45b569c3ea9ad96255f490fdc2f89f56b2b2281b82c08e9724dd46ba8c1e91db"}
Mar 18 08:49:32.975385 master-0 kubenswrapper[7620]: I0318 08:49:32.975352 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-d8d8dd479-7jj4k" event={"ID":"c228d525-5f89-4e64-bfb4-d4e837adc914","Type":"ContainerStarted","Data":"ef10cd29586147f010847b50ad7cc6d256bd7d1e25326c4dfc45c8258c15465a"}
Mar 18 08:49:32.975777 master-0 kubenswrapper[7620]: I0318 08:49:32.975731 7620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-d8d8dd479-7jj4k"
Mar 18 08:49:33.007641 master-0 kubenswrapper[7620]: I0318 08:49:32.992302 7620 generic.go:334] "Generic (PLEG): container finished" podID="2700f537-8f31-4380-a527-3e697a8122cc" containerID="fa4ea33fa46744eacabcd0bcd52fb003649aa1cd4700008b10cea57f832bf122" exitCode=0
Mar 18 08:49:33.007641 master-0 kubenswrapper[7620]: I0318 08:49:32.992416 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-556c8fbcff-5shs8" event={"ID":"2700f537-8f31-4380-a527-3e697a8122cc","Type":"ContainerDied","Data":"fa4ea33fa46744eacabcd0bcd52fb003649aa1cd4700008b10cea57f832bf122"}
Mar 18 08:49:33.007641 master-0 kubenswrapper[7620]: I0318 08:49:32.993932 7620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-d8d8dd479-7jj4k"
Mar 18 08:49:33.007641 master-0 kubenswrapper[7620]: I0318 08:49:32.995058 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-2-master-0" event={"ID":"5496fa70-0f35-4034-a4bf-1479718a684a","Type":"ContainerStarted","Data":"aea287e1ab30cf2929aa7b827701f645b74f06f1e24a804b1082378303991422"}
Mar 18 08:49:33.007641 master-0 kubenswrapper[7620]: I0318 08:49:33.003949 7620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-d8d8dd479-7jj4k" podStartSLOduration=4.059704033 podStartE2EDuration="8.00392546s" podCreationTimestamp="2026-03-18 08:49:25 +0000 UTC" firstStartedPulling="2026-03-18 08:49:27.890442209 +0000 UTC m=+31.885223961" lastFinishedPulling="2026-03-18 08:49:31.834663636 +0000 UTC m=+35.829445388" observedRunningTime="2026-03-18 08:49:33.002947212 +0000 UTC m=+36.997728974" watchObservedRunningTime="2026-03-18 08:49:33.00392546 +0000 UTC m=+36.998707212"
Mar 18 08:49:33.007641 master-0 kubenswrapper[7620]: I0318 08:49:33.004524 7620 generic.go:334] "Generic (PLEG): container finished" podID="3d0b7f60-c32e-48a6-b9e9-87c8f018367d" containerID="15e3021cb2dfbdd3656c892ad4c9383f0fbdf22535a1b291b4706db5c93981e8" exitCode=0
Mar 18 08:49:33.007641 master-0 kubenswrapper[7620]: I0318 08:49:33.004561 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-56d8475767-2xjqg" event={"ID":"3d0b7f60-c32e-48a6-b9e9-87c8f018367d","Type":"ContainerDied","Data":"15e3021cb2dfbdd3656c892ad4c9383f0fbdf22535a1b291b4706db5c93981e8"}
Mar 18 08:49:33.007641 master-0 kubenswrapper[7620]: I0318 08:49:33.004595 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-56d8475767-2xjqg" event={"ID":"3d0b7f60-c32e-48a6-b9e9-87c8f018367d","Type":"ContainerDied","Data":"ac096d70d81e7801442d61c8ffa707b3be42916eaae60f62fcab780efe8be51f"}
Mar 18 08:49:33.007641 master-0 kubenswrapper[7620]: I0318 08:49:33.004618 7620 scope.go:117] "RemoveContainer" containerID="15e3021cb2dfbdd3656c892ad4c9383f0fbdf22535a1b291b4706db5c93981e8"
Mar 18 08:49:33.007641 master-0 kubenswrapper[7620]: I0318 08:49:33.004752 7620 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-56d8475767-2xjqg"
Mar 18 08:49:33.047697 master-0 kubenswrapper[7620]: I0318 08:49:33.047656 7620 scope.go:117] "RemoveContainer" containerID="15e3021cb2dfbdd3656c892ad4c9383f0fbdf22535a1b291b4706db5c93981e8"
Mar 18 08:49:33.051169 master-0 kubenswrapper[7620]: E0318 08:49:33.051096 7620 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"15e3021cb2dfbdd3656c892ad4c9383f0fbdf22535a1b291b4706db5c93981e8\": container with ID starting with 15e3021cb2dfbdd3656c892ad4c9383f0fbdf22535a1b291b4706db5c93981e8 not found: ID does not exist" containerID="15e3021cb2dfbdd3656c892ad4c9383f0fbdf22535a1b291b4706db5c93981e8"
Mar 18 08:49:33.051307 master-0 kubenswrapper[7620]: I0318 08:49:33.051175 7620 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"15e3021cb2dfbdd3656c892ad4c9383f0fbdf22535a1b291b4706db5c93981e8"} err="failed to get container status \"15e3021cb2dfbdd3656c892ad4c9383f0fbdf22535a1b291b4706db5c93981e8\": rpc error: code = NotFound desc = could not find container \"15e3021cb2dfbdd3656c892ad4c9383f0fbdf22535a1b291b4706db5c93981e8\": container with ID starting with 15e3021cb2dfbdd3656c892ad4c9383f0fbdf22535a1b291b4706db5c93981e8 not found: ID does not exist"
Mar 18 08:49:33.105551 master-0 kubenswrapper[7620]: I0318 08:49:33.105157 7620 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-cluster-version/cluster-version-operator-56d8475767-2xjqg"]
Mar 18 08:49:33.113138 master-0 kubenswrapper[7620]: I0318 08:49:33.112269 7620 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-cluster-version/cluster-version-operator-56d8475767-2xjqg"]
Mar 18 08:49:34.159366 master-0 kubenswrapper[7620]: I0318 08:49:34.158895 7620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-7bb69b5c5c-djsr9"
Mar 18 08:49:34.159366 master-0 kubenswrapper[7620]: I0318 08:49:34.158942 7620 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-7bb69b5c5c-djsr9"
Mar 18 08:49:34.198000 master-0 kubenswrapper[7620]: I0318 08:49:34.196634 7620 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-7bb69b5c5c-djsr9"
Mar 18 08:49:34.220873 master-0 kubenswrapper[7620]: I0318 08:49:34.219722 7620 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-7d58488df-8btcx"]
Mar 18 08:49:34.220873 master-0 kubenswrapper[7620]: I0318 08:49:34.220529 7620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-7d58488df-8btcx"
Mar 18 08:49:34.244865 master-0 kubenswrapper[7620]: I0318 08:49:34.241171 7620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert"
Mar 18 08:49:34.244865 master-0 kubenswrapper[7620]: I0318 08:49:34.241908 7620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt"
Mar 18 08:49:34.244865 master-0 kubenswrapper[7620]: I0318 08:49:34.242133 7620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt"
Mar 18 08:49:34.279300 master-0 kubenswrapper[7620]: I0318 08:49:34.279249 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/8d89af2f-47f5-4ee5-a790-e162c2dee3ce-etc-cvo-updatepayloads\") pod \"cluster-version-operator-7d58488df-8btcx\" (UID: \"8d89af2f-47f5-4ee5-a790-e162c2dee3ce\") " pod="openshift-cluster-version/cluster-version-operator-7d58488df-8btcx"
Mar 18 08:49:34.279300 master-0 kubenswrapper[7620]: I0318 08:49:34.279336 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8d89af2f-47f5-4ee5-a790-e162c2dee3ce-serving-cert\") pod \"cluster-version-operator-7d58488df-8btcx\" (UID: \"8d89af2f-47f5-4ee5-a790-e162c2dee3ce\") " pod="openshift-cluster-version/cluster-version-operator-7d58488df-8btcx"
Mar 18 08:49:34.281274 master-0 kubenswrapper[7620]: I0318 08:49:34.279367 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8d89af2f-47f5-4ee5-a790-e162c2dee3ce-kube-api-access\") pod \"cluster-version-operator-7d58488df-8btcx\" (UID: \"8d89af2f-47f5-4ee5-a790-e162c2dee3ce\") " pod="openshift-cluster-version/cluster-version-operator-7d58488df-8btcx"
Mar 18 08:49:34.285084 master-0 kubenswrapper[7620]: I0318 08:49:34.285025 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/8d89af2f-47f5-4ee5-a790-e162c2dee3ce-etc-ssl-certs\") pod \"cluster-version-operator-7d58488df-8btcx\" (UID: \"8d89af2f-47f5-4ee5-a790-e162c2dee3ce\") " pod="openshift-cluster-version/cluster-version-operator-7d58488df-8btcx"
Mar 18 08:49:34.285183 master-0 kubenswrapper[7620]: I0318 08:49:34.285164 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/8d89af2f-47f5-4ee5-a790-e162c2dee3ce-service-ca\") pod \"cluster-version-operator-7d58488df-8btcx\" (UID: \"8d89af2f-47f5-4ee5-a790-e162c2dee3ce\") " pod="openshift-cluster-version/cluster-version-operator-7d58488df-8btcx"
Mar 18 08:49:34.299628 master-0 kubenswrapper[7620]: I0318 08:49:34.297267 7620 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3d0b7f60-c32e-48a6-b9e9-87c8f018367d" path="/var/lib/kubelet/pods/3d0b7f60-c32e-48a6-b9e9-87c8f018367d/volumes"
Mar 18 08:49:34.299628 master-0 kubenswrapper[7620]: I0318 08:49:34.298362 7620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-7bb69b5c5c-djsr9"
Mar 18 08:49:34.315153 master-0 kubenswrapper[7620]: I0318 08:49:34.313987 7620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-1-master-0"]
Mar 18 08:49:34.398265 master-0 kubenswrapper[7620]: I0318 08:49:34.395099 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/8d89af2f-47f5-4ee5-a790-e162c2dee3ce-service-ca\") pod \"cluster-version-operator-7d58488df-8btcx\" (UID: \"8d89af2f-47f5-4ee5-a790-e162c2dee3ce\") " pod="openshift-cluster-version/cluster-version-operator-7d58488df-8btcx"
Mar 18 08:49:34.398265 master-0 kubenswrapper[7620]: I0318 08:49:34.395173 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/8d89af2f-47f5-4ee5-a790-e162c2dee3ce-etc-cvo-updatepayloads\") pod \"cluster-version-operator-7d58488df-8btcx\" (UID: \"8d89af2f-47f5-4ee5-a790-e162c2dee3ce\") " pod="openshift-cluster-version/cluster-version-operator-7d58488df-8btcx"
Mar 18 08:49:34.398265 master-0 kubenswrapper[7620]: I0318 08:49:34.395205 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8d89af2f-47f5-4ee5-a790-e162c2dee3ce-serving-cert\") pod \"cluster-version-operator-7d58488df-8btcx\" (UID: \"8d89af2f-47f5-4ee5-a790-e162c2dee3ce\") " pod="openshift-cluster-version/cluster-version-operator-7d58488df-8btcx"
Mar 18 08:49:34.398265 master-0 kubenswrapper[7620]: I0318 08:49:34.395239 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8d89af2f-47f5-4ee5-a790-e162c2dee3ce-kube-api-access\") pod \"cluster-version-operator-7d58488df-8btcx\" (UID: \"8d89af2f-47f5-4ee5-a790-e162c2dee3ce\") " pod="openshift-cluster-version/cluster-version-operator-7d58488df-8btcx"
Mar 18 08:49:34.398265 master-0 kubenswrapper[7620]: I0318 08:49:34.395296 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/8d89af2f-47f5-4ee5-a790-e162c2dee3ce-etc-ssl-certs\") pod \"cluster-version-operator-7d58488df-8btcx\" (UID: \"8d89af2f-47f5-4ee5-a790-e162c2dee3ce\") " pod="openshift-cluster-version/cluster-version-operator-7d58488df-8btcx"
Mar 18 08:49:34.398265 master-0 kubenswrapper[7620]: I0318 08:49:34.397879 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/8d89af2f-47f5-4ee5-a790-e162c2dee3ce-etc-cvo-updatepayloads\") pod \"cluster-version-operator-7d58488df-8btcx\" (UID: \"8d89af2f-47f5-4ee5-a790-e162c2dee3ce\") " pod="openshift-cluster-version/cluster-version-operator-7d58488df-8btcx"
Mar 18 08:49:34.404462 master-0 kubenswrapper[7620]: I0318 08:49:34.401515 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/8d89af2f-47f5-4ee5-a790-e162c2dee3ce-service-ca\") pod \"cluster-version-operator-7d58488df-8btcx\" (UID: \"8d89af2f-47f5-4ee5-a790-e162c2dee3ce\") " pod="openshift-cluster-version/cluster-version-operator-7d58488df-8btcx"
Mar 18 08:49:34.407334 master-0 kubenswrapper[7620]: I0318 08:49:34.407285 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/8d89af2f-47f5-4ee5-a790-e162c2dee3ce-etc-ssl-certs\") pod \"cluster-version-operator-7d58488df-8btcx\" (UID: \"8d89af2f-47f5-4ee5-a790-e162c2dee3ce\") " pod="openshift-cluster-version/cluster-version-operator-7d58488df-8btcx"
Mar 18 08:49:34.422434 master-0 kubenswrapper[7620]: I0318 08:49:34.421094 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8d89af2f-47f5-4ee5-a790-e162c2dee3ce-serving-cert\") pod \"cluster-version-operator-7d58488df-8btcx\" (UID: \"8d89af2f-47f5-4ee5-a790-e162c2dee3ce\") " pod="openshift-cluster-version/cluster-version-operator-7d58488df-8btcx"
Mar 18 08:49:34.443107 master-0 kubenswrapper[7620]: I0318 08:49:34.443035 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8d89af2f-47f5-4ee5-a790-e162c2dee3ce-kube-api-access\") pod \"cluster-version-operator-7d58488df-8btcx\" (UID: \"8d89af2f-47f5-4ee5-a790-e162c2dee3ce\") " pod="openshift-cluster-version/cluster-version-operator-7d58488df-8btcx"
Mar 18 08:49:34.557313 master-0 kubenswrapper[7620]: I0318 08:49:34.551988 7620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-7d58488df-8btcx"
Mar 18 08:49:34.599223 master-0 kubenswrapper[7620]: W0318 08:49:34.599153 7620 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8d89af2f_47f5_4ee5_a790_e162c2dee3ce.slice/crio-9edfccecec2ce83d19d6f04be10c237136ad19be78d3969b003d45d0dd5cdd53 WatchSource:0}: Error finding container 9edfccecec2ce83d19d6f04be10c237136ad19be78d3969b003d45d0dd5cdd53: Status 404 returned error can't find the container with id 9edfccecec2ce83d19d6f04be10c237136ad19be78d3969b003d45d0dd5cdd53
Mar 18 08:49:34.647390 master-0 kubenswrapper[7620]: I0318 08:49:34.646869 7620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-8b7l7"
Mar 18 08:49:35.295021 master-0 kubenswrapper[7620]: I0318 08:49:35.294967 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-7d58488df-8btcx" event={"ID":"8d89af2f-47f5-4ee5-a790-e162c2dee3ce","Type":"ContainerStarted","Data":"992e3b14353f7bb3a1fdc040e5947a0af1a78c56e4e606f7c717b026c7eff5cf"}
Mar 18 08:49:35.295021 master-0 kubenswrapper[7620]: I0318 08:49:35.295018 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-7d58488df-8btcx" event={"ID":"8d89af2f-47f5-4ee5-a790-e162c2dee3ce","Type":"ContainerStarted","Data":"9edfccecec2ce83d19d6f04be10c237136ad19be78d3969b003d45d0dd5cdd53"}
Mar 18 08:49:35.298436 master-0 kubenswrapper[7620]: I0318 08:49:35.298344 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-556c8fbcff-5shs8" event={"ID":"2700f537-8f31-4380-a527-3e697a8122cc","Type":"ContainerStarted","Data":"52a9ff35a25e44adf1c93bdd6ce6f37cf66bf0985108220cfcea8712ebe6ab55"}
Mar 18 08:49:35.300927 master-0 kubenswrapper[7620]: I0318 08:49:35.300888 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-2-master-0" event={"ID":"5496fa70-0f35-4034-a4bf-1479718a684a","Type":"ContainerStarted","Data":"962f7b366c9e49db0ee412a362c9122983477cb24ae36315091977aadc600f6b"}
Mar 18 08:49:35.303727 master-0 kubenswrapper[7620]: I0318 08:49:35.303688 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-1-master-0" event={"ID":"1edfa49b-d0e7-4324-aace-b115b41ddae0","Type":"ContainerStarted","Data":"91060a1df8ac508bd63d3fe87c3026c13bbc60c7a49e9b85f1b8ff384fcdd40b"}
Mar 18 08:49:35.303825 master-0 kubenswrapper[7620]: I0318 08:49:35.303732 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-1-master-0" event={"ID":"1edfa49b-d0e7-4324-aace-b115b41ddae0","Type":"ContainerStarted","Data":"be0a7a0ac0aa5258d96034f680e2106c4672594f5322381bd2ce5d9a5f255068"}
Mar 18 08:49:35.313235 master-0 kubenswrapper[7620]: I0318 08:49:35.312115 7620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-7d58488df-8btcx" podStartSLOduration=2.312097569 podStartE2EDuration="2.312097569s" podCreationTimestamp="2026-03-18 08:49:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 08:49:35.309946876 +0000 UTC m=+39.304728638" watchObservedRunningTime="2026-03-18 08:49:35.312097569 +0000 UTC m=+39.306879321"
Mar 18 08:49:35.332658 master-0 kubenswrapper[7620]: I0318 08:49:35.332583 7620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-1-master-0" podStartSLOduration=3.332564742 podStartE2EDuration="3.332564742s" podCreationTimestamp="2026-03-18 08:49:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 08:49:35.331566273 +0000 UTC m=+39.326348015" watchObservedRunningTime="2026-03-18 08:49:35.332564742 +0000 UTC m=+39.327346494"
Mar 18 08:49:35.361161 master-0 kubenswrapper[7620]: I0318 08:49:35.361088 7620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-556c8fbcff-5shs8" podStartSLOduration=6.408096227 podStartE2EDuration="10.361066298s" podCreationTimestamp="2026-03-18 08:49:25 +0000 UTC" firstStartedPulling="2026-03-18 08:49:27.890248123 +0000 UTC m=+31.885029875" lastFinishedPulling="2026-03-18 08:49:31.843218194 +0000 UTC m=+35.837999946" observedRunningTime="2026-03-18 08:49:35.359971056 +0000 UTC m=+39.354752808" watchObservedRunningTime="2026-03-18 08:49:35.361066298 +0000 UTC m=+39.355848050"
Mar 18 08:49:35.386871 master-0 kubenswrapper[7620]: I0318 08:49:35.386781 7620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/installer-2-master-0" podStartSLOduration=5.386762622 podStartE2EDuration="5.386762622s" podCreationTimestamp="2026-03-18 08:49:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 08:49:35.386053542 +0000 UTC m=+39.380835294" watchObservedRunningTime="2026-03-18 08:49:35.386762622 +0000 UTC m=+39.381544374"
Mar 18 08:49:35.507506 master-0 kubenswrapper[7620]: I0318 08:49:35.507456 7620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-chjqr"
Mar 18 08:49:35.975484 master-0 kubenswrapper[7620]: I0318 08:49:35.975410 7620 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-556c8fbcff-5shs8"
Mar 18 08:49:35.975484 master-0 kubenswrapper[7620]: I0318 08:49:35.975482 7620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-556c8fbcff-5shs8"
Mar 18 08:49:35.994472 master-0 kubenswrapper[7620]: I0318 08:49:35.990675 7620 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-556c8fbcff-5shs8"
Mar 18 08:49:36.323706 master-0 kubenswrapper[7620]: I0318 08:49:36.322820 7620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-556c8fbcff-5shs8"
Mar 18 08:49:36.881469 master-0 kubenswrapper[7620]: I0318 08:49:36.881372 7620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-phjp8"
Mar 18 08:49:38.048367 master-0 kubenswrapper[7620]: I0318 08:49:38.048302 7620 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-scheduler/installer-2-master-0"]
Mar 18 08:49:38.048926 master-0 kubenswrapper[7620]: I0318 08:49:38.048547 7620 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-scheduler/installer-2-master-0" podUID="5496fa70-0f35-4034-a4bf-1479718a684a" containerName="installer" containerID="cri-o://962f7b366c9e49db0ee412a362c9122983477cb24ae36315091977aadc600f6b" gracePeriod=30
Mar 18 08:49:39.648949 master-0 kubenswrapper[7620]: I0318 08:49:39.648890 7620 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/installer-1-master-0"]
Mar 18 08:49:39.649663 master-0 kubenswrapper[7620]: I0318 08:49:39.649638 7620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-1-master-0"
Mar 18 08:49:39.651877 master-0 kubenswrapper[7620]: I0318 08:49:39.651799 7620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt"
Mar 18 08:49:39.663245 master-0 kubenswrapper[7620]: I0318 08:49:39.663057 7620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-1-master-0"]
Mar 18 08:49:39.682091 master-0 kubenswrapper[7620]: I0318 08:49:39.682038 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ace4267e-c38d-46dd-9de6-c23339729a8b-var-lock\") pod \"installer-1-master-0\" (UID: \"ace4267e-c38d-46dd-9de6-c23339729a8b\") " pod="openshift-kube-controller-manager/installer-1-master-0"
Mar 18 08:49:39.682091 master-0 kubenswrapper[7620]: I0318 08:49:39.682096 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ace4267e-c38d-46dd-9de6-c23339729a8b-kube-api-access\") pod \"installer-1-master-0\" (UID: \"ace4267e-c38d-46dd-9de6-c23339729a8b\") " pod="openshift-kube-controller-manager/installer-1-master-0"
Mar 18 08:49:39.682373 master-0 kubenswrapper[7620]: I0318 08:49:39.682304 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ace4267e-c38d-46dd-9de6-c23339729a8b-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"ace4267e-c38d-46dd-9de6-c23339729a8b\") " pod="openshift-kube-controller-manager/installer-1-master-0"
Mar 18 08:49:39.783213 master-0 kubenswrapper[7620]: I0318 08:49:39.783142 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ace4267e-c38d-46dd-9de6-c23339729a8b-var-lock\") pod \"installer-1-master-0\" (UID: \"ace4267e-c38d-46dd-9de6-c23339729a8b\") " pod="openshift-kube-controller-manager/installer-1-master-0"
Mar 18 08:49:39.783213 master-0 kubenswrapper[7620]: I0318 08:49:39.783201 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ace4267e-c38d-46dd-9de6-c23339729a8b-kube-api-access\") pod \"installer-1-master-0\" (UID: \"ace4267e-c38d-46dd-9de6-c23339729a8b\") " pod="openshift-kube-controller-manager/installer-1-master-0"
Mar 18 08:49:39.783504 master-0 kubenswrapper[7620]: I0318 08:49:39.783385 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ace4267e-c38d-46dd-9de6-c23339729a8b-var-lock\") pod \"installer-1-master-0\" (UID: \"ace4267e-c38d-46dd-9de6-c23339729a8b\") " pod="openshift-kube-controller-manager/installer-1-master-0"
Mar 18 08:49:39.783547 master-0 kubenswrapper[7620]: I0318 08:49:39.783406 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ace4267e-c38d-46dd-9de6-c23339729a8b-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"ace4267e-c38d-46dd-9de6-c23339729a8b\") " pod="openshift-kube-controller-manager/installer-1-master-0"
Mar 18 08:49:39.783659 master-0 kubenswrapper[7620]: I0318 08:49:39.783472 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ace4267e-c38d-46dd-9de6-c23339729a8b-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"ace4267e-c38d-46dd-9de6-c23339729a8b\") " pod="openshift-kube-controller-manager/installer-1-master-0"
Mar 18 08:49:39.804057 master-0 kubenswrapper[7620]: I0318 08:49:39.803985 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ace4267e-c38d-46dd-9de6-c23339729a8b-kube-api-access\") pod \"installer-1-master-0\" (UID: \"ace4267e-c38d-46dd-9de6-c23339729a8b\") " pod="openshift-kube-controller-manager/installer-1-master-0"
Mar 18 08:49:40.059694 master-0 kubenswrapper[7620]: I0318 08:49:40.059648 7620 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/installer-3-master-0"]
Mar 18 08:49:40.060768 master-0 kubenswrapper[7620]: I0318 08:49:40.060746 7620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-3-master-0"
Mar 18 08:49:40.065102 master-0 kubenswrapper[7620]: I0318 08:49:40.065073 7620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-1-master-0"
Mar 18 08:49:40.173846 master-0 kubenswrapper[7620]: I0318 08:49:40.170604 7620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-3-master-0"]
Mar 18 08:49:40.510217 master-0 kubenswrapper[7620]: I0318 08:49:40.500380 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c6fb9336-3f19-4220-93ee-a5a61e26340b-kube-api-access\") pod \"installer-3-master-0\" (UID: \"c6fb9336-3f19-4220-93ee-a5a61e26340b\") " pod="openshift-kube-scheduler/installer-3-master-0"
Mar 18 08:49:40.510217 master-0 kubenswrapper[7620]: I0318 08:49:40.500427 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/c6fb9336-3f19-4220-93ee-a5a61e26340b-var-lock\") pod \"installer-3-master-0\" (UID: \"c6fb9336-3f19-4220-93ee-a5a61e26340b\") " pod="openshift-kube-scheduler/installer-3-master-0"
Mar 18 08:49:40.510217 master-0 kubenswrapper[7620]: I0318 08:49:40.500456 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c6fb9336-3f19-4220-93ee-a5a61e26340b-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"c6fb9336-3f19-4220-93ee-a5a61e26340b\") " pod="openshift-kube-scheduler/installer-3-master-0"
Mar 18 08:49:40.604186 master-0 kubenswrapper[7620]: I0318 08:49:40.602321 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c6fb9336-3f19-4220-93ee-a5a61e26340b-kube-api-access\") pod \"installer-3-master-0\" (UID: \"c6fb9336-3f19-4220-93ee-a5a61e26340b\") " pod="openshift-kube-scheduler/installer-3-master-0"
Mar 18 08:49:40.604186 master-0 kubenswrapper[7620]: I0318 08:49:40.602369 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/c6fb9336-3f19-4220-93ee-a5a61e26340b-var-lock\") pod \"installer-3-master-0\" (UID: \"c6fb9336-3f19-4220-93ee-a5a61e26340b\") " pod="openshift-kube-scheduler/installer-3-master-0"
Mar 18 08:49:40.604186 master-0 kubenswrapper[7620]: I0318 08:49:40.602391 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c6fb9336-3f19-4220-93ee-a5a61e26340b-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"c6fb9336-3f19-4220-93ee-a5a61e26340b\") " pod="openshift-kube-scheduler/installer-3-master-0"
Mar 18 08:49:40.604186 master-0 kubenswrapper[7620]: I0318 08:49:40.602466 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c6fb9336-3f19-4220-93ee-a5a61e26340b-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"c6fb9336-3f19-4220-93ee-a5a61e26340b\") " pod="openshift-kube-scheduler/installer-3-master-0"
Mar 18 08:49:40.604186 master-0 kubenswrapper[7620]: I0318 08:49:40.602782 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/c6fb9336-3f19-4220-93ee-a5a61e26340b-var-lock\") pod \"installer-3-master-0\" (UID: \"c6fb9336-3f19-4220-93ee-a5a61e26340b\") " pod="openshift-kube-scheduler/installer-3-master-0"
Mar 18 08:49:40.614627 master-0 kubenswrapper[7620]: I0318 08:49:40.614577 7620 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-856b445d89-8cfpd"]
Mar 18 08:49:40.627718 master-0 kubenswrapper[7620]: I0318 08:49:40.627673 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c6fb9336-3f19-4220-93ee-a5a61e26340b-kube-api-access\") pod \"installer-3-master-0\" (UID: \"c6fb9336-3f19-4220-93ee-a5a61e26340b\") " pod="openshift-kube-scheduler/installer-3-master-0"
Mar 18 08:49:40.790513 master-0 kubenswrapper[7620]: I0318 08:49:40.790379 7620 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-d8d8dd479-7jj4k"]
Mar 18 08:49:40.791017 master-0 kubenswrapper[7620]: I0318 08:49:40.790648 7620 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-d8d8dd479-7jj4k" podUID="c228d525-5f89-4e64-bfb4-d4e837adc914" containerName="route-controller-manager" containerID="cri-o://ef10cd29586147f010847b50ad7cc6d256bd7d1e25326c4dfc45c8258c15465a" gracePeriod=30
Mar 18 08:49:40.940294 master-0 kubenswrapper[7620]: I0318 08:49:40.940178 7620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-3-master-0"
Mar 18 08:49:43.531390 master-0 kubenswrapper[7620]: I0318 08:49:43.531297 7620 generic.go:334] "Generic (PLEG): container finished" podID="c228d525-5f89-4e64-bfb4-d4e837adc914" containerID="ef10cd29586147f010847b50ad7cc6d256bd7d1e25326c4dfc45c8258c15465a" exitCode=0
Mar 18 08:49:43.531895 master-0 kubenswrapper[7620]: I0318 08:49:43.531431 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-d8d8dd479-7jj4k" event={"ID":"c228d525-5f89-4e64-bfb4-d4e837adc914","Type":"ContainerDied","Data":"ef10cd29586147f010847b50ad7cc6d256bd7d1e25326c4dfc45c8258c15465a"}
Mar 18 08:49:43.532967 master-0 kubenswrapper[7620]: I0318 08:49:43.532931 7620 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-2-master-0_5496fa70-0f35-4034-a4bf-1479718a684a/installer/0.log"
Mar 18 08:49:43.533029 master-0 kubenswrapper[7620]: I0318 08:49:43.532977 7620 generic.go:334] "Generic (PLEG): container finished" podID="5496fa70-0f35-4034-a4bf-1479718a684a" containerID="962f7b366c9e49db0ee412a362c9122983477cb24ae36315091977aadc600f6b" exitCode=1
Mar 18 08:49:43.533029 master-0 kubenswrapper[7620]: I0318 08:49:43.533008 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-2-master-0" event={"ID":"5496fa70-0f35-4034-a4bf-1479718a684a","Type":"ContainerDied","Data":"962f7b366c9e49db0ee412a362c9122983477cb24ae36315091977aadc600f6b"}
Mar 18 08:49:46.359921 master-0 kubenswrapper[7620]: I0318 08:49:46.359833 7620 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-2-master-0_5496fa70-0f35-4034-a4bf-1479718a684a/installer/0.log"
Mar 18 08:49:46.360462 master-0 kubenswrapper[7620]: I0318 08:49:46.359937 7620 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openshift-kube-scheduler/installer-2-master-0" Mar 18 08:49:46.373253 master-0 kubenswrapper[7620]: I0318 08:49:46.373183 7620 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-d8d8dd479-7jj4k" Mar 18 08:49:46.543794 master-0 kubenswrapper[7620]: I0318 08:49:46.542962 7620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5496fa70-0f35-4034-a4bf-1479718a684a-var-lock" (OuterVolumeSpecName: "var-lock") pod "5496fa70-0f35-4034-a4bf-1479718a684a" (UID: "5496fa70-0f35-4034-a4bf-1479718a684a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 08:49:46.543794 master-0 kubenswrapper[7620]: I0318 08:49:46.543099 7620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/5496fa70-0f35-4034-a4bf-1479718a684a-var-lock\") pod \"5496fa70-0f35-4034-a4bf-1479718a684a\" (UID: \"5496fa70-0f35-4034-a4bf-1479718a684a\") " Mar 18 08:49:46.543794 master-0 kubenswrapper[7620]: I0318 08:49:46.543130 7620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c228d525-5f89-4e64-bfb4-d4e837adc914-serving-cert\") pod \"c228d525-5f89-4e64-bfb4-d4e837adc914\" (UID: \"c228d525-5f89-4e64-bfb4-d4e837adc914\") " Mar 18 08:49:46.543794 master-0 kubenswrapper[7620]: I0318 08:49:46.543166 7620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5496fa70-0f35-4034-a4bf-1479718a684a-kubelet-dir\") pod \"5496fa70-0f35-4034-a4bf-1479718a684a\" (UID: \"5496fa70-0f35-4034-a4bf-1479718a684a\") " Mar 18 08:49:46.543794 master-0 kubenswrapper[7620]: I0318 08:49:46.543206 7620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5496fa70-0f35-4034-a4bf-1479718a684a-kube-api-access\") pod \"5496fa70-0f35-4034-a4bf-1479718a684a\" (UID: \"5496fa70-0f35-4034-a4bf-1479718a684a\") " Mar 18 08:49:46.543794 master-0 kubenswrapper[7620]: I0318 08:49:46.543235 7620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c228d525-5f89-4e64-bfb4-d4e837adc914-config\") pod \"c228d525-5f89-4e64-bfb4-d4e837adc914\" (UID: \"c228d525-5f89-4e64-bfb4-d4e837adc914\") " Mar 18 08:49:46.543794 master-0 kubenswrapper[7620]: I0318 08:49:46.543284 7620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c228d525-5f89-4e64-bfb4-d4e837adc914-client-ca\") pod \"c228d525-5f89-4e64-bfb4-d4e837adc914\" (UID: \"c228d525-5f89-4e64-bfb4-d4e837adc914\") " Mar 18 08:49:46.543794 master-0 kubenswrapper[7620]: I0318 08:49:46.543298 7620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4g44q\" (UniqueName: \"kubernetes.io/projected/c228d525-5f89-4e64-bfb4-d4e837adc914-kube-api-access-4g44q\") pod \"c228d525-5f89-4e64-bfb4-d4e837adc914\" (UID: \"c228d525-5f89-4e64-bfb4-d4e837adc914\") " Mar 18 08:49:46.543794 master-0 kubenswrapper[7620]: I0318 08:49:46.543513 7620 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/5496fa70-0f35-4034-a4bf-1479718a684a-var-lock\") on node \"master-0\" DevicePath \"\"" Mar 18 08:49:46.550639 master-0 kubenswrapper[7620]: I0318 08:49:46.550580 7620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c228d525-5f89-4e64-bfb4-d4e837adc914-config" (OuterVolumeSpecName: "config") pod "c228d525-5f89-4e64-bfb4-d4e837adc914" (UID: "c228d525-5f89-4e64-bfb4-d4e837adc914"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 08:49:46.550757 master-0 kubenswrapper[7620]: I0318 08:49:46.550667 7620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5496fa70-0f35-4034-a4bf-1479718a684a-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "5496fa70-0f35-4034-a4bf-1479718a684a" (UID: "5496fa70-0f35-4034-a4bf-1479718a684a"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 08:49:46.552981 master-0 kubenswrapper[7620]: I0318 08:49:46.552930 7620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c228d525-5f89-4e64-bfb4-d4e837adc914-client-ca" (OuterVolumeSpecName: "client-ca") pod "c228d525-5f89-4e64-bfb4-d4e837adc914" (UID: "c228d525-5f89-4e64-bfb4-d4e837adc914"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 08:49:46.553171 master-0 kubenswrapper[7620]: I0318 08:49:46.553139 7620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c228d525-5f89-4e64-bfb4-d4e837adc914-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "c228d525-5f89-4e64-bfb4-d4e837adc914" (UID: "c228d525-5f89-4e64-bfb4-d4e837adc914"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 08:49:46.556722 master-0 kubenswrapper[7620]: I0318 08:49:46.556659 7620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5496fa70-0f35-4034-a4bf-1479718a684a-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "5496fa70-0f35-4034-a4bf-1479718a684a" (UID: "5496fa70-0f35-4034-a4bf-1479718a684a"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 08:49:46.557840 master-0 kubenswrapper[7620]: I0318 08:49:46.557778 7620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c228d525-5f89-4e64-bfb4-d4e837adc914-kube-api-access-4g44q" (OuterVolumeSpecName: "kube-api-access-4g44q") pod "c228d525-5f89-4e64-bfb4-d4e837adc914" (UID: "c228d525-5f89-4e64-bfb4-d4e837adc914"). InnerVolumeSpecName "kube-api-access-4g44q". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 08:49:46.599675 master-0 kubenswrapper[7620]: I0318 08:49:46.599634 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-d8d8dd479-7jj4k" event={"ID":"c228d525-5f89-4e64-bfb4-d4e837adc914","Type":"ContainerDied","Data":"204acba76d27fe2916538e0022ca82c52cb428de76a6d66e0ad5f9b686ea78aa"} Mar 18 08:49:46.599763 master-0 kubenswrapper[7620]: I0318 08:49:46.599699 7620 scope.go:117] "RemoveContainer" containerID="ef10cd29586147f010847b50ad7cc6d256bd7d1e25326c4dfc45c8258c15465a" Mar 18 08:49:46.599839 master-0 kubenswrapper[7620]: I0318 08:49:46.599808 7620 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-d8d8dd479-7jj4k" Mar 18 08:49:46.612238 master-0 kubenswrapper[7620]: I0318 08:49:46.612197 7620 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-2-master-0_5496fa70-0f35-4034-a4bf-1479718a684a/installer/0.log" Mar 18 08:49:46.612347 master-0 kubenswrapper[7620]: I0318 08:49:46.612279 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-2-master-0" event={"ID":"5496fa70-0f35-4034-a4bf-1479718a684a","Type":"ContainerDied","Data":"aea287e1ab30cf2929aa7b827701f645b74f06f1e24a804b1082378303991422"} Mar 18 08:49:46.612425 master-0 kubenswrapper[7620]: I0318 08:49:46.612388 7620 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-2-master-0" Mar 18 08:49:46.647079 master-0 kubenswrapper[7620]: I0318 08:49:46.644690 7620 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5496fa70-0f35-4034-a4bf-1479718a684a-kube-api-access\") on node \"master-0\" DevicePath \"\"" Mar 18 08:49:46.647079 master-0 kubenswrapper[7620]: I0318 08:49:46.644772 7620 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c228d525-5f89-4e64-bfb4-d4e837adc914-config\") on node \"master-0\" DevicePath \"\"" Mar 18 08:49:46.647079 master-0 kubenswrapper[7620]: I0318 08:49:46.644784 7620 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c228d525-5f89-4e64-bfb4-d4e837adc914-client-ca\") on node \"master-0\" DevicePath \"\"" Mar 18 08:49:46.647079 master-0 kubenswrapper[7620]: I0318 08:49:46.644798 7620 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4g44q\" (UniqueName: \"kubernetes.io/projected/c228d525-5f89-4e64-bfb4-d4e837adc914-kube-api-access-4g44q\") on node \"master-0\" 
DevicePath \"\"" Mar 18 08:49:46.647079 master-0 kubenswrapper[7620]: I0318 08:49:46.644814 7620 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c228d525-5f89-4e64-bfb4-d4e837adc914-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 18 08:49:46.647079 master-0 kubenswrapper[7620]: I0318 08:49:46.645202 7620 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5496fa70-0f35-4034-a4bf-1479718a684a-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Mar 18 08:49:46.694613 master-0 kubenswrapper[7620]: I0318 08:49:46.694515 7620 scope.go:117] "RemoveContainer" containerID="962f7b366c9e49db0ee412a362c9122983477cb24ae36315091977aadc600f6b" Mar 18 08:49:46.723306 master-0 kubenswrapper[7620]: I0318 08:49:46.723250 7620 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-d8d8dd479-7jj4k"] Mar 18 08:49:46.737224 master-0 kubenswrapper[7620]: I0318 08:49:46.730872 7620 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-d8d8dd479-7jj4k"] Mar 18 08:49:46.765987 master-0 kubenswrapper[7620]: I0318 08:49:46.765571 7620 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-scheduler/installer-2-master-0"] Mar 18 08:49:46.775331 master-0 kubenswrapper[7620]: I0318 08:49:46.774977 7620 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-scheduler/installer-2-master-0"] Mar 18 08:49:46.777080 master-0 kubenswrapper[7620]: I0318 08:49:46.777028 7620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-3-master-0"] Mar 18 08:49:46.881989 master-0 kubenswrapper[7620]: I0318 08:49:46.879948 7620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-1-master-0"] Mar 18 08:49:46.898039 master-0 kubenswrapper[7620]: W0318 
08:49:46.897604 7620 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podace4267e_c38d_46dd_9de6_c23339729a8b.slice/crio-080fb9efe85e13956d4489a8523ef6b21588e8f16588b91bc928b76f222370cb WatchSource:0}: Error finding container 080fb9efe85e13956d4489a8523ef6b21588e8f16588b91bc928b76f222370cb: Status 404 returned error can't find the container with id 080fb9efe85e13956d4489a8523ef6b21588e8f16588b91bc928b76f222370cb Mar 18 08:49:47.463878 master-0 kubenswrapper[7620]: I0318 08:49:47.462682 7620 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-6f97756bc8-z9n9c"] Mar 18 08:49:47.463878 master-0 kubenswrapper[7620]: E0318 08:49:47.462904 7620 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c228d525-5f89-4e64-bfb4-d4e837adc914" containerName="route-controller-manager" Mar 18 08:49:47.463878 master-0 kubenswrapper[7620]: I0318 08:49:47.462918 7620 state_mem.go:107] "Deleted CPUSet assignment" podUID="c228d525-5f89-4e64-bfb4-d4e837adc914" containerName="route-controller-manager" Mar 18 08:49:47.463878 master-0 kubenswrapper[7620]: E0318 08:49:47.462927 7620 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5496fa70-0f35-4034-a4bf-1479718a684a" containerName="installer" Mar 18 08:49:47.463878 master-0 kubenswrapper[7620]: I0318 08:49:47.462932 7620 state_mem.go:107] "Deleted CPUSet assignment" podUID="5496fa70-0f35-4034-a4bf-1479718a684a" containerName="installer" Mar 18 08:49:47.463878 master-0 kubenswrapper[7620]: I0318 08:49:47.463008 7620 memory_manager.go:354] "RemoveStaleState removing state" podUID="c228d525-5f89-4e64-bfb4-d4e837adc914" containerName="route-controller-manager" Mar 18 08:49:47.463878 master-0 kubenswrapper[7620]: I0318 08:49:47.463025 7620 memory_manager.go:354] "RemoveStaleState removing state" podUID="5496fa70-0f35-4034-a4bf-1479718a684a" containerName="installer" Mar 18 08:49:47.463878 master-0 kubenswrapper[7620]: 
I0318 08:49:47.463348 7620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-6f97756bc8-z9n9c" Mar 18 08:49:47.470015 master-0 kubenswrapper[7620]: I0318 08:49:47.469656 7620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Mar 18 08:49:47.470015 master-0 kubenswrapper[7620]: I0318 08:49:47.469920 7620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Mar 18 08:49:47.470217 master-0 kubenswrapper[7620]: I0318 08:49:47.470151 7620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Mar 18 08:49:47.501868 master-0 kubenswrapper[7620]: I0318 08:49:47.498762 7620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-6f97756bc8-z9n9c"] Mar 18 08:49:47.568361 master-0 kubenswrapper[7620]: I0318 08:49:47.568322 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-czm78\" (UniqueName: \"kubernetes.io/projected/d6fe8ee6-737e-438a-8d9d-1ec712f6bacf-kube-api-access-czm78\") pod \"control-plane-machine-set-operator-6f97756bc8-z9n9c\" (UID: \"d6fe8ee6-737e-438a-8d9d-1ec712f6bacf\") " pod="openshift-machine-api/control-plane-machine-set-operator-6f97756bc8-z9n9c" Mar 18 08:49:47.568537 master-0 kubenswrapper[7620]: I0318 08:49:47.568365 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/d6fe8ee6-737e-438a-8d9d-1ec712f6bacf-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-6f97756bc8-z9n9c\" (UID: \"d6fe8ee6-737e-438a-8d9d-1ec712f6bacf\") " pod="openshift-machine-api/control-plane-machine-set-operator-6f97756bc8-z9n9c" Mar 
18 08:49:47.643567 master-0 kubenswrapper[7620]: I0318 08:49:47.643521 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-vxsth" event={"ID":"7962fb40-1170-4c00-b1bf-92966aeae807","Type":"ContainerStarted","Data":"aa27e210f8e7eef55ac3f091389cc9f04651171a37d624ed4ece6fdb61b6573e"} Mar 18 08:49:47.649791 master-0 kubenswrapper[7620]: I0318 08:49:47.649735 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-swdsh" event={"ID":"b065df33-7911-456e-b3a2-1f8c8d53e053","Type":"ContainerStarted","Data":"2aa7dcade044551f0842864bab69b07d830728982a90b1c0c52a418a4e62f0b8"} Mar 18 08:49:47.650496 master-0 kubenswrapper[7620]: I0318 08:49:47.650458 7620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-swdsh" Mar 18 08:49:47.651970 master-0 kubenswrapper[7620]: I0318 08:49:47.651933 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-sl5kr" event={"ID":"3d9fe248-ba87-47e3-911a-1b2b112b5683","Type":"ContainerStarted","Data":"6e62204e238d35706c717115d6ba73e907cdd930457e294ee7d633fce4188a54"} Mar 18 08:49:47.652576 master-0 kubenswrapper[7620]: I0318 08:49:47.652542 7620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-sl5kr" Mar 18 08:49:47.658979 master-0 kubenswrapper[7620]: I0318 08:49:47.658918 7620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-swdsh" Mar 18 08:49:47.661384 master-0 kubenswrapper[7620]: I0318 08:49:47.661345 7620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-sl5kr" Mar 18 08:49:47.677262 master-0 
kubenswrapper[7620]: I0318 08:49:47.677221 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-czm78\" (UniqueName: \"kubernetes.io/projected/d6fe8ee6-737e-438a-8d9d-1ec712f6bacf-kube-api-access-czm78\") pod \"control-plane-machine-set-operator-6f97756bc8-z9n9c\" (UID: \"d6fe8ee6-737e-438a-8d9d-1ec712f6bacf\") " pod="openshift-machine-api/control-plane-machine-set-operator-6f97756bc8-z9n9c" Mar 18 08:49:47.677491 master-0 kubenswrapper[7620]: I0318 08:49:47.677475 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/d6fe8ee6-737e-438a-8d9d-1ec712f6bacf-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-6f97756bc8-z9n9c\" (UID: \"d6fe8ee6-737e-438a-8d9d-1ec712f6bacf\") " pod="openshift-machine-api/control-plane-machine-set-operator-6f97756bc8-z9n9c" Mar 18 08:49:47.683513 master-0 kubenswrapper[7620]: I0318 08:49:47.683477 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/d6fe8ee6-737e-438a-8d9d-1ec712f6bacf-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-6f97756bc8-z9n9c\" (UID: \"d6fe8ee6-737e-438a-8d9d-1ec712f6bacf\") " pod="openshift-machine-api/control-plane-machine-set-operator-6f97756bc8-z9n9c" Mar 18 08:49:47.687282 master-0 kubenswrapper[7620]: I0318 08:49:47.687242 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-89ccd998f-bcwsv" event={"ID":"34f2ec5e-cb68-415b-a9f2-5b7f10fa9bbe","Type":"ContainerStarted","Data":"a4d8be3eaea0cde18cce25fc2e7762bfa7a4e08c4813605594a3dbbfbfb560f1"} Mar 18 08:49:47.688397 master-0 kubenswrapper[7620]: I0318 08:49:47.688375 7620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-89ccd998f-bcwsv" 
Mar 18 08:49:47.692187 master-0 kubenswrapper[7620]: I0318 08:49:47.690032 7620 patch_prober.go:28] interesting pod/marketplace-operator-89ccd998f-bcwsv container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.128.0.25:8080/healthz\": dial tcp 10.128.0.25:8080: connect: connection refused" start-of-body= Mar 18 08:49:47.692187 master-0 kubenswrapper[7620]: I0318 08:49:47.690129 7620 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-89ccd998f-bcwsv" podUID="34f2ec5e-cb68-415b-a9f2-5b7f10fa9bbe" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.128.0.25:8080/healthz\": dial tcp 10.128.0.25:8080: connect: connection refused" Mar 18 08:49:47.695991 master-0 kubenswrapper[7620]: I0318 08:49:47.695945 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-66b84d69b-7h94d" event={"ID":"94e9e4fc-bfaa-4ba5-ad6d-fe76b91932e9","Type":"ContainerStarted","Data":"4dafc20602f992ce987bc0daa5fb9e8da2064678b09669101c81d7012d92df2c"} Mar 18 08:49:47.695991 master-0 kubenswrapper[7620]: I0318 08:49:47.695991 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-66b84d69b-7h94d" event={"ID":"94e9e4fc-bfaa-4ba5-ad6d-fe76b91932e9","Type":"ContainerStarted","Data":"e63c5c1d709e6609cc982cf30b568c18af00671995969feb6d602b6e7ea5ee6b"} Mar 18 08:49:47.698428 master-0 kubenswrapper[7620]: I0318 08:49:47.698155 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-3-master-0" event={"ID":"c6fb9336-3f19-4220-93ee-a5a61e26340b","Type":"ContainerStarted","Data":"a0811de98d66913ef78505cbfb268009b3b82b021cf08be06bcac5fba5f9e228"} Mar 18 08:49:47.698428 master-0 kubenswrapper[7620]: I0318 08:49:47.698193 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-3-master-0" 
event={"ID":"c6fb9336-3f19-4220-93ee-a5a61e26340b","Type":"ContainerStarted","Data":"1b597f433a55dbc7ccb00fbe5afce037857951640d297dcf4696ad9ed735151f"} Mar 18 08:49:47.704611 master-0 kubenswrapper[7620]: I0318 08:49:47.704560 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-czm78\" (UniqueName: \"kubernetes.io/projected/d6fe8ee6-737e-438a-8d9d-1ec712f6bacf-kube-api-access-czm78\") pod \"control-plane-machine-set-operator-6f97756bc8-z9n9c\" (UID: \"d6fe8ee6-737e-438a-8d9d-1ec712f6bacf\") " pod="openshift-machine-api/control-plane-machine-set-operator-6f97756bc8-z9n9c" Mar 18 08:49:47.708391 master-0 kubenswrapper[7620]: I0318 08:49:47.705163 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-q8ff6" event={"ID":"59d50dd5-6793-4f96-a769-31e086ecc7e4","Type":"ContainerStarted","Data":"108e2d44432b1f8ee5cc74458c119e11ee59b1743d1ca34a9aa1b362bb8a6018"} Mar 18 08:49:47.708391 master-0 kubenswrapper[7620]: I0318 08:49:47.705736 7620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-q8ff6" Mar 18 08:49:47.730005 master-0 kubenswrapper[7620]: I0318 08:49:47.729956 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-6x85n" event={"ID":"d858bfd6-e69b-4c93-a6d7-95cc0fc3ca29","Type":"ContainerStarted","Data":"5e00240faccff37244d717a7b91afe56fdde9e4b458d2f1df971dcd897fc8ce5"} Mar 18 08:49:47.730005 master-0 kubenswrapper[7620]: I0318 08:49:47.730008 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-6x85n" event={"ID":"d858bfd6-e69b-4c93-a6d7-95cc0fc3ca29","Type":"ContainerStarted","Data":"b69392c70da956d0cd8607d47e5c288ede34eedec24437124e38e4472d38c2a0"} Mar 18 08:49:47.732469 master-0 kubenswrapper[7620]: I0318 08:49:47.732405 7620 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-dns-operator/dns-operator-9c5679d8f-b9pn7" event={"ID":"e025d334-20e7-491f-8027-194251398747","Type":"ContainerStarted","Data":"cab1e70eaf756322d179eef00fda4ba1c27960c2e8fba2c581e8884e2a4da381"} Mar 18 08:49:47.732469 master-0 kubenswrapper[7620]: I0318 08:49:47.732458 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-9c5679d8f-b9pn7" event={"ID":"e025d334-20e7-491f-8027-194251398747","Type":"ContainerStarted","Data":"569e9aaa10b7d768409e94cbeed3986dd02b83ea0a50d3a199b3e15f766deaab"} Mar 18 08:49:47.734341 master-0 kubenswrapper[7620]: I0318 08:49:47.734300 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-nc7hf" event={"ID":"e7b72267-fc08-41ed-a92b-9fca7372aba6","Type":"ContainerStarted","Data":"9507f9ce9cddc69dc0eb12d66a8b7b2c49a5c83c0cf0c2bcb7ae44778f1d5051"} Mar 18 08:49:47.743252 master-0 kubenswrapper[7620]: I0318 08:49:47.743190 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-5dbbb8b86f-2cf64" event={"ID":"159a26f5-3cfc-4db2-88e9-bff5d8a613fc","Type":"ContainerStarted","Data":"864587bb9e1c050127a06a72af052047508fc19256a176a3926da44e091eec45"} Mar 18 08:49:47.743252 master-0 kubenswrapper[7620]: I0318 08:49:47.743246 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-5dbbb8b86f-2cf64" event={"ID":"159a26f5-3cfc-4db2-88e9-bff5d8a613fc","Type":"ContainerStarted","Data":"5b25a8863a8b00bc7ec87b8ae1e2369b0a538d5870570f98557275e350c88a96"} Mar 18 08:49:47.757617 master-0 kubenswrapper[7620]: I0318 08:49:47.757092 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-856b445d89-8cfpd" event={"ID":"56715c8c-c4dd-4912-b955-607a312bfcb6","Type":"ContainerStarted","Data":"0de67e7c450e6bb78e07ecfe24fccfb95983018f3ba97a02e000474cebf52a05"} Mar 18 08:49:47.757617 master-0 
kubenswrapper[7620]: I0318 08:49:47.757252 7620 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-856b445d89-8cfpd" podUID="56715c8c-c4dd-4912-b955-607a312bfcb6" containerName="controller-manager" containerID="cri-o://0de67e7c450e6bb78e07ecfe24fccfb95983018f3ba97a02e000474cebf52a05" gracePeriod=30 Mar 18 08:49:47.757617 master-0 kubenswrapper[7620]: I0318 08:49:47.757583 7620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-856b445d89-8cfpd" Mar 18 08:49:47.764556 master-0 kubenswrapper[7620]: I0318 08:49:47.764509 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-1-master-0" event={"ID":"ace4267e-c38d-46dd-9de6-c23339729a8b","Type":"ContainerStarted","Data":"c7c35e8f88ea7cb3b4124ab73cb6f5940db3454c3992a104c973116512d26a7c"} Mar 18 08:49:47.764807 master-0 kubenswrapper[7620]: I0318 08:49:47.764790 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-1-master-0" event={"ID":"ace4267e-c38d-46dd-9de6-c23339729a8b","Type":"ContainerStarted","Data":"080fb9efe85e13956d4489a8523ef6b21588e8f16588b91bc928b76f222370cb"} Mar 18 08:49:47.787807 master-0 kubenswrapper[7620]: I0318 08:49:47.783342 7620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-856b445d89-8cfpd" Mar 18 08:49:47.823301 master-0 kubenswrapper[7620]: I0318 08:49:47.821793 7620 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-6f97756bc8-z9n9c" Mar 18 08:49:48.048669 master-0 kubenswrapper[7620]: I0318 08:49:48.048509 7620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/installer-3-master-0" podStartSLOduration=8.048488417 podStartE2EDuration="8.048488417s" podCreationTimestamp="2026-03-18 08:49:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 08:49:48.007826019 +0000 UTC m=+52.002607771" watchObservedRunningTime="2026-03-18 08:49:48.048488417 +0000 UTC m=+52.043270169" Mar 18 08:49:48.103263 master-0 kubenswrapper[7620]: I0318 08:49:48.102510 7620 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-ck7b5"] Mar 18 08:49:48.106868 master-0 kubenswrapper[7620]: I0318 08:49:48.105590 7620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-ck7b5" Mar 18 08:49:48.125533 master-0 kubenswrapper[7620]: I0318 08:49:48.123350 7620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Mar 18 08:49:48.125533 master-0 kubenswrapper[7620]: I0318 08:49:48.123630 7620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Mar 18 08:49:48.125533 master-0 kubenswrapper[7620]: I0318 08:49:48.123659 7620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Mar 18 08:49:48.125533 master-0 kubenswrapper[7620]: I0318 08:49:48.123762 7620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Mar 18 08:49:48.125533 master-0 kubenswrapper[7620]: I0318 08:49:48.125169 7620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-856b445d89-8cfpd" 
podStartSLOduration=9.166932494 podStartE2EDuration="23.125145809s" podCreationTimestamp="2026-03-18 08:49:25 +0000 UTC" firstStartedPulling="2026-03-18 08:49:32.354043507 +0000 UTC m=+36.348825259" lastFinishedPulling="2026-03-18 08:49:46.312256812 +0000 UTC m=+50.307038574" observedRunningTime="2026-03-18 08:49:48.123894502 +0000 UTC m=+52.118676274" watchObservedRunningTime="2026-03-18 08:49:48.125145809 +0000 UTC m=+52.119927561" Mar 18 08:49:48.135530 master-0 kubenswrapper[7620]: I0318 08:49:48.135449 7620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-ck7b5"] Mar 18 08:49:48.191708 master-0 kubenswrapper[7620]: I0318 08:49:48.191646 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/b35ab145-16a7-4ef1-86e8-0afb6ff469fd-metrics-tls\") pod \"dns-default-ck7b5\" (UID: \"b35ab145-16a7-4ef1-86e8-0afb6ff469fd\") " pod="openshift-dns/dns-default-ck7b5" Mar 18 08:49:48.191992 master-0 kubenswrapper[7620]: I0318 08:49:48.191744 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tp77s\" (UniqueName: \"kubernetes.io/projected/b35ab145-16a7-4ef1-86e8-0afb6ff469fd-kube-api-access-tp77s\") pod \"dns-default-ck7b5\" (UID: \"b35ab145-16a7-4ef1-86e8-0afb6ff469fd\") " pod="openshift-dns/dns-default-ck7b5" Mar 18 08:49:48.191992 master-0 kubenswrapper[7620]: I0318 08:49:48.191776 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b35ab145-16a7-4ef1-86e8-0afb6ff469fd-config-volume\") pod \"dns-default-ck7b5\" (UID: \"b35ab145-16a7-4ef1-86e8-0afb6ff469fd\") " pod="openshift-dns/dns-default-ck7b5" Mar 18 08:49:48.202583 master-0 kubenswrapper[7620]: I0318 08:49:48.202307 7620 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-machine-api/control-plane-machine-set-operator-6f97756bc8-z9n9c"] Mar 18 08:49:48.236637 master-0 kubenswrapper[7620]: I0318 08:49:48.232101 7620 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5496fa70-0f35-4034-a4bf-1479718a684a" path="/var/lib/kubelet/pods/5496fa70-0f35-4034-a4bf-1479718a684a/volumes" Mar 18 08:49:48.236637 master-0 kubenswrapper[7620]: I0318 08:49:48.232631 7620 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c228d525-5f89-4e64-bfb4-d4e837adc914" path="/var/lib/kubelet/pods/c228d525-5f89-4e64-bfb4-d4e837adc914/volumes" Mar 18 08:49:48.258488 master-0 kubenswrapper[7620]: I0318 08:49:48.258413 7620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/installer-1-master-0" podStartSLOduration=9.25838677 podStartE2EDuration="9.25838677s" podCreationTimestamp="2026-03-18 08:49:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 08:49:48.239822432 +0000 UTC m=+52.234604204" watchObservedRunningTime="2026-03-18 08:49:48.25838677 +0000 UTC m=+52.253168522" Mar 18 08:49:48.292895 master-0 kubenswrapper[7620]: I0318 08:49:48.292430 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/b35ab145-16a7-4ef1-86e8-0afb6ff469fd-metrics-tls\") pod \"dns-default-ck7b5\" (UID: \"b35ab145-16a7-4ef1-86e8-0afb6ff469fd\") " pod="openshift-dns/dns-default-ck7b5" Mar 18 08:49:48.292895 master-0 kubenswrapper[7620]: I0318 08:49:48.292510 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tp77s\" (UniqueName: \"kubernetes.io/projected/b35ab145-16a7-4ef1-86e8-0afb6ff469fd-kube-api-access-tp77s\") pod \"dns-default-ck7b5\" (UID: \"b35ab145-16a7-4ef1-86e8-0afb6ff469fd\") " pod="openshift-dns/dns-default-ck7b5" Mar 18 08:49:48.292895 
master-0 kubenswrapper[7620]: I0318 08:49:48.292535 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b35ab145-16a7-4ef1-86e8-0afb6ff469fd-config-volume\") pod \"dns-default-ck7b5\" (UID: \"b35ab145-16a7-4ef1-86e8-0afb6ff469fd\") " pod="openshift-dns/dns-default-ck7b5" Mar 18 08:49:48.293589 master-0 kubenswrapper[7620]: I0318 08:49:48.293212 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b35ab145-16a7-4ef1-86e8-0afb6ff469fd-config-volume\") pod \"dns-default-ck7b5\" (UID: \"b35ab145-16a7-4ef1-86e8-0afb6ff469fd\") " pod="openshift-dns/dns-default-ck7b5" Mar 18 08:49:48.293589 master-0 kubenswrapper[7620]: I0318 08:49:48.293215 7620 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-xfq8l"] Mar 18 08:49:48.293589 master-0 kubenswrapper[7620]: E0318 08:49:48.293313 7620 secret.go:189] Couldn't get secret openshift-dns/dns-default-metrics-tls: secret "dns-default-metrics-tls" not found Mar 18 08:49:48.293589 master-0 kubenswrapper[7620]: E0318 08:49:48.293362 7620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b35ab145-16a7-4ef1-86e8-0afb6ff469fd-metrics-tls podName:b35ab145-16a7-4ef1-86e8-0afb6ff469fd nodeName:}" failed. No retries permitted until 2026-03-18 08:49:48.793344293 +0000 UTC m=+52.788126045 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/b35ab145-16a7-4ef1-86e8-0afb6ff469fd-metrics-tls") pod "dns-default-ck7b5" (UID: "b35ab145-16a7-4ef1-86e8-0afb6ff469fd") : secret "dns-default-metrics-tls" not found Mar 18 08:49:48.294157 master-0 kubenswrapper[7620]: I0318 08:49:48.294133 7620 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-xfq8l" Mar 18 08:49:48.307225 master-0 kubenswrapper[7620]: I0318 08:49:48.305160 7620 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-6m4q6"] Mar 18 08:49:48.314226 master-0 kubenswrapper[7620]: I0318 08:49:48.314178 7620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6m4q6" Mar 18 08:49:48.321381 master-0 kubenswrapper[7620]: I0318 08:49:48.316977 7620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-xfq8l"] Mar 18 08:49:48.341283 master-0 kubenswrapper[7620]: I0318 08:49:48.341256 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tp77s\" (UniqueName: \"kubernetes.io/projected/b35ab145-16a7-4ef1-86e8-0afb6ff469fd-kube-api-access-tp77s\") pod \"dns-default-ck7b5\" (UID: \"b35ab145-16a7-4ef1-86e8-0afb6ff469fd\") " pod="openshift-dns/dns-default-ck7b5" Mar 18 08:49:48.347381 master-0 kubenswrapper[7620]: I0318 08:49:48.347355 7620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-6m4q6"] Mar 18 08:49:48.397997 master-0 kubenswrapper[7620]: I0318 08:49:48.393670 7620 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-856b445d89-8cfpd" Mar 18 08:49:48.398940 master-0 kubenswrapper[7620]: I0318 08:49:48.398807 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gb6ns\" (UniqueName: \"kubernetes.io/projected/833eeb49-a463-432a-a684-a27c66ecae7d-kube-api-access-gb6ns\") pod \"redhat-marketplace-6m4q6\" (UID: \"833eeb49-a463-432a-a684-a27c66ecae7d\") " pod="openshift-marketplace/redhat-marketplace-6m4q6" Mar 18 08:49:48.411564 master-0 kubenswrapper[7620]: I0318 08:49:48.406394 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/95843eb5-33bc-48e8-afc4-a0bd8c524e24-utilities\") pod \"community-operators-xfq8l\" (UID: \"95843eb5-33bc-48e8-afc4-a0bd8c524e24\") " pod="openshift-marketplace/community-operators-xfq8l" Mar 18 08:49:48.411564 master-0 kubenswrapper[7620]: I0318 08:49:48.406455 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/833eeb49-a463-432a-a684-a27c66ecae7d-utilities\") pod \"redhat-marketplace-6m4q6\" (UID: \"833eeb49-a463-432a-a684-a27c66ecae7d\") " pod="openshift-marketplace/redhat-marketplace-6m4q6" Mar 18 08:49:48.411564 master-0 kubenswrapper[7620]: I0318 08:49:48.406490 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/95843eb5-33bc-48e8-afc4-a0bd8c524e24-catalog-content\") pod \"community-operators-xfq8l\" (UID: \"95843eb5-33bc-48e8-afc4-a0bd8c524e24\") " pod="openshift-marketplace/community-operators-xfq8l" Mar 18 08:49:48.411564 master-0 kubenswrapper[7620]: I0318 08:49:48.406552 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j55mv\" 
(UniqueName: \"kubernetes.io/projected/95843eb5-33bc-48e8-afc4-a0bd8c524e24-kube-api-access-j55mv\") pod \"community-operators-xfq8l\" (UID: \"95843eb5-33bc-48e8-afc4-a0bd8c524e24\") " pod="openshift-marketplace/community-operators-xfq8l" Mar 18 08:49:48.411564 master-0 kubenswrapper[7620]: I0318 08:49:48.406616 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/833eeb49-a463-432a-a684-a27c66ecae7d-catalog-content\") pod \"redhat-marketplace-6m4q6\" (UID: \"833eeb49-a463-432a-a684-a27c66ecae7d\") " pod="openshift-marketplace/redhat-marketplace-6m4q6" Mar 18 08:49:48.528781 master-0 kubenswrapper[7620]: I0318 08:49:48.507458 7620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/56715c8c-c4dd-4912-b955-607a312bfcb6-proxy-ca-bundles\") pod \"56715c8c-c4dd-4912-b955-607a312bfcb6\" (UID: \"56715c8c-c4dd-4912-b955-607a312bfcb6\") " Mar 18 08:49:48.528781 master-0 kubenswrapper[7620]: I0318 08:49:48.507545 7620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/56715c8c-c4dd-4912-b955-607a312bfcb6-client-ca\") pod \"56715c8c-c4dd-4912-b955-607a312bfcb6\" (UID: \"56715c8c-c4dd-4912-b955-607a312bfcb6\") " Mar 18 08:49:48.528781 master-0 kubenswrapper[7620]: I0318 08:49:48.507612 7620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/56715c8c-c4dd-4912-b955-607a312bfcb6-serving-cert\") pod \"56715c8c-c4dd-4912-b955-607a312bfcb6\" (UID: \"56715c8c-c4dd-4912-b955-607a312bfcb6\") " Mar 18 08:49:48.528781 master-0 kubenswrapper[7620]: I0318 08:49:48.507653 7620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/56715c8c-c4dd-4912-b955-607a312bfcb6-config\") pod \"56715c8c-c4dd-4912-b955-607a312bfcb6\" (UID: \"56715c8c-c4dd-4912-b955-607a312bfcb6\") " Mar 18 08:49:48.528781 master-0 kubenswrapper[7620]: I0318 08:49:48.507699 7620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xbkgl\" (UniqueName: \"kubernetes.io/projected/56715c8c-c4dd-4912-b955-607a312bfcb6-kube-api-access-xbkgl\") pod \"56715c8c-c4dd-4912-b955-607a312bfcb6\" (UID: \"56715c8c-c4dd-4912-b955-607a312bfcb6\") " Mar 18 08:49:48.528781 master-0 kubenswrapper[7620]: I0318 08:49:48.507934 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/833eeb49-a463-432a-a684-a27c66ecae7d-catalog-content\") pod \"redhat-marketplace-6m4q6\" (UID: \"833eeb49-a463-432a-a684-a27c66ecae7d\") " pod="openshift-marketplace/redhat-marketplace-6m4q6" Mar 18 08:49:48.528781 master-0 kubenswrapper[7620]: I0318 08:49:48.507972 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gb6ns\" (UniqueName: \"kubernetes.io/projected/833eeb49-a463-432a-a684-a27c66ecae7d-kube-api-access-gb6ns\") pod \"redhat-marketplace-6m4q6\" (UID: \"833eeb49-a463-432a-a684-a27c66ecae7d\") " pod="openshift-marketplace/redhat-marketplace-6m4q6" Mar 18 08:49:48.528781 master-0 kubenswrapper[7620]: I0318 08:49:48.508015 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/95843eb5-33bc-48e8-afc4-a0bd8c524e24-utilities\") pod \"community-operators-xfq8l\" (UID: \"95843eb5-33bc-48e8-afc4-a0bd8c524e24\") " pod="openshift-marketplace/community-operators-xfq8l" Mar 18 08:49:48.528781 master-0 kubenswrapper[7620]: I0318 08:49:48.508068 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/833eeb49-a463-432a-a684-a27c66ecae7d-utilities\") pod \"redhat-marketplace-6m4q6\" (UID: \"833eeb49-a463-432a-a684-a27c66ecae7d\") " pod="openshift-marketplace/redhat-marketplace-6m4q6" Mar 18 08:49:48.528781 master-0 kubenswrapper[7620]: I0318 08:49:48.508090 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/95843eb5-33bc-48e8-afc4-a0bd8c524e24-catalog-content\") pod \"community-operators-xfq8l\" (UID: \"95843eb5-33bc-48e8-afc4-a0bd8c524e24\") " pod="openshift-marketplace/community-operators-xfq8l" Mar 18 08:49:48.528781 master-0 kubenswrapper[7620]: I0318 08:49:48.508128 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j55mv\" (UniqueName: \"kubernetes.io/projected/95843eb5-33bc-48e8-afc4-a0bd8c524e24-kube-api-access-j55mv\") pod \"community-operators-xfq8l\" (UID: \"95843eb5-33bc-48e8-afc4-a0bd8c524e24\") " pod="openshift-marketplace/community-operators-xfq8l" Mar 18 08:49:48.528781 master-0 kubenswrapper[7620]: I0318 08:49:48.509259 7620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/56715c8c-c4dd-4912-b955-607a312bfcb6-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "56715c8c-c4dd-4912-b955-607a312bfcb6" (UID: "56715c8c-c4dd-4912-b955-607a312bfcb6"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 08:49:48.528781 master-0 kubenswrapper[7620]: I0318 08:49:48.509601 7620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/56715c8c-c4dd-4912-b955-607a312bfcb6-client-ca" (OuterVolumeSpecName: "client-ca") pod "56715c8c-c4dd-4912-b955-607a312bfcb6" (UID: "56715c8c-c4dd-4912-b955-607a312bfcb6"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 08:49:48.529826 master-0 kubenswrapper[7620]: I0318 08:49:48.529521 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/95843eb5-33bc-48e8-afc4-a0bd8c524e24-catalog-content\") pod \"community-operators-xfq8l\" (UID: \"95843eb5-33bc-48e8-afc4-a0bd8c524e24\") " pod="openshift-marketplace/community-operators-xfq8l" Mar 18 08:49:48.529826 master-0 kubenswrapper[7620]: I0318 08:49:48.529581 7620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/56715c8c-c4dd-4912-b955-607a312bfcb6-config" (OuterVolumeSpecName: "config") pod "56715c8c-c4dd-4912-b955-607a312bfcb6" (UID: "56715c8c-c4dd-4912-b955-607a312bfcb6"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 08:49:48.530334 master-0 kubenswrapper[7620]: I0318 08:49:48.530079 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/95843eb5-33bc-48e8-afc4-a0bd8c524e24-utilities\") pod \"community-operators-xfq8l\" (UID: \"95843eb5-33bc-48e8-afc4-a0bd8c524e24\") " pod="openshift-marketplace/community-operators-xfq8l" Mar 18 08:49:48.530334 master-0 kubenswrapper[7620]: I0318 08:49:48.530160 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/833eeb49-a463-432a-a684-a27c66ecae7d-utilities\") pod \"redhat-marketplace-6m4q6\" (UID: \"833eeb49-a463-432a-a684-a27c66ecae7d\") " pod="openshift-marketplace/redhat-marketplace-6m4q6" Mar 18 08:49:48.532930 master-0 kubenswrapper[7620]: I0318 08:49:48.530989 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/833eeb49-a463-432a-a684-a27c66ecae7d-catalog-content\") pod \"redhat-marketplace-6m4q6\" (UID: 
\"833eeb49-a463-432a-a684-a27c66ecae7d\") " pod="openshift-marketplace/redhat-marketplace-6m4q6" Mar 18 08:49:48.545768 master-0 kubenswrapper[7620]: I0318 08:49:48.533963 7620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/56715c8c-c4dd-4912-b955-607a312bfcb6-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "56715c8c-c4dd-4912-b955-607a312bfcb6" (UID: "56715c8c-c4dd-4912-b955-607a312bfcb6"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 08:49:48.545768 master-0 kubenswrapper[7620]: I0318 08:49:48.541970 7620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/56715c8c-c4dd-4912-b955-607a312bfcb6-kube-api-access-xbkgl" (OuterVolumeSpecName: "kube-api-access-xbkgl") pod "56715c8c-c4dd-4912-b955-607a312bfcb6" (UID: "56715c8c-c4dd-4912-b955-607a312bfcb6"). InnerVolumeSpecName "kube-api-access-xbkgl". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 08:49:48.568710 master-0 kubenswrapper[7620]: I0318 08:49:48.566831 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j55mv\" (UniqueName: \"kubernetes.io/projected/95843eb5-33bc-48e8-afc4-a0bd8c524e24-kube-api-access-j55mv\") pod \"community-operators-xfq8l\" (UID: \"95843eb5-33bc-48e8-afc4-a0bd8c524e24\") " pod="openshift-marketplace/community-operators-xfq8l" Mar 18 08:49:48.568710 master-0 kubenswrapper[7620]: I0318 08:49:48.568575 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gb6ns\" (UniqueName: \"kubernetes.io/projected/833eeb49-a463-432a-a684-a27c66ecae7d-kube-api-access-gb6ns\") pod \"redhat-marketplace-6m4q6\" (UID: \"833eeb49-a463-432a-a684-a27c66ecae7d\") " pod="openshift-marketplace/redhat-marketplace-6m4q6" Mar 18 08:49:48.610048 master-0 kubenswrapper[7620]: I0318 08:49:48.609988 7620 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" 
(UniqueName: \"kubernetes.io/configmap/56715c8c-c4dd-4912-b955-607a312bfcb6-proxy-ca-bundles\") on node \"master-0\" DevicePath \"\"" Mar 18 08:49:48.610048 master-0 kubenswrapper[7620]: I0318 08:49:48.610036 7620 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/56715c8c-c4dd-4912-b955-607a312bfcb6-client-ca\") on node \"master-0\" DevicePath \"\"" Mar 18 08:49:48.610048 master-0 kubenswrapper[7620]: I0318 08:49:48.610048 7620 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/56715c8c-c4dd-4912-b955-607a312bfcb6-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 18 08:49:48.610048 master-0 kubenswrapper[7620]: I0318 08:49:48.610061 7620 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/56715c8c-c4dd-4912-b955-607a312bfcb6-config\") on node \"master-0\" DevicePath \"\"" Mar 18 08:49:48.610337 master-0 kubenswrapper[7620]: I0318 08:49:48.610074 7620 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xbkgl\" (UniqueName: \"kubernetes.io/projected/56715c8c-c4dd-4912-b955-607a312bfcb6-kube-api-access-xbkgl\") on node \"master-0\" DevicePath \"\"" Mar 18 08:49:48.635747 master-0 kubenswrapper[7620]: I0318 08:49:48.635683 7620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-xfq8l" Mar 18 08:49:48.678522 master-0 kubenswrapper[7620]: I0318 08:49:48.678461 7620 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6m4q6" Mar 18 08:49:48.775112 master-0 kubenswrapper[7620]: I0318 08:49:48.774235 7620 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-zwl77"] Mar 18 08:49:48.775112 master-0 kubenswrapper[7620]: E0318 08:49:48.774436 7620 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56715c8c-c4dd-4912-b955-607a312bfcb6" containerName="controller-manager" Mar 18 08:49:48.775112 master-0 kubenswrapper[7620]: I0318 08:49:48.774450 7620 state_mem.go:107] "Deleted CPUSet assignment" podUID="56715c8c-c4dd-4912-b955-607a312bfcb6" containerName="controller-manager" Mar 18 08:49:48.775112 master-0 kubenswrapper[7620]: I0318 08:49:48.774525 7620 memory_manager.go:354] "RemoveStaleState removing state" podUID="56715c8c-c4dd-4912-b955-607a312bfcb6" containerName="controller-manager" Mar 18 08:49:48.775112 master-0 kubenswrapper[7620]: I0318 08:49:48.774835 7620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-zwl77" Mar 18 08:49:48.779938 master-0 kubenswrapper[7620]: I0318 08:49:48.777303 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-6f97756bc8-z9n9c" event={"ID":"d6fe8ee6-737e-438a-8d9d-1ec712f6bacf","Type":"ContainerStarted","Data":"c1eb0a6c1ab17257358eeeb97010b410797c8ba9fd08a44d4ff2e76c51c917e0"} Mar 18 08:49:48.794263 master-0 kubenswrapper[7620]: I0318 08:49:48.794164 7620 generic.go:334] "Generic (PLEG): container finished" podID="56715c8c-c4dd-4912-b955-607a312bfcb6" containerID="0de67e7c450e6bb78e07ecfe24fccfb95983018f3ba97a02e000474cebf52a05" exitCode=0 Mar 18 08:49:48.794381 master-0 kubenswrapper[7620]: I0318 08:49:48.794280 7620 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-856b445d89-8cfpd" Mar 18 08:49:48.794381 master-0 kubenswrapper[7620]: I0318 08:49:48.794355 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-856b445d89-8cfpd" event={"ID":"56715c8c-c4dd-4912-b955-607a312bfcb6","Type":"ContainerDied","Data":"0de67e7c450e6bb78e07ecfe24fccfb95983018f3ba97a02e000474cebf52a05"} Mar 18 08:49:48.794558 master-0 kubenswrapper[7620]: I0318 08:49:48.794410 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-856b445d89-8cfpd" event={"ID":"56715c8c-c4dd-4912-b955-607a312bfcb6","Type":"ContainerDied","Data":"927ee5cc9486163ad344533e90c42ad0962670bace804e14c58b10d4a343dc45"} Mar 18 08:49:48.794558 master-0 kubenswrapper[7620]: I0318 08:49:48.794446 7620 scope.go:117] "RemoveContainer" containerID="0de67e7c450e6bb78e07ecfe24fccfb95983018f3ba97a02e000474cebf52a05" Mar 18 08:49:48.826785 master-0 kubenswrapper[7620]: I0318 08:49:48.825382 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/b35ab145-16a7-4ef1-86e8-0afb6ff469fd-metrics-tls\") pod \"dns-default-ck7b5\" (UID: \"b35ab145-16a7-4ef1-86e8-0afb6ff469fd\") " pod="openshift-dns/dns-default-ck7b5" Mar 18 08:49:48.827023 master-0 kubenswrapper[7620]: I0318 08:49:48.826981 7620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-89ccd998f-bcwsv" Mar 18 08:49:48.829870 master-0 kubenswrapper[7620]: I0318 08:49:48.829794 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/b35ab145-16a7-4ef1-86e8-0afb6ff469fd-metrics-tls\") pod \"dns-default-ck7b5\" (UID: \"b35ab145-16a7-4ef1-86e8-0afb6ff469fd\") " pod="openshift-dns/dns-default-ck7b5" Mar 18 08:49:48.848765 master-0 kubenswrapper[7620]: I0318 
08:49:48.848716 7620 scope.go:117] "RemoveContainer" containerID="0de67e7c450e6bb78e07ecfe24fccfb95983018f3ba97a02e000474cebf52a05" Mar 18 08:49:48.849414 master-0 kubenswrapper[7620]: E0318 08:49:48.849359 7620 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0de67e7c450e6bb78e07ecfe24fccfb95983018f3ba97a02e000474cebf52a05\": container with ID starting with 0de67e7c450e6bb78e07ecfe24fccfb95983018f3ba97a02e000474cebf52a05 not found: ID does not exist" containerID="0de67e7c450e6bb78e07ecfe24fccfb95983018f3ba97a02e000474cebf52a05" Mar 18 08:49:48.849480 master-0 kubenswrapper[7620]: I0318 08:49:48.849414 7620 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0de67e7c450e6bb78e07ecfe24fccfb95983018f3ba97a02e000474cebf52a05"} err="failed to get container status \"0de67e7c450e6bb78e07ecfe24fccfb95983018f3ba97a02e000474cebf52a05\": rpc error: code = NotFound desc = could not find container \"0de67e7c450e6bb78e07ecfe24fccfb95983018f3ba97a02e000474cebf52a05\": container with ID starting with 0de67e7c450e6bb78e07ecfe24fccfb95983018f3ba97a02e000474cebf52a05 not found: ID does not exist" Mar 18 08:49:48.890060 master-0 kubenswrapper[7620]: I0318 08:49:48.889802 7620 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-856b445d89-8cfpd"] Mar 18 08:49:48.891584 master-0 kubenswrapper[7620]: I0318 08:49:48.891463 7620 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-856b445d89-8cfpd"] Mar 18 08:49:48.927669 master-0 kubenswrapper[7620]: I0318 08:49:48.927433 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/68465463-5f2a-4e74-9c34-2706a185f7ea-hosts-file\") pod \"node-resolver-zwl77\" (UID: \"68465463-5f2a-4e74-9c34-2706a185f7ea\") " 
pod="openshift-dns/node-resolver-zwl77" Mar 18 08:49:48.927669 master-0 kubenswrapper[7620]: I0318 08:49:48.927535 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gqlhh\" (UniqueName: \"kubernetes.io/projected/68465463-5f2a-4e74-9c34-2706a185f7ea-kube-api-access-gqlhh\") pod \"node-resolver-zwl77\" (UID: \"68465463-5f2a-4e74-9c34-2706a185f7ea\") " pod="openshift-dns/node-resolver-zwl77" Mar 18 08:49:49.029579 master-0 kubenswrapper[7620]: I0318 08:49:49.029515 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gqlhh\" (UniqueName: \"kubernetes.io/projected/68465463-5f2a-4e74-9c34-2706a185f7ea-kube-api-access-gqlhh\") pod \"node-resolver-zwl77\" (UID: \"68465463-5f2a-4e74-9c34-2706a185f7ea\") " pod="openshift-dns/node-resolver-zwl77" Mar 18 08:49:49.029843 master-0 kubenswrapper[7620]: I0318 08:49:49.029659 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/68465463-5f2a-4e74-9c34-2706a185f7ea-hosts-file\") pod \"node-resolver-zwl77\" (UID: \"68465463-5f2a-4e74-9c34-2706a185f7ea\") " pod="openshift-dns/node-resolver-zwl77" Mar 18 08:49:49.029843 master-0 kubenswrapper[7620]: I0318 08:49:49.029785 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/68465463-5f2a-4e74-9c34-2706a185f7ea-hosts-file\") pod \"node-resolver-zwl77\" (UID: \"68465463-5f2a-4e74-9c34-2706a185f7ea\") " pod="openshift-dns/node-resolver-zwl77" Mar 18 08:49:49.048978 master-0 kubenswrapper[7620]: I0318 08:49:49.048552 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gqlhh\" (UniqueName: \"kubernetes.io/projected/68465463-5f2a-4e74-9c34-2706a185f7ea-kube-api-access-gqlhh\") pod \"node-resolver-zwl77\" (UID: \"68465463-5f2a-4e74-9c34-2706a185f7ea\") " 
pod="openshift-dns/node-resolver-zwl77" Mar 18 08:49:49.060164 master-0 kubenswrapper[7620]: I0318 08:49:49.060100 7620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-ck7b5" Mar 18 08:49:49.080724 master-0 kubenswrapper[7620]: I0318 08:49:49.078601 7620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-xfq8l"] Mar 18 08:49:49.138788 master-0 kubenswrapper[7620]: I0318 08:49:49.138682 7620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-zwl77" Mar 18 08:49:49.145751 master-0 kubenswrapper[7620]: I0318 08:49:49.145689 7620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-6m4q6"] Mar 18 08:49:49.216045 master-0 kubenswrapper[7620]: I0318 08:49:49.184324 7620 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-75749f878-qxnvp"] Mar 18 08:49:49.216045 master-0 kubenswrapper[7620]: I0318 08:49:49.186265 7620 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-75749f878-qxnvp" Mar 18 08:49:49.216045 master-0 kubenswrapper[7620]: I0318 08:49:49.189206 7620 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-6448dc88d8-cnd9q"] Mar 18 08:49:49.216045 master-0 kubenswrapper[7620]: I0318 08:49:49.189948 7620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Mar 18 08:49:49.216045 master-0 kubenswrapper[7620]: I0318 08:49:49.190084 7620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Mar 18 08:49:49.216045 master-0 kubenswrapper[7620]: I0318 08:49:49.190099 7620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Mar 18 08:49:49.216045 master-0 kubenswrapper[7620]: I0318 08:49:49.190169 7620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Mar 18 08:49:49.216045 master-0 kubenswrapper[7620]: I0318 08:49:49.190177 7620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Mar 18 08:49:49.216045 master-0 kubenswrapper[7620]: I0318 08:49:49.190369 7620 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6448dc88d8-cnd9q" Mar 18 08:49:49.216045 master-0 kubenswrapper[7620]: I0318 08:49:49.197447 7620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Mar 18 08:49:49.216045 master-0 kubenswrapper[7620]: I0318 08:49:49.197547 7620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Mar 18 08:49:49.216045 master-0 kubenswrapper[7620]: I0318 08:49:49.197713 7620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Mar 18 08:49:49.216045 master-0 kubenswrapper[7620]: I0318 08:49:49.198282 7620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Mar 18 08:49:49.216045 master-0 kubenswrapper[7620]: I0318 08:49:49.198323 7620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Mar 18 08:49:49.216045 master-0 kubenswrapper[7620]: I0318 08:49:49.202757 7620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Mar 18 08:49:49.216045 master-0 kubenswrapper[7620]: W0318 08:49:49.206398 7620 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod833eeb49_a463_432a_a684_a27c66ecae7d.slice/crio-2faccda4af6f07d470c0a6a5d3b97da84b97a7597f4e71f78d12a05ba633ee32 WatchSource:0}: Error finding container 2faccda4af6f07d470c0a6a5d3b97da84b97a7597f4e71f78d12a05ba633ee32: Status 404 returned error can't find the container with id 2faccda4af6f07d470c0a6a5d3b97da84b97a7597f4e71f78d12a05ba633ee32 Mar 18 08:49:49.216045 master-0 kubenswrapper[7620]: W0318 08:49:49.206896 7620 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod68465463_5f2a_4e74_9c34_2706a185f7ea.slice/crio-156dd659cded87fed4f4d9c1948aa273d3ce5df8a947527d51220517f67ececc WatchSource:0}: Error finding container 156dd659cded87fed4f4d9c1948aa273d3ce5df8a947527d51220517f67ececc: Status 404 returned error can't find the container with id 156dd659cded87fed4f4d9c1948aa273d3ce5df8a947527d51220517f67ececc Mar 18 08:49:49.216045 master-0 kubenswrapper[7620]: I0318 08:49:49.214794 7620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-75749f878-qxnvp"] Mar 18 08:49:49.228127 master-0 kubenswrapper[7620]: I0318 08:49:49.227455 7620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6448dc88d8-cnd9q"] Mar 18 08:49:49.336963 master-0 kubenswrapper[7620]: I0318 08:49:49.336438 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2m5wf\" (UniqueName: \"kubernetes.io/projected/4cc14de2-59e8-49e1-9eeb-f87c5e9d8a75-kube-api-access-2m5wf\") pod \"controller-manager-6448dc88d8-cnd9q\" (UID: \"4cc14de2-59e8-49e1-9eeb-f87c5e9d8a75\") " pod="openshift-controller-manager/controller-manager-6448dc88d8-cnd9q" Mar 18 08:49:49.336963 master-0 kubenswrapper[7620]: I0318 08:49:49.336507 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qwsfl\" (UniqueName: \"kubernetes.io/projected/04e23989-853e-4b49-ba0f-1961d64ae3a3-kube-api-access-qwsfl\") pod \"route-controller-manager-75749f878-qxnvp\" (UID: \"04e23989-853e-4b49-ba0f-1961d64ae3a3\") " pod="openshift-route-controller-manager/route-controller-manager-75749f878-qxnvp" Mar 18 08:49:49.336963 master-0 kubenswrapper[7620]: I0318 08:49:49.336570 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/04e23989-853e-4b49-ba0f-1961d64ae3a3-serving-cert\") pod \"route-controller-manager-75749f878-qxnvp\" (UID: \"04e23989-853e-4b49-ba0f-1961d64ae3a3\") " pod="openshift-route-controller-manager/route-controller-manager-75749f878-qxnvp" Mar 18 08:49:49.336963 master-0 kubenswrapper[7620]: I0318 08:49:49.336597 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4cc14de2-59e8-49e1-9eeb-f87c5e9d8a75-config\") pod \"controller-manager-6448dc88d8-cnd9q\" (UID: \"4cc14de2-59e8-49e1-9eeb-f87c5e9d8a75\") " pod="openshift-controller-manager/controller-manager-6448dc88d8-cnd9q" Mar 18 08:49:49.336963 master-0 kubenswrapper[7620]: I0318 08:49:49.336628 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4cc14de2-59e8-49e1-9eeb-f87c5e9d8a75-proxy-ca-bundles\") pod \"controller-manager-6448dc88d8-cnd9q\" (UID: \"4cc14de2-59e8-49e1-9eeb-f87c5e9d8a75\") " pod="openshift-controller-manager/controller-manager-6448dc88d8-cnd9q" Mar 18 08:49:49.340211 master-0 kubenswrapper[7620]: I0318 08:49:49.337569 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4cc14de2-59e8-49e1-9eeb-f87c5e9d8a75-serving-cert\") pod \"controller-manager-6448dc88d8-cnd9q\" (UID: \"4cc14de2-59e8-49e1-9eeb-f87c5e9d8a75\") " pod="openshift-controller-manager/controller-manager-6448dc88d8-cnd9q" Mar 18 08:49:49.340211 master-0 kubenswrapper[7620]: I0318 08:49:49.337606 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/04e23989-853e-4b49-ba0f-1961d64ae3a3-client-ca\") pod \"route-controller-manager-75749f878-qxnvp\" (UID: \"04e23989-853e-4b49-ba0f-1961d64ae3a3\") " 
pod="openshift-route-controller-manager/route-controller-manager-75749f878-qxnvp"
Mar 18 08:49:49.340211 master-0 kubenswrapper[7620]: I0318 08:49:49.337706 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/04e23989-853e-4b49-ba0f-1961d64ae3a3-config\") pod \"route-controller-manager-75749f878-qxnvp\" (UID: \"04e23989-853e-4b49-ba0f-1961d64ae3a3\") " pod="openshift-route-controller-manager/route-controller-manager-75749f878-qxnvp"
Mar 18 08:49:49.340211 master-0 kubenswrapper[7620]: I0318 08:49:49.337747 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4cc14de2-59e8-49e1-9eeb-f87c5e9d8a75-client-ca\") pod \"controller-manager-6448dc88d8-cnd9q\" (UID: \"4cc14de2-59e8-49e1-9eeb-f87c5e9d8a75\") " pod="openshift-controller-manager/controller-manager-6448dc88d8-cnd9q"
Mar 18 08:49:49.386112 master-0 kubenswrapper[7620]: I0318 08:49:49.386022 7620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-ck7b5"]
Mar 18 08:49:49.410783 master-0 kubenswrapper[7620]: W0318 08:49:49.410738 7620 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb35ab145_16a7_4ef1_86e8_0afb6ff469fd.slice/crio-0cdcdcd2ccccdebd6503233827667ed7ce6f4654db0dc10c48bcf238245e2d46 WatchSource:0}: Error finding container 0cdcdcd2ccccdebd6503233827667ed7ce6f4654db0dc10c48bcf238245e2d46: Status 404 returned error can't find the container with id 0cdcdcd2ccccdebd6503233827667ed7ce6f4654db0dc10c48bcf238245e2d46
Mar 18 08:49:49.443205 master-0 kubenswrapper[7620]: I0318 08:49:49.439561 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/04e23989-853e-4b49-ba0f-1961d64ae3a3-config\") pod \"route-controller-manager-75749f878-qxnvp\"
(UID: \"04e23989-853e-4b49-ba0f-1961d64ae3a3\") " pod="openshift-route-controller-manager/route-controller-manager-75749f878-qxnvp" Mar 18 08:49:49.443205 master-0 kubenswrapper[7620]: I0318 08:49:49.439614 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4cc14de2-59e8-49e1-9eeb-f87c5e9d8a75-client-ca\") pod \"controller-manager-6448dc88d8-cnd9q\" (UID: \"4cc14de2-59e8-49e1-9eeb-f87c5e9d8a75\") " pod="openshift-controller-manager/controller-manager-6448dc88d8-cnd9q" Mar 18 08:49:49.443205 master-0 kubenswrapper[7620]: I0318 08:49:49.439647 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2m5wf\" (UniqueName: \"kubernetes.io/projected/4cc14de2-59e8-49e1-9eeb-f87c5e9d8a75-kube-api-access-2m5wf\") pod \"controller-manager-6448dc88d8-cnd9q\" (UID: \"4cc14de2-59e8-49e1-9eeb-f87c5e9d8a75\") " pod="openshift-controller-manager/controller-manager-6448dc88d8-cnd9q" Mar 18 08:49:49.443205 master-0 kubenswrapper[7620]: I0318 08:49:49.439667 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qwsfl\" (UniqueName: \"kubernetes.io/projected/04e23989-853e-4b49-ba0f-1961d64ae3a3-kube-api-access-qwsfl\") pod \"route-controller-manager-75749f878-qxnvp\" (UID: \"04e23989-853e-4b49-ba0f-1961d64ae3a3\") " pod="openshift-route-controller-manager/route-controller-manager-75749f878-qxnvp" Mar 18 08:49:49.443205 master-0 kubenswrapper[7620]: I0318 08:49:49.439689 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/04e23989-853e-4b49-ba0f-1961d64ae3a3-serving-cert\") pod \"route-controller-manager-75749f878-qxnvp\" (UID: \"04e23989-853e-4b49-ba0f-1961d64ae3a3\") " pod="openshift-route-controller-manager/route-controller-manager-75749f878-qxnvp" Mar 18 08:49:49.443205 master-0 kubenswrapper[7620]: I0318 08:49:49.439708 7620 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4cc14de2-59e8-49e1-9eeb-f87c5e9d8a75-config\") pod \"controller-manager-6448dc88d8-cnd9q\" (UID: \"4cc14de2-59e8-49e1-9eeb-f87c5e9d8a75\") " pod="openshift-controller-manager/controller-manager-6448dc88d8-cnd9q"
Mar 18 08:49:49.443205 master-0 kubenswrapper[7620]: I0318 08:49:49.439725 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4cc14de2-59e8-49e1-9eeb-f87c5e9d8a75-proxy-ca-bundles\") pod \"controller-manager-6448dc88d8-cnd9q\" (UID: \"4cc14de2-59e8-49e1-9eeb-f87c5e9d8a75\") " pod="openshift-controller-manager/controller-manager-6448dc88d8-cnd9q"
Mar 18 08:49:49.443205 master-0 kubenswrapper[7620]: I0318 08:49:49.439755 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4cc14de2-59e8-49e1-9eeb-f87c5e9d8a75-serving-cert\") pod \"controller-manager-6448dc88d8-cnd9q\" (UID: \"4cc14de2-59e8-49e1-9eeb-f87c5e9d8a75\") " pod="openshift-controller-manager/controller-manager-6448dc88d8-cnd9q"
Mar 18 08:49:49.443205 master-0 kubenswrapper[7620]: I0318 08:49:49.439774 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/04e23989-853e-4b49-ba0f-1961d64ae3a3-client-ca\") pod \"route-controller-manager-75749f878-qxnvp\" (UID: \"04e23989-853e-4b49-ba0f-1961d64ae3a3\") " pod="openshift-route-controller-manager/route-controller-manager-75749f878-qxnvp"
Mar 18 08:49:49.443205 master-0 kubenswrapper[7620]: I0318 08:49:49.440908 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/04e23989-853e-4b49-ba0f-1961d64ae3a3-client-ca\") pod \"route-controller-manager-75749f878-qxnvp\" (UID: \"04e23989-853e-4b49-ba0f-1961d64ae3a3\") "
pod="openshift-route-controller-manager/route-controller-manager-75749f878-qxnvp"
Mar 18 08:49:49.443205 master-0 kubenswrapper[7620]: I0318 08:49:49.442962 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4cc14de2-59e8-49e1-9eeb-f87c5e9d8a75-config\") pod \"controller-manager-6448dc88d8-cnd9q\" (UID: \"4cc14de2-59e8-49e1-9eeb-f87c5e9d8a75\") " pod="openshift-controller-manager/controller-manager-6448dc88d8-cnd9q"
Mar 18 08:49:49.443584 master-0 kubenswrapper[7620]: I0318 08:49:49.443219 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4cc14de2-59e8-49e1-9eeb-f87c5e9d8a75-proxy-ca-bundles\") pod \"controller-manager-6448dc88d8-cnd9q\" (UID: \"4cc14de2-59e8-49e1-9eeb-f87c5e9d8a75\") " pod="openshift-controller-manager/controller-manager-6448dc88d8-cnd9q"
Mar 18 08:49:49.444714 master-0 kubenswrapper[7620]: I0318 08:49:49.443907 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4cc14de2-59e8-49e1-9eeb-f87c5e9d8a75-client-ca\") pod \"controller-manager-6448dc88d8-cnd9q\" (UID: \"4cc14de2-59e8-49e1-9eeb-f87c5e9d8a75\") " pod="openshift-controller-manager/controller-manager-6448dc88d8-cnd9q"
Mar 18 08:49:49.444714 master-0 kubenswrapper[7620]: I0318 08:49:49.444651 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/04e23989-853e-4b49-ba0f-1961d64ae3a3-config\") pod \"route-controller-manager-75749f878-qxnvp\" (UID: \"04e23989-853e-4b49-ba0f-1961d64ae3a3\") " pod="openshift-route-controller-manager/route-controller-manager-75749f878-qxnvp"
Mar 18 08:49:49.449532 master-0 kubenswrapper[7620]: I0318 08:49:49.446813 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName:
\"kubernetes.io/secret/4cc14de2-59e8-49e1-9eeb-f87c5e9d8a75-serving-cert\") pod \"controller-manager-6448dc88d8-cnd9q\" (UID: \"4cc14de2-59e8-49e1-9eeb-f87c5e9d8a75\") " pod="openshift-controller-manager/controller-manager-6448dc88d8-cnd9q" Mar 18 08:49:49.463802 master-0 kubenswrapper[7620]: I0318 08:49:49.458775 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2m5wf\" (UniqueName: \"kubernetes.io/projected/4cc14de2-59e8-49e1-9eeb-f87c5e9d8a75-kube-api-access-2m5wf\") pod \"controller-manager-6448dc88d8-cnd9q\" (UID: \"4cc14de2-59e8-49e1-9eeb-f87c5e9d8a75\") " pod="openshift-controller-manager/controller-manager-6448dc88d8-cnd9q" Mar 18 08:49:49.463802 master-0 kubenswrapper[7620]: I0318 08:49:49.459446 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/04e23989-853e-4b49-ba0f-1961d64ae3a3-serving-cert\") pod \"route-controller-manager-75749f878-qxnvp\" (UID: \"04e23989-853e-4b49-ba0f-1961d64ae3a3\") " pod="openshift-route-controller-manager/route-controller-manager-75749f878-qxnvp" Mar 18 08:49:49.470634 master-0 kubenswrapper[7620]: I0318 08:49:49.470598 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qwsfl\" (UniqueName: \"kubernetes.io/projected/04e23989-853e-4b49-ba0f-1961d64ae3a3-kube-api-access-qwsfl\") pod \"route-controller-manager-75749f878-qxnvp\" (UID: \"04e23989-853e-4b49-ba0f-1961d64ae3a3\") " pod="openshift-route-controller-manager/route-controller-manager-75749f878-qxnvp" Mar 18 08:49:49.549902 master-0 kubenswrapper[7620]: I0318 08:49:49.548208 7620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-75749f878-qxnvp" Mar 18 08:49:49.593065 master-0 kubenswrapper[7620]: I0318 08:49:49.573196 7620 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6448dc88d8-cnd9q"
Mar 18 08:49:49.700257 master-0 kubenswrapper[7620]: I0318 08:49:49.700216 7620 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-ffks8"]
Mar 18 08:49:49.701213 master-0 kubenswrapper[7620]: I0318 08:49:49.701191 7620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ffks8"
Mar 18 08:49:49.732679 master-0 kubenswrapper[7620]: I0318 08:49:49.732590 7620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-ffks8"]
Mar 18 08:49:49.810380 master-0 kubenswrapper[7620]: I0318 08:49:49.810289 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-ck7b5" event={"ID":"b35ab145-16a7-4ef1-86e8-0afb6ff469fd","Type":"ContainerStarted","Data":"0cdcdcd2ccccdebd6503233827667ed7ce6f4654db0dc10c48bcf238245e2d46"}
Mar 18 08:49:49.812919 master-0 kubenswrapper[7620]: I0318 08:49:49.812872 7620 generic.go:334] "Generic (PLEG): container finished" podID="833eeb49-a463-432a-a684-a27c66ecae7d" containerID="f950fd1dfcd2c46d560ce00f1e2b44e70601dab057e70cf84c3cbc718a9920c0" exitCode=0
Mar 18 08:49:49.812996 master-0 kubenswrapper[7620]: I0318 08:49:49.812967 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6m4q6" event={"ID":"833eeb49-a463-432a-a684-a27c66ecae7d","Type":"ContainerDied","Data":"f950fd1dfcd2c46d560ce00f1e2b44e70601dab057e70cf84c3cbc718a9920c0"}
Mar 18 08:49:49.813050 master-0 kubenswrapper[7620]: I0318 08:49:49.813018 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6m4q6" event={"ID":"833eeb49-a463-432a-a684-a27c66ecae7d","Type":"ContainerStarted","Data":"2faccda4af6f07d470c0a6a5d3b97da84b97a7597f4e71f78d12a05ba633ee32"}
Mar 18 08:49:49.817752 master-0 kubenswrapper[7620]: I0318 08:49:49.817715 7620
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-zwl77" event={"ID":"68465463-5f2a-4e74-9c34-2706a185f7ea","Type":"ContainerStarted","Data":"74894e42744c1e1222eeb630320365e192ff36ba2813c6f644b991266bbb74f7"}
Mar 18 08:49:49.817806 master-0 kubenswrapper[7620]: I0318 08:49:49.817755 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-zwl77" event={"ID":"68465463-5f2a-4e74-9c34-2706a185f7ea","Type":"ContainerStarted","Data":"156dd659cded87fed4f4d9c1948aa273d3ce5df8a947527d51220517f67ececc"}
Mar 18 08:49:49.820282 master-0 kubenswrapper[7620]: I0318 08:49:49.820246 7620 generic.go:334] "Generic (PLEG): container finished" podID="95843eb5-33bc-48e8-afc4-a0bd8c524e24" containerID="6db6cc72dff8a4c58675032fad1afd316f02d7468d346af6104e95e0c8d8fce4" exitCode=0
Mar 18 08:49:49.821204 master-0 kubenswrapper[7620]: I0318 08:49:49.821127 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xfq8l" event={"ID":"95843eb5-33bc-48e8-afc4-a0bd8c524e24","Type":"ContainerDied","Data":"6db6cc72dff8a4c58675032fad1afd316f02d7468d346af6104e95e0c8d8fce4"}
Mar 18 08:49:49.821245 master-0 kubenswrapper[7620]: I0318 08:49:49.821231 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xfq8l" event={"ID":"95843eb5-33bc-48e8-afc4-a0bd8c524e24","Type":"ContainerStarted","Data":"8cf6cb239318c19f00c4b102b3d88701d2d35a1bad35017ce524b3c32233b02f"}
Mar 18 08:49:49.846998 master-0 kubenswrapper[7620]: I0318 08:49:49.846844 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d5c7ffb1-a1ab-4ca1-bdae-bcb09a759591-catalog-content\") pod \"redhat-operators-ffks8\" (UID: \"d5c7ffb1-a1ab-4ca1-bdae-bcb09a759591\") " pod="openshift-marketplace/redhat-operators-ffks8"
Mar 18 08:49:49.846998 master-0 kubenswrapper[7620]: I0318 08:49:49.846987
7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d5c7ffb1-a1ab-4ca1-bdae-bcb09a759591-utilities\") pod \"redhat-operators-ffks8\" (UID: \"d5c7ffb1-a1ab-4ca1-bdae-bcb09a759591\") " pod="openshift-marketplace/redhat-operators-ffks8"
Mar 18 08:49:49.847116 master-0 kubenswrapper[7620]: I0318 08:49:49.847018 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8qpdl\" (UniqueName: \"kubernetes.io/projected/d5c7ffb1-a1ab-4ca1-bdae-bcb09a759591-kube-api-access-8qpdl\") pod \"redhat-operators-ffks8\" (UID: \"d5c7ffb1-a1ab-4ca1-bdae-bcb09a759591\") " pod="openshift-marketplace/redhat-operators-ffks8"
Mar 18 08:49:49.883048 master-0 kubenswrapper[7620]: I0318 08:49:49.882992 7620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-zwl77" podStartSLOduration=1.882969288 podStartE2EDuration="1.882969288s" podCreationTimestamp="2026-03-18 08:49:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 08:49:49.853889965 +0000 UTC m=+53.848671727" watchObservedRunningTime="2026-03-18 08:49:49.882969288 +0000 UTC m=+53.877751040"
Mar 18 08:49:49.895010 master-0 kubenswrapper[7620]: I0318 08:49:49.894976 7620 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-5f48d895dc-ttr9f"]
Mar 18 08:49:49.896647 master-0 kubenswrapper[7620]: I0318 08:49:49.896507 7620 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-5f48d895dc-ttr9f"
Mar 18 08:49:49.905433 master-0 kubenswrapper[7620]: I0318 08:49:49.905279 7620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert"
Mar 18 08:49:49.909604 master-0 kubenswrapper[7620]: I0318 08:49:49.907675 7620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-5f48d895dc-ttr9f"]
Mar 18 08:49:49.949159 master-0 kubenswrapper[7620]: I0318 08:49:49.949001 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d5c7ffb1-a1ab-4ca1-bdae-bcb09a759591-utilities\") pod \"redhat-operators-ffks8\" (UID: \"d5c7ffb1-a1ab-4ca1-bdae-bcb09a759591\") " pod="openshift-marketplace/redhat-operators-ffks8"
Mar 18 08:49:49.949159 master-0 kubenswrapper[7620]: I0318 08:49:49.949068 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8qpdl\" (UniqueName: \"kubernetes.io/projected/d5c7ffb1-a1ab-4ca1-bdae-bcb09a759591-kube-api-access-8qpdl\") pod \"redhat-operators-ffks8\" (UID: \"d5c7ffb1-a1ab-4ca1-bdae-bcb09a759591\") " pod="openshift-marketplace/redhat-operators-ffks8"
Mar 18 08:49:49.949412 master-0 kubenswrapper[7620]: I0318 08:49:49.949206 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d5c7ffb1-a1ab-4ca1-bdae-bcb09a759591-catalog-content\") pod \"redhat-operators-ffks8\" (UID: \"d5c7ffb1-a1ab-4ca1-bdae-bcb09a759591\") " pod="openshift-marketplace/redhat-operators-ffks8"
Mar 18 08:49:49.950171 master-0 kubenswrapper[7620]: I0318 08:49:49.950122 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d5c7ffb1-a1ab-4ca1-bdae-bcb09a759591-utilities\") pod
\"redhat-operators-ffks8\" (UID: \"d5c7ffb1-a1ab-4ca1-bdae-bcb09a759591\") " pod="openshift-marketplace/redhat-operators-ffks8" Mar 18 08:49:49.952302 master-0 kubenswrapper[7620]: I0318 08:49:49.952261 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d5c7ffb1-a1ab-4ca1-bdae-bcb09a759591-catalog-content\") pod \"redhat-operators-ffks8\" (UID: \"d5c7ffb1-a1ab-4ca1-bdae-bcb09a759591\") " pod="openshift-marketplace/redhat-operators-ffks8" Mar 18 08:49:49.970695 master-0 kubenswrapper[7620]: I0318 08:49:49.970635 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8qpdl\" (UniqueName: \"kubernetes.io/projected/d5c7ffb1-a1ab-4ca1-bdae-bcb09a759591-kube-api-access-8qpdl\") pod \"redhat-operators-ffks8\" (UID: \"d5c7ffb1-a1ab-4ca1-bdae-bcb09a759591\") " pod="openshift-marketplace/redhat-operators-ffks8" Mar 18 08:49:50.024173 master-0 kubenswrapper[7620]: I0318 08:49:50.024103 7620 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-ffks8"
Mar 18 08:49:50.044010 master-0 kubenswrapper[7620]: I0318 08:49:50.042405 7620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-75749f878-qxnvp"]
Mar 18 08:49:50.051070 master-0 kubenswrapper[7620]: I0318 08:49:50.051017 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/1794b726-5c0d-4a72-8ddd-418a2cbd8ded-tmpfs\") pod \"packageserver-5f48d895dc-ttr9f\" (UID: \"1794b726-5c0d-4a72-8ddd-418a2cbd8ded\") " pod="openshift-operator-lifecycle-manager/packageserver-5f48d895dc-ttr9f"
Mar 18 08:49:50.051144 master-0 kubenswrapper[7620]: I0318 08:49:50.051092 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/1794b726-5c0d-4a72-8ddd-418a2cbd8ded-webhook-cert\") pod \"packageserver-5f48d895dc-ttr9f\" (UID: \"1794b726-5c0d-4a72-8ddd-418a2cbd8ded\") " pod="openshift-operator-lifecycle-manager/packageserver-5f48d895dc-ttr9f"
Mar 18 08:49:50.051314 master-0 kubenswrapper[7620]: I0318 08:49:50.051266 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/1794b726-5c0d-4a72-8ddd-418a2cbd8ded-apiservice-cert\") pod \"packageserver-5f48d895dc-ttr9f\" (UID: \"1794b726-5c0d-4a72-8ddd-418a2cbd8ded\") " pod="openshift-operator-lifecycle-manager/packageserver-5f48d895dc-ttr9f"
Mar 18 08:49:50.051423 master-0 kubenswrapper[7620]: I0318 08:49:50.051398 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gjq4w\" (UniqueName: \"kubernetes.io/projected/1794b726-5c0d-4a72-8ddd-418a2cbd8ded-kube-api-access-gjq4w\") pod \"packageserver-5f48d895dc-ttr9f\" (UID:
\"1794b726-5c0d-4a72-8ddd-418a2cbd8ded\") " pod="openshift-operator-lifecycle-manager/packageserver-5f48d895dc-ttr9f" Mar 18 08:49:50.056962 master-0 kubenswrapper[7620]: W0318 08:49:50.056915 7620 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod04e23989_853e_4b49_ba0f_1961d64ae3a3.slice/crio-35bb7224fe9eca618f0100241589daaf5b90ad54413934d086e067f2a229eae2 WatchSource:0}: Error finding container 35bb7224fe9eca618f0100241589daaf5b90ad54413934d086e067f2a229eae2: Status 404 returned error can't find the container with id 35bb7224fe9eca618f0100241589daaf5b90ad54413934d086e067f2a229eae2 Mar 18 08:49:50.124502 master-0 kubenswrapper[7620]: I0318 08:49:50.124453 7620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6448dc88d8-cnd9q"] Mar 18 08:49:50.154637 master-0 kubenswrapper[7620]: I0318 08:49:50.154527 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/1794b726-5c0d-4a72-8ddd-418a2cbd8ded-webhook-cert\") pod \"packageserver-5f48d895dc-ttr9f\" (UID: \"1794b726-5c0d-4a72-8ddd-418a2cbd8ded\") " pod="openshift-operator-lifecycle-manager/packageserver-5f48d895dc-ttr9f" Mar 18 08:49:50.154637 master-0 kubenswrapper[7620]: I0318 08:49:50.154583 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/1794b726-5c0d-4a72-8ddd-418a2cbd8ded-apiservice-cert\") pod \"packageserver-5f48d895dc-ttr9f\" (UID: \"1794b726-5c0d-4a72-8ddd-418a2cbd8ded\") " pod="openshift-operator-lifecycle-manager/packageserver-5f48d895dc-ttr9f" Mar 18 08:49:50.154637 master-0 kubenswrapper[7620]: I0318 08:49:50.154618 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gjq4w\" (UniqueName: 
\"kubernetes.io/projected/1794b726-5c0d-4a72-8ddd-418a2cbd8ded-kube-api-access-gjq4w\") pod \"packageserver-5f48d895dc-ttr9f\" (UID: \"1794b726-5c0d-4a72-8ddd-418a2cbd8ded\") " pod="openshift-operator-lifecycle-manager/packageserver-5f48d895dc-ttr9f" Mar 18 08:49:50.155766 master-0 kubenswrapper[7620]: I0318 08:49:50.155568 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/1794b726-5c0d-4a72-8ddd-418a2cbd8ded-tmpfs\") pod \"packageserver-5f48d895dc-ttr9f\" (UID: \"1794b726-5c0d-4a72-8ddd-418a2cbd8ded\") " pod="openshift-operator-lifecycle-manager/packageserver-5f48d895dc-ttr9f" Mar 18 08:49:50.157407 master-0 kubenswrapper[7620]: I0318 08:49:50.156940 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/1794b726-5c0d-4a72-8ddd-418a2cbd8ded-tmpfs\") pod \"packageserver-5f48d895dc-ttr9f\" (UID: \"1794b726-5c0d-4a72-8ddd-418a2cbd8ded\") " pod="openshift-operator-lifecycle-manager/packageserver-5f48d895dc-ttr9f" Mar 18 08:49:50.158925 master-0 kubenswrapper[7620]: I0318 08:49:50.158898 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/1794b726-5c0d-4a72-8ddd-418a2cbd8ded-apiservice-cert\") pod \"packageserver-5f48d895dc-ttr9f\" (UID: \"1794b726-5c0d-4a72-8ddd-418a2cbd8ded\") " pod="openshift-operator-lifecycle-manager/packageserver-5f48d895dc-ttr9f" Mar 18 08:49:50.159364 master-0 kubenswrapper[7620]: I0318 08:49:50.159326 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/1794b726-5c0d-4a72-8ddd-418a2cbd8ded-webhook-cert\") pod \"packageserver-5f48d895dc-ttr9f\" (UID: \"1794b726-5c0d-4a72-8ddd-418a2cbd8ded\") " pod="openshift-operator-lifecycle-manager/packageserver-5f48d895dc-ttr9f" Mar 18 08:49:50.170972 master-0 kubenswrapper[7620]: I0318 08:49:50.170900 7620 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gjq4w\" (UniqueName: \"kubernetes.io/projected/1794b726-5c0d-4a72-8ddd-418a2cbd8ded-kube-api-access-gjq4w\") pod \"packageserver-5f48d895dc-ttr9f\" (UID: \"1794b726-5c0d-4a72-8ddd-418a2cbd8ded\") " pod="openshift-operator-lifecycle-manager/packageserver-5f48d895dc-ttr9f"
Mar 18 08:49:50.234686 master-0 kubenswrapper[7620]: I0318 08:49:50.234642 7620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-5f48d895dc-ttr9f"
Mar 18 08:49:50.244237 master-0 kubenswrapper[7620]: I0318 08:49:50.244197 7620 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="56715c8c-c4dd-4912-b955-607a312bfcb6" path="/var/lib/kubelet/pods/56715c8c-c4dd-4912-b955-607a312bfcb6/volumes"
Mar 18 08:49:50.244932 master-0 kubenswrapper[7620]: I0318 08:49:50.244908 7620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-ffks8"]
Mar 18 08:49:50.260798 master-0 kubenswrapper[7620]: W0318 08:49:50.260731 7620 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd5c7ffb1_a1ab_4ca1_bdae_bcb09a759591.slice/crio-5b0a9cb3c6ea40ca8b169ff889d974944c80451d88a25b4f11d65fd85e8f1627 WatchSource:0}: Error finding container 5b0a9cb3c6ea40ca8b169ff889d974944c80451d88a25b4f11d65fd85e8f1627: Status 404 returned error can't find the container with id 5b0a9cb3c6ea40ca8b169ff889d974944c80451d88a25b4f11d65fd85e8f1627
Mar 18 08:49:50.690058 master-0 kubenswrapper[7620]: I0318 08:49:50.689936 7620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-5f48d895dc-ttr9f"]
Mar 18 08:49:50.833862 master-0 kubenswrapper[7620]: I0318 08:49:50.833780 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ffks8"
event={"ID":"d5c7ffb1-a1ab-4ca1-bdae-bcb09a759591","Type":"ContainerStarted","Data":"8f73c66e3d2e30883b323572b51c60a5caa86244687bef040fad895b7640bad7"}
Mar 18 08:49:50.833862 master-0 kubenswrapper[7620]: I0318 08:49:50.833835 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ffks8" event={"ID":"d5c7ffb1-a1ab-4ca1-bdae-bcb09a759591","Type":"ContainerStarted","Data":"5b0a9cb3c6ea40ca8b169ff889d974944c80451d88a25b4f11d65fd85e8f1627"}
Mar 18 08:49:50.853826 master-0 kubenswrapper[7620]: I0318 08:49:50.853172 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-75749f878-qxnvp" event={"ID":"04e23989-853e-4b49-ba0f-1961d64ae3a3","Type":"ContainerStarted","Data":"750047c7c110d6b292474a23cfe2eb52c226ed85a95e9ef9327042e06e4908dc"}
Mar 18 08:49:50.853826 master-0 kubenswrapper[7620]: I0318 08:49:50.853230 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-75749f878-qxnvp" event={"ID":"04e23989-853e-4b49-ba0f-1961d64ae3a3","Type":"ContainerStarted","Data":"35bb7224fe9eca618f0100241589daaf5b90ad54413934d086e067f2a229eae2"}
Mar 18 08:49:50.854982 master-0 kubenswrapper[7620]: I0318 08:49:50.854272 7620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-75749f878-qxnvp"
Mar 18 08:49:50.860656 master-0 kubenswrapper[7620]: I0318 08:49:50.860549 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6448dc88d8-cnd9q" event={"ID":"4cc14de2-59e8-49e1-9eeb-f87c5e9d8a75","Type":"ContainerStarted","Data":"f95c3ae9a15c386971b5456139d5edf2668059a7f470b16505d0edd6a91106f8"}
Mar 18 08:49:50.860801 master-0 kubenswrapper[7620]: I0318 08:49:50.860758 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6448dc88d8-cnd9q"
event={"ID":"4cc14de2-59e8-49e1-9eeb-f87c5e9d8a75","Type":"ContainerStarted","Data":"d52b6a2cf90645c7d7adbd4e26631b5105d0e2c63496bcbe09fc57752e328d79"}
Mar 18 08:49:50.861430 master-0 kubenswrapper[7620]: I0318 08:49:50.861408 7620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-6448dc88d8-cnd9q"
Mar 18 08:49:50.861471 master-0 kubenswrapper[7620]: I0318 08:49:50.861460 7620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-75749f878-qxnvp"
Mar 18 08:49:50.874939 master-0 kubenswrapper[7620]: I0318 08:49:50.874483 7620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-6448dc88d8-cnd9q"
Mar 18 08:49:50.891119 master-0 kubenswrapper[7620]: I0318 08:49:50.889179 7620 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-vgplg"]
Mar 18 08:49:50.891119 master-0 kubenswrapper[7620]: I0318 08:49:50.890163 7620 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-marketplace/certified-operators-vgplg"
Mar 18 08:49:50.905072 master-0 kubenswrapper[7620]: I0318 08:49:50.903775 7620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-6448dc88d8-cnd9q" podStartSLOduration=10.90375862 podStartE2EDuration="10.90375862s" podCreationTimestamp="2026-03-18 08:49:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 08:49:50.902347489 +0000 UTC m=+54.897129251" watchObservedRunningTime="2026-03-18 08:49:50.90375862 +0000 UTC m=+54.898540372"
Mar 18 08:49:50.909871 master-0 kubenswrapper[7620]: I0318 08:49:50.907591 7620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vgplg"]
Mar 18 08:49:50.929642 master-0 kubenswrapper[7620]: I0318 08:49:50.929566 7620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-75749f878-qxnvp" podStartSLOduration=10.929544676999999 podStartE2EDuration="10.929544677s" podCreationTimestamp="2026-03-18 08:49:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 08:49:50.928782135 +0000 UTC m=+54.923563887" watchObservedRunningTime="2026-03-18 08:49:50.929544677 +0000 UTC m=+54.924326429"
Mar 18 08:49:50.968256 master-0 kubenswrapper[7620]: I0318 08:49:50.968145 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d72cacbe-f050-4b00-b20d-6e3c800db5e3-catalog-content\") pod \"certified-operators-vgplg\" (UID: \"d72cacbe-f050-4b00-b20d-6e3c800db5e3\") " pod="openshift-marketplace/certified-operators-vgplg"
Mar 18 08:49:50.968256 master-0 kubenswrapper[7620]: I0318 08:49:50.968203 7620
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d72cacbe-f050-4b00-b20d-6e3c800db5e3-utilities\") pod \"certified-operators-vgplg\" (UID: \"d72cacbe-f050-4b00-b20d-6e3c800db5e3\") " pod="openshift-marketplace/certified-operators-vgplg" Mar 18 08:49:50.968503 master-0 kubenswrapper[7620]: I0318 08:49:50.968284 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pl6qt\" (UniqueName: \"kubernetes.io/projected/d72cacbe-f050-4b00-b20d-6e3c800db5e3-kube-api-access-pl6qt\") pod \"certified-operators-vgplg\" (UID: \"d72cacbe-f050-4b00-b20d-6e3c800db5e3\") " pod="openshift-marketplace/certified-operators-vgplg" Mar 18 08:49:51.072534 master-0 kubenswrapper[7620]: I0318 08:49:51.072447 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pl6qt\" (UniqueName: \"kubernetes.io/projected/d72cacbe-f050-4b00-b20d-6e3c800db5e3-kube-api-access-pl6qt\") pod \"certified-operators-vgplg\" (UID: \"d72cacbe-f050-4b00-b20d-6e3c800db5e3\") " pod="openshift-marketplace/certified-operators-vgplg" Mar 18 08:49:51.072798 master-0 kubenswrapper[7620]: I0318 08:49:51.072585 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d72cacbe-f050-4b00-b20d-6e3c800db5e3-catalog-content\") pod \"certified-operators-vgplg\" (UID: \"d72cacbe-f050-4b00-b20d-6e3c800db5e3\") " pod="openshift-marketplace/certified-operators-vgplg" Mar 18 08:49:51.072798 master-0 kubenswrapper[7620]: I0318 08:49:51.072788 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d72cacbe-f050-4b00-b20d-6e3c800db5e3-utilities\") pod \"certified-operators-vgplg\" (UID: \"d72cacbe-f050-4b00-b20d-6e3c800db5e3\") " 
pod="openshift-marketplace/certified-operators-vgplg" Mar 18 08:49:51.074313 master-0 kubenswrapper[7620]: I0318 08:49:51.074291 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d72cacbe-f050-4b00-b20d-6e3c800db5e3-utilities\") pod \"certified-operators-vgplg\" (UID: \"d72cacbe-f050-4b00-b20d-6e3c800db5e3\") " pod="openshift-marketplace/certified-operators-vgplg" Mar 18 08:49:51.074552 master-0 kubenswrapper[7620]: I0318 08:49:51.074504 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d72cacbe-f050-4b00-b20d-6e3c800db5e3-catalog-content\") pod \"certified-operators-vgplg\" (UID: \"d72cacbe-f050-4b00-b20d-6e3c800db5e3\") " pod="openshift-marketplace/certified-operators-vgplg" Mar 18 08:49:51.113883 master-0 kubenswrapper[7620]: I0318 08:49:51.113573 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pl6qt\" (UniqueName: \"kubernetes.io/projected/d72cacbe-f050-4b00-b20d-6e3c800db5e3-kube-api-access-pl6qt\") pod \"certified-operators-vgplg\" (UID: \"d72cacbe-f050-4b00-b20d-6e3c800db5e3\") " pod="openshift-marketplace/certified-operators-vgplg" Mar 18 08:49:51.229299 master-0 kubenswrapper[7620]: I0318 08:49:51.229112 7620 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-vgplg" Mar 18 08:49:52.239926 master-0 kubenswrapper[7620]: W0318 08:49:52.239842 7620 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1794b726_5c0d_4a72_8ddd_418a2cbd8ded.slice/crio-21254471a19094b73e6733114f96329319386cc402e4cbd645f5a024b798fc80 WatchSource:0}: Error finding container 21254471a19094b73e6733114f96329319386cc402e4cbd645f5a024b798fc80: Status 404 returned error can't find the container with id 21254471a19094b73e6733114f96329319386cc402e4cbd645f5a024b798fc80 Mar 18 08:49:52.873567 master-0 kubenswrapper[7620]: I0318 08:49:52.873482 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-5f48d895dc-ttr9f" event={"ID":"1794b726-5c0d-4a72-8ddd-418a2cbd8ded","Type":"ContainerStarted","Data":"87a768540de89682a72038e0744fee798fed46fc8d4ed2677498fd357d6e4051"} Mar 18 08:49:52.873996 master-0 kubenswrapper[7620]: I0318 08:49:52.873976 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-5f48d895dc-ttr9f" event={"ID":"1794b726-5c0d-4a72-8ddd-418a2cbd8ded","Type":"ContainerStarted","Data":"21254471a19094b73e6733114f96329319386cc402e4cbd645f5a024b798fc80"} Mar 18 08:49:52.874969 master-0 kubenswrapper[7620]: I0318 08:49:52.874718 7620 generic.go:334] "Generic (PLEG): container finished" podID="d5c7ffb1-a1ab-4ca1-bdae-bcb09a759591" containerID="8f73c66e3d2e30883b323572b51c60a5caa86244687bef040fad895b7640bad7" exitCode=0 Mar 18 08:49:52.874969 master-0 kubenswrapper[7620]: I0318 08:49:52.874789 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ffks8" event={"ID":"d5c7ffb1-a1ab-4ca1-bdae-bcb09a759591","Type":"ContainerDied","Data":"8f73c66e3d2e30883b323572b51c60a5caa86244687bef040fad895b7640bad7"} Mar 18 08:49:53.883053 master-0 kubenswrapper[7620]: I0318 
08:49:53.882429 7620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-5f48d895dc-ttr9f" Mar 18 08:49:54.464805 master-0 kubenswrapper[7620]: I0318 08:49:54.464026 7620 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-6cb57bb5db-sxx69"] Mar 18 08:49:54.465581 master-0 kubenswrapper[7620]: I0318 08:49:54.465527 7620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-6cb57bb5db-sxx69" Mar 18 08:49:54.468435 master-0 kubenswrapper[7620]: I0318 08:49:54.468374 7620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vgplg"] Mar 18 08:49:54.474512 master-0 kubenswrapper[7620]: I0318 08:49:54.474469 7620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Mar 18 08:49:54.474512 master-0 kubenswrapper[7620]: I0318 08:49:54.474490 7620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-s7cph" Mar 18 08:49:54.474512 master-0 kubenswrapper[7620]: I0318 08:49:54.474519 7620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Mar 18 08:49:54.474832 master-0 kubenswrapper[7620]: I0318 08:49:54.474492 7620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Mar 18 08:49:54.474832 master-0 kubenswrapper[7620]: I0318 08:49:54.474664 7620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Mar 18 08:49:54.474832 master-0 kubenswrapper[7620]: I0318 08:49:54.474724 7620 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-cluster-machine-approver"/"machine-approver-config" Mar 18 08:49:54.524893 master-0 kubenswrapper[7620]: I0318 08:49:54.524786 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/5956076c-a98f-4846-9a68-81c18211a5c8-machine-approver-tls\") pod \"machine-approver-6cb57bb5db-sxx69\" (UID: \"5956076c-a98f-4846-9a68-81c18211a5c8\") " pod="openshift-cluster-machine-approver/machine-approver-6cb57bb5db-sxx69" Mar 18 08:49:54.524893 master-0 kubenswrapper[7620]: I0318 08:49:54.524840 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jf9qq\" (UniqueName: \"kubernetes.io/projected/5956076c-a98f-4846-9a68-81c18211a5c8-kube-api-access-jf9qq\") pod \"machine-approver-6cb57bb5db-sxx69\" (UID: \"5956076c-a98f-4846-9a68-81c18211a5c8\") " pod="openshift-cluster-machine-approver/machine-approver-6cb57bb5db-sxx69" Mar 18 08:49:54.524893 master-0 kubenswrapper[7620]: I0318 08:49:54.524884 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/5956076c-a98f-4846-9a68-81c18211a5c8-auth-proxy-config\") pod \"machine-approver-6cb57bb5db-sxx69\" (UID: \"5956076c-a98f-4846-9a68-81c18211a5c8\") " pod="openshift-cluster-machine-approver/machine-approver-6cb57bb5db-sxx69" Mar 18 08:49:54.525308 master-0 kubenswrapper[7620]: I0318 08:49:54.524945 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5956076c-a98f-4846-9a68-81c18211a5c8-config\") pod \"machine-approver-6cb57bb5db-sxx69\" (UID: \"5956076c-a98f-4846-9a68-81c18211a5c8\") " pod="openshift-cluster-machine-approver/machine-approver-6cb57bb5db-sxx69" Mar 18 08:49:54.626830 master-0 kubenswrapper[7620]: I0318 08:49:54.626751 7620 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jf9qq\" (UniqueName: \"kubernetes.io/projected/5956076c-a98f-4846-9a68-81c18211a5c8-kube-api-access-jf9qq\") pod \"machine-approver-6cb57bb5db-sxx69\" (UID: \"5956076c-a98f-4846-9a68-81c18211a5c8\") " pod="openshift-cluster-machine-approver/machine-approver-6cb57bb5db-sxx69" Mar 18 08:49:54.627251 master-0 kubenswrapper[7620]: I0318 08:49:54.627213 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/5956076c-a98f-4846-9a68-81c18211a5c8-auth-proxy-config\") pod \"machine-approver-6cb57bb5db-sxx69\" (UID: \"5956076c-a98f-4846-9a68-81c18211a5c8\") " pod="openshift-cluster-machine-approver/machine-approver-6cb57bb5db-sxx69" Mar 18 08:49:54.627472 master-0 kubenswrapper[7620]: I0318 08:49:54.627445 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5956076c-a98f-4846-9a68-81c18211a5c8-config\") pod \"machine-approver-6cb57bb5db-sxx69\" (UID: \"5956076c-a98f-4846-9a68-81c18211a5c8\") " pod="openshift-cluster-machine-approver/machine-approver-6cb57bb5db-sxx69" Mar 18 08:49:54.627701 master-0 kubenswrapper[7620]: I0318 08:49:54.627676 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/5956076c-a98f-4846-9a68-81c18211a5c8-machine-approver-tls\") pod \"machine-approver-6cb57bb5db-sxx69\" (UID: \"5956076c-a98f-4846-9a68-81c18211a5c8\") " pod="openshift-cluster-machine-approver/machine-approver-6cb57bb5db-sxx69" Mar 18 08:49:54.628056 master-0 kubenswrapper[7620]: I0318 08:49:54.628000 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/5956076c-a98f-4846-9a68-81c18211a5c8-auth-proxy-config\") pod \"machine-approver-6cb57bb5db-sxx69\" (UID: 
\"5956076c-a98f-4846-9a68-81c18211a5c8\") " pod="openshift-cluster-machine-approver/machine-approver-6cb57bb5db-sxx69" Mar 18 08:49:54.628275 master-0 kubenswrapper[7620]: I0318 08:49:54.628218 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5956076c-a98f-4846-9a68-81c18211a5c8-config\") pod \"machine-approver-6cb57bb5db-sxx69\" (UID: \"5956076c-a98f-4846-9a68-81c18211a5c8\") " pod="openshift-cluster-machine-approver/machine-approver-6cb57bb5db-sxx69" Mar 18 08:49:54.633557 master-0 kubenswrapper[7620]: I0318 08:49:54.633510 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/5956076c-a98f-4846-9a68-81c18211a5c8-machine-approver-tls\") pod \"machine-approver-6cb57bb5db-sxx69\" (UID: \"5956076c-a98f-4846-9a68-81c18211a5c8\") " pod="openshift-cluster-machine-approver/machine-approver-6cb57bb5db-sxx69" Mar 18 08:49:54.636708 master-0 kubenswrapper[7620]: W0318 08:49:54.636655 7620 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd72cacbe_f050_4b00_b20d_6e3c800db5e3.slice/crio-9da72a97eb2b299f530fe3886d783b1eae63e297264297b40194bd3eb47a397a WatchSource:0}: Error finding container 9da72a97eb2b299f530fe3886d783b1eae63e297264297b40194bd3eb47a397a: Status 404 returned error can't find the container with id 9da72a97eb2b299f530fe3886d783b1eae63e297264297b40194bd3eb47a397a Mar 18 08:49:54.882668 master-0 kubenswrapper[7620]: I0318 08:49:54.882526 7620 patch_prober.go:28] interesting pod/packageserver-5f48d895dc-ttr9f container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.128.0.54:5443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 18 08:49:54.882668 master-0 kubenswrapper[7620]: I0318 08:49:54.882656 7620 
prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-5f48d895dc-ttr9f" podUID="1794b726-5c0d-4a72-8ddd-418a2cbd8ded" containerName="packageserver" probeResult="failure" output="Get \"https://10.128.0.54:5443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 18 08:49:54.890483 master-0 kubenswrapper[7620]: I0318 08:49:54.890403 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vgplg" event={"ID":"d72cacbe-f050-4b00-b20d-6e3c800db5e3","Type":"ContainerStarted","Data":"9da72a97eb2b299f530fe3886d783b1eae63e297264297b40194bd3eb47a397a"} Mar 18 08:49:55.756460 master-0 kubenswrapper[7620]: I0318 08:49:55.754382 7620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-5f48d895dc-ttr9f" podStartSLOduration=6.754343376 podStartE2EDuration="6.754343376s" podCreationTimestamp="2026-03-18 08:49:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 08:49:55.733732929 +0000 UTC m=+59.728514751" watchObservedRunningTime="2026-03-18 08:49:55.754343376 +0000 UTC m=+59.749125168" Mar 18 08:49:55.769603 master-0 kubenswrapper[7620]: I0318 08:49:55.766135 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jf9qq\" (UniqueName: \"kubernetes.io/projected/5956076c-a98f-4846-9a68-81c18211a5c8-kube-api-access-jf9qq\") pod \"machine-approver-6cb57bb5db-sxx69\" (UID: \"5956076c-a98f-4846-9a68-81c18211a5c8\") " pod="openshift-cluster-machine-approver/machine-approver-6cb57bb5db-sxx69" Mar 18 08:49:55.771913 master-0 kubenswrapper[7620]: I0318 08:49:55.770451 7620 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-controller-manager/installer-1-master-0"] Mar 18 08:49:55.772551 master-0 kubenswrapper[7620]: I0318 
08:49:55.772244 7620 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/installer-1-master-0" podUID="ace4267e-c38d-46dd-9de6-c23339729a8b" containerName="installer" containerID="cri-o://c7c35e8f88ea7cb3b4124ab73cb6f5940db3454c3992a104c973116512d26a7c" gracePeriod=30 Mar 18 08:49:55.895967 master-0 kubenswrapper[7620]: I0318 08:49:55.891247 7620 patch_prober.go:28] interesting pod/packageserver-5f48d895dc-ttr9f container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.128.0.54:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 18 08:49:55.895967 master-0 kubenswrapper[7620]: I0318 08:49:55.891340 7620 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-5f48d895dc-ttr9f" podUID="1794b726-5c0d-4a72-8ddd-418a2cbd8ded" containerName="packageserver" probeResult="failure" output="Get \"https://10.128.0.54:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 18 08:49:55.899364 master-0 kubenswrapper[7620]: I0318 08:49:55.899315 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vgplg" event={"ID":"d72cacbe-f050-4b00-b20d-6e3c800db5e3","Type":"ContainerStarted","Data":"f11de43d97f3eb0705ee274fd9f116f7e697707e7bd79e0504efdd85e51224f7"} Mar 18 08:49:55.992105 master-0 kubenswrapper[7620]: I0318 08:49:55.992045 7620 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-6cb57bb5db-sxx69" Mar 18 08:49:56.913951 master-0 kubenswrapper[7620]: I0318 08:49:56.913890 7620 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-1-master-0_e9a3f4dd-913d-4707-84c5-d64ead736f0f/installer/0.log" Mar 18 08:49:56.914509 master-0 kubenswrapper[7620]: I0318 08:49:56.913978 7620 generic.go:334] "Generic (PLEG): container finished" podID="e9a3f4dd-913d-4707-84c5-d64ead736f0f" containerID="5e0c3ea7554f76fe478ba87238a8f52a7e84e0ca4323bf58986273a5880e93c2" exitCode=1 Mar 18 08:49:56.914509 master-0 kubenswrapper[7620]: I0318 08:49:56.914061 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-1-master-0" event={"ID":"e9a3f4dd-913d-4707-84c5-d64ead736f0f","Type":"ContainerDied","Data":"5e0c3ea7554f76fe478ba87238a8f52a7e84e0ca4323bf58986273a5880e93c2"} Mar 18 08:49:56.916642 master-0 kubenswrapper[7620]: I0318 08:49:56.916607 7620 generic.go:334] "Generic (PLEG): container finished" podID="d72cacbe-f050-4b00-b20d-6e3c800db5e3" containerID="f11de43d97f3eb0705ee274fd9f116f7e697707e7bd79e0504efdd85e51224f7" exitCode=0 Mar 18 08:49:56.916726 master-0 kubenswrapper[7620]: I0318 08:49:56.916650 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vgplg" event={"ID":"d72cacbe-f050-4b00-b20d-6e3c800db5e3","Type":"ContainerDied","Data":"f11de43d97f3eb0705ee274fd9f116f7e697707e7bd79e0504efdd85e51224f7"} Mar 18 08:49:57.251866 master-0 kubenswrapper[7620]: I0318 08:49:57.251826 7620 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-1-master-0_e9a3f4dd-913d-4707-84c5-d64ead736f0f/installer/0.log" Mar 18 08:49:57.252019 master-0 kubenswrapper[7620]: I0318 08:49:57.251914 7620 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/installer-1-master-0" Mar 18 08:49:57.381960 master-0 kubenswrapper[7620]: I0318 08:49:57.381896 7620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e9a3f4dd-913d-4707-84c5-d64ead736f0f-kube-api-access\") pod \"e9a3f4dd-913d-4707-84c5-d64ead736f0f\" (UID: \"e9a3f4dd-913d-4707-84c5-d64ead736f0f\") " Mar 18 08:49:57.382173 master-0 kubenswrapper[7620]: I0318 08:49:57.382003 7620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/e9a3f4dd-913d-4707-84c5-d64ead736f0f-var-lock\") pod \"e9a3f4dd-913d-4707-84c5-d64ead736f0f\" (UID: \"e9a3f4dd-913d-4707-84c5-d64ead736f0f\") " Mar 18 08:49:57.382173 master-0 kubenswrapper[7620]: I0318 08:49:57.382068 7620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e9a3f4dd-913d-4707-84c5-d64ead736f0f-var-lock" (OuterVolumeSpecName: "var-lock") pod "e9a3f4dd-913d-4707-84c5-d64ead736f0f" (UID: "e9a3f4dd-913d-4707-84c5-d64ead736f0f"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 08:49:57.382173 master-0 kubenswrapper[7620]: I0318 08:49:57.382156 7620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e9a3f4dd-913d-4707-84c5-d64ead736f0f-kubelet-dir\") pod \"e9a3f4dd-913d-4707-84c5-d64ead736f0f\" (UID: \"e9a3f4dd-913d-4707-84c5-d64ead736f0f\") " Mar 18 08:49:57.382309 master-0 kubenswrapper[7620]: I0318 08:49:57.382237 7620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e9a3f4dd-913d-4707-84c5-d64ead736f0f-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "e9a3f4dd-913d-4707-84c5-d64ead736f0f" (UID: "e9a3f4dd-913d-4707-84c5-d64ead736f0f"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 08:49:57.382449 master-0 kubenswrapper[7620]: I0318 08:49:57.382427 7620 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e9a3f4dd-913d-4707-84c5-d64ead736f0f-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Mar 18 08:49:57.382449 master-0 kubenswrapper[7620]: I0318 08:49:57.382446 7620 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/e9a3f4dd-913d-4707-84c5-d64ead736f0f-var-lock\") on node \"master-0\" DevicePath \"\"" Mar 18 08:49:57.385421 master-0 kubenswrapper[7620]: I0318 08:49:57.385382 7620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e9a3f4dd-913d-4707-84c5-d64ead736f0f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e9a3f4dd-913d-4707-84c5-d64ead736f0f" (UID: "e9a3f4dd-913d-4707-84c5-d64ead736f0f"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 08:49:57.484132 master-0 kubenswrapper[7620]: I0318 08:49:57.484052 7620 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e9a3f4dd-913d-4707-84c5-d64ead736f0f-kube-api-access\") on node \"master-0\" DevicePath \"\"" Mar 18 08:49:57.875373 master-0 kubenswrapper[7620]: I0318 08:49:57.875292 7620 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cloud-credential-operator/cloud-credential-operator-744f9dbf77-v8ft8"] Mar 18 08:49:57.875695 master-0 kubenswrapper[7620]: E0318 08:49:57.875581 7620 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9a3f4dd-913d-4707-84c5-d64ead736f0f" containerName="installer" Mar 18 08:49:57.875695 master-0 kubenswrapper[7620]: I0318 08:49:57.875601 7620 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9a3f4dd-913d-4707-84c5-d64ead736f0f" containerName="installer" Mar 18 08:49:57.875764 master-0 kubenswrapper[7620]: I0318 08:49:57.875723 7620 memory_manager.go:354] "RemoveStaleState removing state" podUID="e9a3f4dd-913d-4707-84c5-d64ead736f0f" containerName="installer" Mar 18 08:49:57.876437 master-0 kubenswrapper[7620]: I0318 08:49:57.876396 7620 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cloud-credential-operator/cloud-credential-operator-744f9dbf77-v8ft8" Mar 18 08:49:57.882920 master-0 kubenswrapper[7620]: I0318 08:49:57.882867 7620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"kube-root-ca.crt" Mar 18 08:49:57.883054 master-0 kubenswrapper[7620]: I0318 08:49:57.882964 7620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-credential-operator"/"cloud-credential-operator-serving-cert" Mar 18 08:49:57.883054 master-0 kubenswrapper[7620]: I0318 08:49:57.882970 7620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"openshift-service-ca.crt" Mar 18 08:49:57.892413 master-0 kubenswrapper[7620]: I0318 08:49:57.892354 7620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"cco-trusted-ca" Mar 18 08:49:57.932349 master-0 kubenswrapper[7620]: I0318 08:49:57.932295 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-6cb57bb5db-sxx69" event={"ID":"5956076c-a98f-4846-9a68-81c18211a5c8","Type":"ContainerStarted","Data":"bde273c22060dd904a8cc445a9f945202e9a96fdccceff3c588852bf6a168d35"} Mar 18 08:49:57.932749 master-0 kubenswrapper[7620]: I0318 08:49:57.932357 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-6cb57bb5db-sxx69" event={"ID":"5956076c-a98f-4846-9a68-81c18211a5c8","Type":"ContainerStarted","Data":"2c446c191e6a35b6bb10e2916b38e6cd1d112507feaa55170c5bfc4a8449236e"} Mar 18 08:49:57.934598 master-0 kubenswrapper[7620]: I0318 08:49:57.934556 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-6f97756bc8-z9n9c" 
event={"ID":"d6fe8ee6-737e-438a-8d9d-1ec712f6bacf","Type":"ContainerStarted","Data":"0fd3855d3d4e49dbbbd6fbd3a0b7de23ed78bc7af2b1a5b78f4de3c1bee51d0a"} Mar 18 08:49:57.936692 master-0 kubenswrapper[7620]: I0318 08:49:57.936666 7620 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-1-master-0_e9a3f4dd-913d-4707-84c5-d64ead736f0f/installer/0.log" Mar 18 08:49:57.936770 master-0 kubenswrapper[7620]: I0318 08:49:57.936725 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-1-master-0" event={"ID":"e9a3f4dd-913d-4707-84c5-d64ead736f0f","Type":"ContainerDied","Data":"8db5165e7230354d49e216b22d1bddbbd6c0d777cfe8d00574e23d3656b914f1"} Mar 18 08:49:57.936810 master-0 kubenswrapper[7620]: I0318 08:49:57.936783 7620 scope.go:117] "RemoveContainer" containerID="5e0c3ea7554f76fe478ba87238a8f52a7e84e0ca4323bf58986273a5880e93c2" Mar 18 08:49:57.936983 master-0 kubenswrapper[7620]: I0318 08:49:57.936962 7620 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/installer-1-master-0" Mar 18 08:49:57.990942 master-0 kubenswrapper[7620]: I0318 08:49:57.990867 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloud-credential-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/e64ea71a-1e89-409a-9607-4d3cea093643-cloud-credential-operator-serving-cert\") pod \"cloud-credential-operator-744f9dbf77-v8ft8\" (UID: \"e64ea71a-1e89-409a-9607-4d3cea093643\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-744f9dbf77-v8ft8" Mar 18 08:49:57.991159 master-0 kubenswrapper[7620]: I0318 08:49:57.990974 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b689k\" (UniqueName: \"kubernetes.io/projected/e64ea71a-1e89-409a-9607-4d3cea093643-kube-api-access-b689k\") pod \"cloud-credential-operator-744f9dbf77-v8ft8\" (UID: \"e64ea71a-1e89-409a-9607-4d3cea093643\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-744f9dbf77-v8ft8" Mar 18 08:49:57.991532 master-0 kubenswrapper[7620]: I0318 08:49:57.991308 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cco-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e64ea71a-1e89-409a-9607-4d3cea093643-cco-trusted-ca\") pod \"cloud-credential-operator-744f9dbf77-v8ft8\" (UID: \"e64ea71a-1e89-409a-9607-4d3cea093643\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-744f9dbf77-v8ft8" Mar 18 08:49:58.093498 master-0 kubenswrapper[7620]: I0318 08:49:58.093424 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-credential-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/e64ea71a-1e89-409a-9607-4d3cea093643-cloud-credential-operator-serving-cert\") pod \"cloud-credential-operator-744f9dbf77-v8ft8\" (UID: \"e64ea71a-1e89-409a-9607-4d3cea093643\") " 
pod="openshift-cloud-credential-operator/cloud-credential-operator-744f9dbf77-v8ft8" Mar 18 08:49:58.093715 master-0 kubenswrapper[7620]: I0318 08:49:58.093604 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b689k\" (UniqueName: \"kubernetes.io/projected/e64ea71a-1e89-409a-9607-4d3cea093643-kube-api-access-b689k\") pod \"cloud-credential-operator-744f9dbf77-v8ft8\" (UID: \"e64ea71a-1e89-409a-9607-4d3cea093643\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-744f9dbf77-v8ft8" Mar 18 08:49:58.093715 master-0 kubenswrapper[7620]: I0318 08:49:58.093660 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cco-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e64ea71a-1e89-409a-9607-4d3cea093643-cco-trusted-ca\") pod \"cloud-credential-operator-744f9dbf77-v8ft8\" (UID: \"e64ea71a-1e89-409a-9607-4d3cea093643\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-744f9dbf77-v8ft8" Mar 18 08:49:58.094788 master-0 kubenswrapper[7620]: I0318 08:49:58.094758 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cco-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e64ea71a-1e89-409a-9607-4d3cea093643-cco-trusted-ca\") pod \"cloud-credential-operator-744f9dbf77-v8ft8\" (UID: \"e64ea71a-1e89-409a-9607-4d3cea093643\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-744f9dbf77-v8ft8" Mar 18 08:49:58.098358 master-0 kubenswrapper[7620]: I0318 08:49:58.098318 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloud-credential-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/e64ea71a-1e89-409a-9607-4d3cea093643-cloud-credential-operator-serving-cert\") pod \"cloud-credential-operator-744f9dbf77-v8ft8\" (UID: \"e64ea71a-1e89-409a-9607-4d3cea093643\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-744f9dbf77-v8ft8" Mar 18 08:49:58.298002 master-0 
kubenswrapper[7620]: I0318 08:49:58.290997 7620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cloud-credential-operator/cloud-credential-operator-744f9dbf77-v8ft8"]
Mar 18 08:49:58.821796 master-0 kubenswrapper[7620]: I0318 08:49:58.821746 7620 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/installer-2-master-0"]
Mar 18 08:49:58.823464 master-0 kubenswrapper[7620]: I0318 08:49:58.823441 7620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-2-master-0"
Mar 18 08:49:58.834536 master-0 kubenswrapper[7620]: I0318 08:49:58.834472 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b689k\" (UniqueName: \"kubernetes.io/projected/e64ea71a-1e89-409a-9607-4d3cea093643-kube-api-access-b689k\") pod \"cloud-credential-operator-744f9dbf77-v8ft8\" (UID: \"e64ea71a-1e89-409a-9607-4d3cea093643\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-744f9dbf77-v8ft8"
Mar 18 08:49:58.836738 master-0 kubenswrapper[7620]: I0318 08:49:58.836693 7620 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/cluster-baremetal-operator-6f69995874-cf6qn"]
Mar 18 08:49:58.854126 master-0 kubenswrapper[7620]: I0318 08:49:58.852467 7620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-cf6qn"
Mar 18 08:49:58.857052 master-0 kubenswrapper[7620]: I0318 08:49:58.857019 7620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-webhook-server-cert"
Mar 18 08:49:58.857258 master-0 kubenswrapper[7620]: I0318 08:49:58.857225 7620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"cluster-baremetal-operator-images"
Mar 18 08:49:58.857380 master-0 kubenswrapper[7620]: I0318 08:49:58.857347 7620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"baremetal-kube-rbac-proxy"
Mar 18 08:49:58.857503 master-0 kubenswrapper[7620]: I0318 08:49:58.857473 7620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-operator-tls"
Mar 18 08:49:58.863258 master-0 kubenswrapper[7620]: I0318 08:49:58.863195 7620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-2-master-0"]
Mar 18 08:49:58.870576 master-0 kubenswrapper[7620]: I0318 08:49:58.870440 7620 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-85f7577d78-swcvh"]
Mar 18 08:49:58.873184 master-0 kubenswrapper[7620]: I0318 08:49:58.873140 7620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-85f7577d78-swcvh"
Mar 18 08:49:58.880339 master-0 kubenswrapper[7620]: I0318 08:49:58.880288 7620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt"
Mar 18 08:49:58.880445 master-0 kubenswrapper[7620]: I0318 08:49:58.880393 7620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt"
Mar 18 08:49:58.880638 master-0 kubenswrapper[7620]: I0318 08:49:58.880615 7620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls"
Mar 18 08:49:58.880812 master-0 kubenswrapper[7620]: I0318 08:49:58.880786 7620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-dtxm4"
Mar 18 08:49:58.908297 master-0 kubenswrapper[7620]: I0318 08:49:58.906812 7620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/cluster-baremetal-operator-6f69995874-cf6qn"]
Mar 18 08:49:58.908297 master-0 kubenswrapper[7620]: I0318 08:49:58.906889 7620 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-scheduler/installer-1-master-0"]
Mar 18 08:49:58.908297 master-0 kubenswrapper[7620]: I0318 08:49:58.907673 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zj9rk\" (UniqueName: \"kubernetes.io/projected/97730ec2-e6f1-4f8c-b85c-3c10623d06ce-kube-api-access-zj9rk\") pod \"cluster-baremetal-operator-6f69995874-cf6qn\" (UID: \"97730ec2-e6f1-4f8c-b85c-3c10623d06ce\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-cf6qn"
Mar 18 08:49:58.908297 master-0 kubenswrapper[7620]: I0318 08:49:58.907715 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/97730ec2-e6f1-4f8c-b85c-3c10623d06ce-cert\") pod \"cluster-baremetal-operator-6f69995874-cf6qn\" (UID: \"97730ec2-e6f1-4f8c-b85c-3c10623d06ce\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-cf6qn"
Mar 18 08:49:58.908297 master-0 kubenswrapper[7620]: I0318 08:49:58.907745 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/97730ec2-e6f1-4f8c-b85c-3c10623d06ce-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-6f69995874-cf6qn\" (UID: \"97730ec2-e6f1-4f8c-b85c-3c10623d06ce\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-cf6qn"
Mar 18 08:49:58.908297 master-0 kubenswrapper[7620]: I0318 08:49:58.907782 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/28d2bb97-ff93-4772-96fd-318fa62e3a87-var-lock\") pod \"installer-2-master-0\" (UID: \"28d2bb97-ff93-4772-96fd-318fa62e3a87\") " pod="openshift-kube-controller-manager/installer-2-master-0"
Mar 18 08:49:58.908297 master-0 kubenswrapper[7620]: I0318 08:49:58.907811 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/97730ec2-e6f1-4f8c-b85c-3c10623d06ce-images\") pod \"cluster-baremetal-operator-6f69995874-cf6qn\" (UID: \"97730ec2-e6f1-4f8c-b85c-3c10623d06ce\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-cf6qn"
Mar 18 08:49:58.908297 master-0 kubenswrapper[7620]: I0318 08:49:58.907839 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/97730ec2-e6f1-4f8c-b85c-3c10623d06ce-config\") pod \"cluster-baremetal-operator-6f69995874-cf6qn\" (UID: \"97730ec2-e6f1-4f8c-b85c-3c10623d06ce\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-cf6qn"
Mar 18 08:49:58.928057 master-0 kubenswrapper[7620]: I0318 08:49:58.927764 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/28d2bb97-ff93-4772-96fd-318fa62e3a87-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"28d2bb97-ff93-4772-96fd-318fa62e3a87\") " pod="openshift-kube-controller-manager/installer-2-master-0"
Mar 18 08:49:58.931588 master-0 kubenswrapper[7620]: I0318 08:49:58.930510 7620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-85f7577d78-swcvh"]
Mar 18 08:49:58.931588 master-0 kubenswrapper[7620]: I0318 08:49:58.931340 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vtz82\" (UniqueName: \"kubernetes.io/projected/18921497-d8ed-42d8-bf3c-a027566ebe85-kube-api-access-vtz82\") pod \"cluster-samples-operator-85f7577d78-swcvh\" (UID: \"18921497-d8ed-42d8-bf3c-a027566ebe85\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-85f7577d78-swcvh"
Mar 18 08:49:58.931588 master-0 kubenswrapper[7620]: I0318 08:49:58.931435 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/18921497-d8ed-42d8-bf3c-a027566ebe85-samples-operator-tls\") pod \"cluster-samples-operator-85f7577d78-swcvh\" (UID: \"18921497-d8ed-42d8-bf3c-a027566ebe85\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-85f7577d78-swcvh"
Mar 18 08:49:58.931588 master-0 kubenswrapper[7620]: I0318 08:49:58.931519 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/28d2bb97-ff93-4772-96fd-318fa62e3a87-kube-api-access\") pod \"installer-2-master-0\" (UID: \"28d2bb97-ff93-4772-96fd-318fa62e3a87\") " pod="openshift-kube-controller-manager/installer-2-master-0"
Mar 18 08:49:59.016938 master-0 kubenswrapper[7620]: I0318 08:49:59.016827 7620 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-scheduler/installer-1-master-0"]
Mar 18 08:49:59.037711 master-0 kubenswrapper[7620]: I0318 08:49:59.037644 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/18921497-d8ed-42d8-bf3c-a027566ebe85-samples-operator-tls\") pod \"cluster-samples-operator-85f7577d78-swcvh\" (UID: \"18921497-d8ed-42d8-bf3c-a027566ebe85\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-85f7577d78-swcvh"
Mar 18 08:49:59.037998 master-0 kubenswrapper[7620]: I0318 08:49:59.037736 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/28d2bb97-ff93-4772-96fd-318fa62e3a87-kube-api-access\") pod \"installer-2-master-0\" (UID: \"28d2bb97-ff93-4772-96fd-318fa62e3a87\") " pod="openshift-kube-controller-manager/installer-2-master-0"
Mar 18 08:49:59.037998 master-0 kubenswrapper[7620]: I0318 08:49:59.037784 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zj9rk\" (UniqueName: \"kubernetes.io/projected/97730ec2-e6f1-4f8c-b85c-3c10623d06ce-kube-api-access-zj9rk\") pod \"cluster-baremetal-operator-6f69995874-cf6qn\" (UID: \"97730ec2-e6f1-4f8c-b85c-3c10623d06ce\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-cf6qn"
Mar 18 08:49:59.037998 master-0 kubenswrapper[7620]: I0318 08:49:59.037821 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/97730ec2-e6f1-4f8c-b85c-3c10623d06ce-cert\") pod \"cluster-baremetal-operator-6f69995874-cf6qn\" (UID: \"97730ec2-e6f1-4f8c-b85c-3c10623d06ce\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-cf6qn"
Mar 18 08:49:59.037998 master-0 kubenswrapper[7620]: I0318 08:49:59.037876 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/97730ec2-e6f1-4f8c-b85c-3c10623d06ce-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-6f69995874-cf6qn\" (UID: \"97730ec2-e6f1-4f8c-b85c-3c10623d06ce\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-cf6qn"
Mar 18 08:49:59.037998 master-0 kubenswrapper[7620]: I0318 08:49:59.037922 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/28d2bb97-ff93-4772-96fd-318fa62e3a87-var-lock\") pod \"installer-2-master-0\" (UID: \"28d2bb97-ff93-4772-96fd-318fa62e3a87\") " pod="openshift-kube-controller-manager/installer-2-master-0"
Mar 18 08:49:59.037998 master-0 kubenswrapper[7620]: I0318 08:49:59.037956 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/97730ec2-e6f1-4f8c-b85c-3c10623d06ce-images\") pod \"cluster-baremetal-operator-6f69995874-cf6qn\" (UID: \"97730ec2-e6f1-4f8c-b85c-3c10623d06ce\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-cf6qn"
Mar 18 08:49:59.037998 master-0 kubenswrapper[7620]: I0318 08:49:59.037989 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/97730ec2-e6f1-4f8c-b85c-3c10623d06ce-config\") pod \"cluster-baremetal-operator-6f69995874-cf6qn\" (UID: \"97730ec2-e6f1-4f8c-b85c-3c10623d06ce\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-cf6qn"
Mar 18 08:49:59.057397 master-0 kubenswrapper[7620]: I0318 08:49:59.051205 7620 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-etcd/etcd-master-0-master-0"]
Mar 18 08:49:59.057397 master-0 kubenswrapper[7620]: I0318 08:49:59.051267 7620 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-etcd/etcd-master-0"]
Mar 18 08:49:59.057397 master-0 kubenswrapper[7620]: I0318 08:49:59.051770 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/28d2bb97-ff93-4772-96fd-318fa62e3a87-var-lock\") pod \"installer-2-master-0\" (UID: \"28d2bb97-ff93-4772-96fd-318fa62e3a87\") " pod="openshift-kube-controller-manager/installer-2-master-0"
Mar 18 08:49:59.057397 master-0 kubenswrapper[7620]: E0318 08:49:59.051807 7620 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d664a6d0d2a24360dee10612610f1b59" containerName="etcd"
Mar 18 08:49:59.057397 master-0 kubenswrapper[7620]: I0318 08:49:59.051829 7620 state_mem.go:107] "Deleted CPUSet assignment" podUID="d664a6d0d2a24360dee10612610f1b59" containerName="etcd"
Mar 18 08:49:59.057397 master-0 kubenswrapper[7620]: E0318 08:49:59.051846 7620 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d664a6d0d2a24360dee10612610f1b59" containerName="etcdctl"
Mar 18 08:49:59.057397 master-0 kubenswrapper[7620]: I0318 08:49:59.051870 7620 state_mem.go:107] "Deleted CPUSet assignment" podUID="d664a6d0d2a24360dee10612610f1b59" containerName="etcdctl"
Mar 18 08:49:59.057397 master-0 kubenswrapper[7620]: I0318 08:49:59.052081 7620 memory_manager.go:354] "RemoveStaleState removing state" podUID="d664a6d0d2a24360dee10612610f1b59" containerName="etcd"
Mar 18 08:49:59.057397 master-0 kubenswrapper[7620]: I0318 08:49:59.052095 7620 memory_manager.go:354] "RemoveStaleState removing state" podUID="d664a6d0d2a24360dee10612610f1b59" containerName="etcdctl"
Mar 18 08:49:59.057397 master-0 kubenswrapper[7620]: I0318 08:49:59.052422 7620 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-etcd/etcd-master-0-master-0" podUID="d664a6d0d2a24360dee10612610f1b59" containerName="etcdctl" containerID="cri-o://a59e8ee01c3a8fb148407d497fd43107751c8a2b3e30b228b085568e5f8dd0de" gracePeriod=30
Mar 18 08:49:59.057397 master-0 kubenswrapper[7620]: I0318 08:49:59.052598 7620 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-etcd/etcd-master-0-master-0" podUID="d664a6d0d2a24360dee10612610f1b59" containerName="etcd" containerID="cri-o://9800e6635085398983100da46b5c98be777ae33c91aaadd0c04fcadcfe49593f" gracePeriod=30
Mar 18 08:49:59.058973 master-0 kubenswrapper[7620]: I0318 08:49:59.058923 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/97730ec2-e6f1-4f8c-b85c-3c10623d06ce-images\") pod \"cluster-baremetal-operator-6f69995874-cf6qn\" (UID: \"97730ec2-e6f1-4f8c-b85c-3c10623d06ce\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-cf6qn"
Mar 18 08:49:59.059843 master-0 kubenswrapper[7620]: I0318 08:49:59.059811 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/97730ec2-e6f1-4f8c-b85c-3c10623d06ce-config\") pod \"cluster-baremetal-operator-6f69995874-cf6qn\" (UID: \"97730ec2-e6f1-4f8c-b85c-3c10623d06ce\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-cf6qn"
Mar 18 08:49:59.072225 master-0 kubenswrapper[7620]: I0318 08:49:59.061352 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/97730ec2-e6f1-4f8c-b85c-3c10623d06ce-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-6f69995874-cf6qn\" (UID: \"97730ec2-e6f1-4f8c-b85c-3c10623d06ce\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-cf6qn"
Mar 18 08:49:59.073575 master-0 kubenswrapper[7620]: I0318 08:49:59.073506 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/28d2bb97-ff93-4772-96fd-318fa62e3a87-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"28d2bb97-ff93-4772-96fd-318fa62e3a87\") " pod="openshift-kube-controller-manager/installer-2-master-0"
Mar 18 08:49:59.073575 master-0 kubenswrapper[7620]: I0318 08:49:59.073531 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/97730ec2-e6f1-4f8c-b85c-3c10623d06ce-cert\") pod \"cluster-baremetal-operator-6f69995874-cf6qn\" (UID: \"97730ec2-e6f1-4f8c-b85c-3c10623d06ce\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-cf6qn"
Mar 18 08:49:59.073671 master-0 kubenswrapper[7620]: I0318 08:49:59.073595 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vtz82\" (UniqueName: \"kubernetes.io/projected/18921497-d8ed-42d8-bf3c-a027566ebe85-kube-api-access-vtz82\") pod \"cluster-samples-operator-85f7577d78-swcvh\" (UID: \"18921497-d8ed-42d8-bf3c-a027566ebe85\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-85f7577d78-swcvh"
Mar 18 08:49:59.081506 master-0 kubenswrapper[7620]: I0318 08:49:59.073763 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/28d2bb97-ff93-4772-96fd-318fa62e3a87-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"28d2bb97-ff93-4772-96fd-318fa62e3a87\") " pod="openshift-kube-controller-manager/installer-2-master-0"
Mar 18 08:49:59.082002 master-0 kubenswrapper[7620]: I0318 08:49:59.081962 7620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-master-0"
Mar 18 08:49:59.098171 master-0 kubenswrapper[7620]: I0318 08:49:59.094723 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/18921497-d8ed-42d8-bf3c-a027566ebe85-samples-operator-tls\") pod \"cluster-samples-operator-85f7577d78-swcvh\" (UID: \"18921497-d8ed-42d8-bf3c-a027566ebe85\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-85f7577d78-swcvh"
Mar 18 08:49:59.128416 master-0 kubenswrapper[7620]: I0318 08:49:59.127404 7620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cloud-credential-operator/cloud-credential-operator-744f9dbf77-v8ft8"
Mar 18 08:49:59.204468 master-0 kubenswrapper[7620]: I0318 08:49:59.204407 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-log-dir\") pod \"etcd-master-0\" (UID: \"24b4ed170d527099878cb5fdd508a2fb\") " pod="openshift-etcd/etcd-master-0"
Mar 18 08:49:59.204693 master-0 kubenswrapper[7620]: I0318 08:49:59.204505 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-data-dir\") pod \"etcd-master-0\" (UID: \"24b4ed170d527099878cb5fdd508a2fb\") " pod="openshift-etcd/etcd-master-0"
Mar 18 08:49:59.204693 master-0 kubenswrapper[7620]: I0318 08:49:59.204583 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-static-pod-dir\") pod \"etcd-master-0\" (UID: \"24b4ed170d527099878cb5fdd508a2fb\") " pod="openshift-etcd/etcd-master-0"
Mar 18 08:49:59.204693 master-0 kubenswrapper[7620]: I0318 08:49:59.204610 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-cert-dir\") pod \"etcd-master-0\" (UID: \"24b4ed170d527099878cb5fdd508a2fb\") " pod="openshift-etcd/etcd-master-0"
Mar 18 08:49:59.204693 master-0 kubenswrapper[7620]: I0318 08:49:59.204669 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-usr-local-bin\") pod \"etcd-master-0\" (UID: \"24b4ed170d527099878cb5fdd508a2fb\") " pod="openshift-etcd/etcd-master-0"
Mar 18 08:49:59.204693 master-0 kubenswrapper[7620]: I0318 08:49:59.204697 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-resource-dir\") pod \"etcd-master-0\" (UID: \"24b4ed170d527099878cb5fdd508a2fb\") " pod="openshift-etcd/etcd-master-0"
Mar 18 08:49:59.305932 master-0 kubenswrapper[7620]: I0318 08:49:59.305883 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-usr-local-bin\") pod \"etcd-master-0\" (UID: \"24b4ed170d527099878cb5fdd508a2fb\") " pod="openshift-etcd/etcd-master-0"
Mar 18 08:49:59.306149 master-0 kubenswrapper[7620]: I0318 08:49:59.305953 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-resource-dir\") pod \"etcd-master-0\" (UID: \"24b4ed170d527099878cb5fdd508a2fb\") " pod="openshift-etcd/etcd-master-0"
Mar 18 08:49:59.306149 master-0 kubenswrapper[7620]: I0318 08:49:59.305981 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-log-dir\") pod \"etcd-master-0\" (UID: \"24b4ed170d527099878cb5fdd508a2fb\") " pod="openshift-etcd/etcd-master-0"
Mar 18 08:49:59.306149 master-0 kubenswrapper[7620]: I0318 08:49:59.306007 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-data-dir\") pod \"etcd-master-0\" (UID: \"24b4ed170d527099878cb5fdd508a2fb\") " pod="openshift-etcd/etcd-master-0"
Mar 18 08:49:59.306149 master-0 kubenswrapper[7620]: I0318 08:49:59.306051 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-static-pod-dir\") pod \"etcd-master-0\" (UID: \"24b4ed170d527099878cb5fdd508a2fb\") " pod="openshift-etcd/etcd-master-0"
Mar 18 08:49:59.306149 master-0 kubenswrapper[7620]: I0318 08:49:59.306072 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-cert-dir\") pod \"etcd-master-0\" (UID: \"24b4ed170d527099878cb5fdd508a2fb\") " pod="openshift-etcd/etcd-master-0"
Mar 18 08:49:59.306665 master-0 kubenswrapper[7620]: I0318 08:49:59.306482 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-usr-local-bin\") pod \"etcd-master-0\" (UID: \"24b4ed170d527099878cb5fdd508a2fb\") " pod="openshift-etcd/etcd-master-0"
Mar 18 08:49:59.306665 master-0 kubenswrapper[7620]: I0318 08:49:59.306526 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-resource-dir\") pod \"etcd-master-0\" (UID: \"24b4ed170d527099878cb5fdd508a2fb\") " pod="openshift-etcd/etcd-master-0"
Mar 18 08:49:59.306665 master-0 kubenswrapper[7620]: I0318 08:49:59.306555 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-log-dir\") pod \"etcd-master-0\" (UID: \"24b4ed170d527099878cb5fdd508a2fb\") " pod="openshift-etcd/etcd-master-0"
Mar 18 08:49:59.306665 master-0 kubenswrapper[7620]: I0318 08:49:59.306585 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-data-dir\") pod \"etcd-master-0\" (UID: \"24b4ed170d527099878cb5fdd508a2fb\") " pod="openshift-etcd/etcd-master-0"
Mar 18 08:49:59.306665 master-0 kubenswrapper[7620]: I0318 08:49:59.306612 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-static-pod-dir\") pod \"etcd-master-0\" (UID: \"24b4ed170d527099878cb5fdd508a2fb\") " pod="openshift-etcd/etcd-master-0"
Mar 18 08:49:59.306665 master-0 kubenswrapper[7620]: I0318 08:49:59.306641 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-cert-dir\") pod \"etcd-master-0\" (UID: \"24b4ed170d527099878cb5fdd508a2fb\") " pod="openshift-etcd/etcd-master-0"
Mar 18 08:50:00.021225 master-0 kubenswrapper[7620]: I0318 08:50:00.021171 7620 generic.go:334] "Generic (PLEG): container finished" podID="1ecff6b2-dbd4-4366-873b-2170d0b76c0f" containerID="010b44e43896597007413d73633a4236214230adb7cc7835885b7a52a1e627ab" exitCode=0
Mar 18 08:50:00.021744 master-0 kubenswrapper[7620]: I0318 08:50:00.021268 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-1-master-0" event={"ID":"1ecff6b2-dbd4-4366-873b-2170d0b76c0f","Type":"ContainerDied","Data":"010b44e43896597007413d73633a4236214230adb7cc7835885b7a52a1e627ab"}
Mar 18 08:50:00.024363 master-0 kubenswrapper[7620]: I0318 08:50:00.023845 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-ck7b5" event={"ID":"b35ab145-16a7-4ef1-86e8-0afb6ff469fd","Type":"ContainerStarted","Data":"8cda013d1bf7f63cc98785d628df6f7e69c4bf9d06a913ff50c30f25ae46a743"}
Mar 18 08:50:00.024363 master-0 kubenswrapper[7620]: I0318 08:50:00.023903 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-ck7b5" event={"ID":"b35ab145-16a7-4ef1-86e8-0afb6ff469fd","Type":"ContainerStarted","Data":"28a7640865a7ded8ecbd5b4201a5de35ace75806d5d4d35f9798e9f10dd77de6"}
Mar 18 08:50:00.024363 master-0 kubenswrapper[7620]: I0318 08:50:00.024057 7620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-ck7b5"
Mar 18 08:50:00.231877 master-0 kubenswrapper[7620]: I0318 08:50:00.231788 7620 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e9a3f4dd-913d-4707-84c5-d64ead736f0f" path="/var/lib/kubelet/pods/e9a3f4dd-913d-4707-84c5-d64ead736f0f/volumes"
Mar 18 08:50:00.238022 master-0 kubenswrapper[7620]: I0318 08:50:00.237989 7620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-5f48d895dc-ttr9f"
Mar 18 08:50:01.031677 master-0 kubenswrapper[7620]: I0318 08:50:01.031581 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-6cb57bb5db-sxx69" event={"ID":"5956076c-a98f-4846-9a68-81c18211a5c8","Type":"ContainerStarted","Data":"79b18f7a261663b919aad8c218f560610ac74c448adf1ac2c7e8d949870a5978"}
Mar 18 08:50:01.343950 master-0 kubenswrapper[7620]: I0318 08:50:01.343878 7620 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/installer-1-master-0"
Mar 18 08:50:01.443223 master-0 kubenswrapper[7620]: I0318 08:50:01.443173 7620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1ecff6b2-dbd4-4366-873b-2170d0b76c0f-kubelet-dir\") pod \"1ecff6b2-dbd4-4366-873b-2170d0b76c0f\" (UID: \"1ecff6b2-dbd4-4366-873b-2170d0b76c0f\") "
Mar 18 08:50:01.443488 master-0 kubenswrapper[7620]: I0318 08:50:01.443457 7620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/1ecff6b2-dbd4-4366-873b-2170d0b76c0f-var-lock\") pod \"1ecff6b2-dbd4-4366-873b-2170d0b76c0f\" (UID: \"1ecff6b2-dbd4-4366-873b-2170d0b76c0f\") "
Mar 18 08:50:01.443594 master-0 kubenswrapper[7620]: I0318 08:50:01.443549 7620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1ecff6b2-dbd4-4366-873b-2170d0b76c0f-var-lock" (OuterVolumeSpecName: "var-lock") pod "1ecff6b2-dbd4-4366-873b-2170d0b76c0f" (UID: "1ecff6b2-dbd4-4366-873b-2170d0b76c0f"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 18 08:50:01.443647 master-0 kubenswrapper[7620]: I0318 08:50:01.443610 7620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1ecff6b2-dbd4-4366-873b-2170d0b76c0f-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "1ecff6b2-dbd4-4366-873b-2170d0b76c0f" (UID: "1ecff6b2-dbd4-4366-873b-2170d0b76c0f"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 18 08:50:01.443727 master-0 kubenswrapper[7620]: I0318 08:50:01.443639 7620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1ecff6b2-dbd4-4366-873b-2170d0b76c0f-kube-api-access\") pod \"1ecff6b2-dbd4-4366-873b-2170d0b76c0f\" (UID: \"1ecff6b2-dbd4-4366-873b-2170d0b76c0f\") "
Mar 18 08:50:01.445405 master-0 kubenswrapper[7620]: I0318 08:50:01.445380 7620 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1ecff6b2-dbd4-4366-873b-2170d0b76c0f-kubelet-dir\") on node \"master-0\" DevicePath \"\""
Mar 18 08:50:01.445405 master-0 kubenswrapper[7620]: I0318 08:50:01.445407 7620 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/1ecff6b2-dbd4-4366-873b-2170d0b76c0f-var-lock\") on node \"master-0\" DevicePath \"\""
Mar 18 08:50:01.448485 master-0 kubenswrapper[7620]: I0318 08:50:01.448434 7620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1ecff6b2-dbd4-4366-873b-2170d0b76c0f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1ecff6b2-dbd4-4366-873b-2170d0b76c0f" (UID: "1ecff6b2-dbd4-4366-873b-2170d0b76c0f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 18 08:50:01.547507 master-0 kubenswrapper[7620]: I0318 08:50:01.547465 7620 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1ecff6b2-dbd4-4366-873b-2170d0b76c0f-kube-api-access\") on node \"master-0\" DevicePath \"\""
Mar 18 08:50:02.040501 master-0 kubenswrapper[7620]: I0318 08:50:02.040459 7620 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/installer-1-master-0"
Mar 18 08:50:02.041060 master-0 kubenswrapper[7620]: I0318 08:50:02.040501 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-1-master-0" event={"ID":"1ecff6b2-dbd4-4366-873b-2170d0b76c0f","Type":"ContainerDied","Data":"cff5a62c6fe250b627c150b3ba60d6fe2a04d4b96c22543f1ae21c885d156295"}
Mar 18 08:50:02.041060 master-0 kubenswrapper[7620]: I0318 08:50:02.040575 7620 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cff5a62c6fe250b627c150b3ba60d6fe2a04d4b96c22543f1ae21c885d156295"
Mar 18 08:50:10.064175 master-0 kubenswrapper[7620]: I0318 08:50:10.064080 7620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-ck7b5"
Mar 18 08:50:11.094697 master-0 kubenswrapper[7620]: I0318 08:50:11.094605 7620 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-8c94f4649-r758j_772bc250-2e57-4ce0-883c-d44281fcb0be/openshift-controller-manager-operator/0.log"
Mar 18 08:50:11.095298 master-0 kubenswrapper[7620]: I0318 08:50:11.094699 7620 generic.go:334] "Generic (PLEG): container finished" podID="772bc250-2e57-4ce0-883c-d44281fcb0be" containerID="fb1d8cdaae1091b519c657021dc4e61ba66eba83ec8f94dd444327353dc0ffc0" exitCode=1
Mar 18 08:50:11.095298 master-0 kubenswrapper[7620]: I0318 08:50:11.094757 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-r758j" event={"ID":"772bc250-2e57-4ce0-883c-d44281fcb0be","Type":"ContainerDied","Data":"fb1d8cdaae1091b519c657021dc4e61ba66eba83ec8f94dd444327353dc0ffc0"}
Mar 18 08:50:11.095548 master-0 kubenswrapper[7620]: I0318 08:50:11.095509 7620 scope.go:117] "RemoveContainer" containerID="fb1d8cdaae1091b519c657021dc4e61ba66eba83ec8f94dd444327353dc0ffc0"
Mar 18 08:50:12.231799 master-0 kubenswrapper[7620]: E0318 08:50:12.231713 7620 kubelet.go:1929] "Failed creating a mirror pod for" err="Internal error occurred: admission plugin \"LimitRanger\" failed to complete mutation in 13s" pod="openshift-etcd/etcd-master-0"
Mar 18 08:50:12.233097 master-0 kubenswrapper[7620]: I0318 08:50:12.232450 7620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-master-0"
Mar 18 08:50:14.112192 master-0 kubenswrapper[7620]: I0318 08:50:14.112133 7620 generic.go:334] "Generic (PLEG): container finished" podID="46f265536aba6292ead501bc9b49f327" containerID="6be6b0de4a5d0386d8a94651962cc0001d3124e6eb513e3b68435d030ea24841" exitCode=1
Mar 18 08:50:14.112621 master-0 kubenswrapper[7620]: I0318 08:50:14.112242 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"46f265536aba6292ead501bc9b49f327","Type":"ContainerDied","Data":"6be6b0de4a5d0386d8a94651962cc0001d3124e6eb513e3b68435d030ea24841"}
Mar 18 08:50:14.114209 master-0 kubenswrapper[7620]: I0318 08:50:14.113908 7620 scope.go:117] "RemoveContainer" containerID="cae6edc05ec437bf1216d8818e262c95bff15d2f9aa2f76f2a55bc0b5ab23801"
Mar 18 08:50:14.114300 master-0 kubenswrapper[7620]: I0318 08:50:14.114232 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"24b4ed170d527099878cb5fdd508a2fb","Type":"ContainerStarted","Data":"f787bcf5a5597e509c2c61b42e1f1bb1b89a19513d6cb88626b2f1d8175e9a65"}
Mar 18 08:50:14.114814 master-0 kubenswrapper[7620]: I0318 08:50:14.114761 7620 scope.go:117] "RemoveContainer" containerID="6be6b0de4a5d0386d8a94651962cc0001d3124e6eb513e3b68435d030ea24841"
Mar 18 08:50:15.544087 master-0 kubenswrapper[7620]: I0318 08:50:15.543974 7620 generic.go:334] "Generic (PLEG): container finished" podID="95843eb5-33bc-48e8-afc4-a0bd8c524e24" containerID="4d5c18186f643b1a4f079e60d0bd9e03dcffe8e2274cd8cd7f1881659ac942b3" exitCode=0
Mar 18 08:50:15.544941 master-0 kubenswrapper[7620]: I0318 08:50:15.544130 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xfq8l" event={"ID":"95843eb5-33bc-48e8-afc4-a0bd8c524e24","Type":"ContainerDied","Data":"4d5c18186f643b1a4f079e60d0bd9e03dcffe8e2274cd8cd7f1881659ac942b3"}
Mar 18 08:50:15.549521 master-0 kubenswrapper[7620]: I0318 08:50:15.547799 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ffks8" event={"ID":"d5c7ffb1-a1ab-4ca1-bdae-bcb09a759591","Type":"ContainerStarted","Data":"b56ca147a17d861c1c85d3e9046aa71c9eed9454c377ed80b925401f3d7b2240"}
Mar 18 08:50:15.551432 master-0 kubenswrapper[7620]: I0318 08:50:15.551372 7620 generic.go:334] "Generic (PLEG): container finished" podID="d72cacbe-f050-4b00-b20d-6e3c800db5e3" containerID="515eb31f006a3681b4b8a4d7b68b6a09e8acc9b88a57a1196829487e2994618c" exitCode=0
Mar 18 08:50:15.551566 master-0 kubenswrapper[7620]: I0318 08:50:15.551461 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vgplg" event={"ID":"d72cacbe-f050-4b00-b20d-6e3c800db5e3","Type":"ContainerDied","Data":"515eb31f006a3681b4b8a4d7b68b6a09e8acc9b88a57a1196829487e2994618c"}
Mar 18 08:50:15.555875 master-0 kubenswrapper[7620]: I0318 08:50:15.555766 7620 generic.go:334] "Generic (PLEG): container finished" podID="833eeb49-a463-432a-a684-a27c66ecae7d" containerID="85878e2d9501d02753146dd527d49eca7a595cbe551c93b013706469d444a4fe" exitCode=0
Mar 18 08:50:15.556024 master-0 kubenswrapper[7620]: I0318 08:50:15.555962 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6m4q6" event={"ID":"833eeb49-a463-432a-a684-a27c66ecae7d","Type":"ContainerDied","Data":"85878e2d9501d02753146dd527d49eca7a595cbe551c93b013706469d444a4fe"}
Mar 18 08:50:15.561936 master-0 kubenswrapper[7620]: I0318 08:50:15.561842 7620 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-8c94f4649-r758j_772bc250-2e57-4ce0-883c-d44281fcb0be/openshift-controller-manager-operator/0.log"
Mar 18 08:50:15.562046 master-0 kubenswrapper[7620]: I0318 08:50:15.561999 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-r758j" event={"ID":"772bc250-2e57-4ce0-883c-d44281fcb0be","Type":"ContainerStarted","Data":"68c182737418e9669eaac13452772ca1c2ae8aee346c01ee805c7e53f0e3ed8b"}
Mar 18 08:50:15.564958 master-0 kubenswrapper[7620]: I0318 08:50:15.564898 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"46f265536aba6292ead501bc9b49f327","Type":"ContainerStarted","Data":"f1af4029f448c1f86ba8be3065c1894f4cf3dd4cc201c45f5bc2f5936a17b71a"}
Mar 18 08:50:15.569947 master-0 kubenswrapper[7620]: I0318 08:50:15.568287 7620 generic.go:334] "Generic (PLEG): container finished" podID="24b4ed170d527099878cb5fdd508a2fb" containerID="a763fdcf6c7d6b7c7725834ac9d564c543461cafe37f0ea82574ad101ee4eb85" exitCode=0
Mar 18 08:50:15.569947 master-0 kubenswrapper[7620]: I0318 08:50:15.568339 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"24b4ed170d527099878cb5fdd508a2fb","Type":"ContainerDied","Data":"a763fdcf6c7d6b7c7725834ac9d564c543461cafe37f0ea82574ad101ee4eb85"}
Mar 18 08:50:15.639095 master-0 kubenswrapper[7620]: I0318 08:50:15.638969 7620 patch_prober.go:28] interesting pod/openshift-config-operator-95bf4f4d-7kfrh container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.128.0.14:8443/healthz\": dial tcp 10.128.0.14:8443: connect: connection refused" start-of-body=
Mar 18 08:50:15.639095 master-0 kubenswrapper[7620]: I0318 08:50:15.639063 7620 prober.go:107] "Probe failed" probeType="Liveness"
pod="openshift-config-operator/openshift-config-operator-95bf4f4d-7kfrh" podUID="573d3a02-e395-4816-963a-cd614ef53f75" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.14:8443/healthz\": dial tcp 10.128.0.14:8443: connect: connection refused" Mar 18 08:50:16.576528 master-0 kubenswrapper[7620]: I0318 08:50:16.576462 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6m4q6" event={"ID":"833eeb49-a463-432a-a684-a27c66ecae7d","Type":"ContainerStarted","Data":"b9b14f7f666700509a5494b067b2a60b7cb42e06b28d07a9c4945f482a1d974b"} Mar 18 08:50:16.578342 master-0 kubenswrapper[7620]: I0318 08:50:16.578311 7620 generic.go:334] "Generic (PLEG): container finished" podID="c83737980b9ee109184b1d78e942cf36" containerID="56c1813fc6a99c6be68188fda55c9aa95683f9493caa43861ba04693d0ba89d2" exitCode=1 Mar 18 08:50:16.578400 master-0 kubenswrapper[7620]: I0318 08:50:16.578367 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-scheduler-master-0" event={"ID":"c83737980b9ee109184b1d78e942cf36","Type":"ContainerDied","Data":"56c1813fc6a99c6be68188fda55c9aa95683f9493caa43861ba04693d0ba89d2"} Mar 18 08:50:16.578649 master-0 kubenswrapper[7620]: I0318 08:50:16.578622 7620 scope.go:117] "RemoveContainer" containerID="56c1813fc6a99c6be68188fda55c9aa95683f9493caa43861ba04693d0ba89d2" Mar 18 08:50:16.581832 master-0 kubenswrapper[7620]: I0318 08:50:16.581806 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xfq8l" event={"ID":"95843eb5-33bc-48e8-afc4-a0bd8c524e24","Type":"ContainerStarted","Data":"cf6929903f6267ae579fcfd9810a3ba405d86b38c45e7d904736f156b99ba651"} Mar 18 08:50:16.583458 master-0 kubenswrapper[7620]: I0318 08:50:16.583434 7620 generic.go:334] "Generic (PLEG): container finished" podID="d5c7ffb1-a1ab-4ca1-bdae-bcb09a759591" containerID="b56ca147a17d861c1c85d3e9046aa71c9eed9454c377ed80b925401f3d7b2240" 
exitCode=0 Mar 18 08:50:16.583512 master-0 kubenswrapper[7620]: I0318 08:50:16.583480 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ffks8" event={"ID":"d5c7ffb1-a1ab-4ca1-bdae-bcb09a759591","Type":"ContainerDied","Data":"b56ca147a17d861c1c85d3e9046aa71c9eed9454c377ed80b925401f3d7b2240"} Mar 18 08:50:16.587324 master-0 kubenswrapper[7620]: I0318 08:50:16.587296 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vgplg" event={"ID":"d72cacbe-f050-4b00-b20d-6e3c800db5e3","Type":"ContainerStarted","Data":"963e77396932fd5dde20fd2229477fc2520d4deed14e4daee66a481b11a60005"} Mar 18 08:50:16.946874 master-0 kubenswrapper[7620]: I0318 08:50:16.946798 7620 patch_prober.go:28] interesting pod/openshift-config-operator-95bf4f4d-7kfrh container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.14:8443/healthz\": dial tcp 10.128.0.14:8443: connect: connection refused" start-of-body= Mar 18 08:50:16.947056 master-0 kubenswrapper[7620]: I0318 08:50:16.946898 7620 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-7kfrh" podUID="573d3a02-e395-4816-963a-cd614ef53f75" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.14:8443/healthz\": dial tcp 10.128.0.14:8443: connect: connection refused" Mar 18 08:50:17.595462 master-0 kubenswrapper[7620]: I0318 08:50:17.595410 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ffks8" event={"ID":"d5c7ffb1-a1ab-4ca1-bdae-bcb09a759591","Type":"ContainerStarted","Data":"fb4674c30f19a2be761d144438bbee86e4760b5b15fc1581dfb44fe7af15ded2"} Mar 18 08:50:17.598109 master-0 kubenswrapper[7620]: I0318 08:50:17.598044 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-scheduler-master-0" 
event={"ID":"c83737980b9ee109184b1d78e942cf36","Type":"ContainerStarted","Data":"db516bae26a48292c2104c2ecfafa39292fbbc58aaf43ed786161ac8d6801cb8"} Mar 18 08:50:17.955774 master-0 kubenswrapper[7620]: I0318 08:50:17.955650 7620 patch_prober.go:28] interesting pod/authentication-operator-5885bfd7f4-5g8tz container/authentication-operator namespace/openshift-authentication-operator: Liveness probe status=failure output="Get \"https://10.128.0.26:8443/healthz\": dial tcp 10.128.0.26:8443: connect: connection refused" start-of-body= Mar 18 08:50:17.955774 master-0 kubenswrapper[7620]: I0318 08:50:17.955725 7620 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-5g8tz" podUID="c110b293-2c6b-496b-b015-23aada98cb4b" containerName="authentication-operator" probeResult="failure" output="Get \"https://10.128.0.26:8443/healthz\": dial tcp 10.128.0.26:8443: connect: connection refused" Mar 18 08:50:18.325163 master-0 kubenswrapper[7620]: W0318 08:50:18.325083 7620 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd72cacbe_f050_4b00_b20d_6e3c800db5e3.slice/crio-conmon-515eb31f006a3681b4b8a4d7b68b6a09e8acc9b88a57a1196829487e2994618c.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd72cacbe_f050_4b00_b20d_6e3c800db5e3.slice/crio-conmon-515eb31f006a3681b4b8a4d7b68b6a09e8acc9b88a57a1196829487e2994618c.scope: no such file or directory Mar 18 08:50:18.325637 master-0 kubenswrapper[7620]: W0318 08:50:18.325559 7620 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod833eeb49_a463_432a_a684_a27c66ecae7d.slice/crio-conmon-85878e2d9501d02753146dd527d49eca7a595cbe551c93b013706469d444a4fe.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch 
/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod833eeb49_a463_432a_a684_a27c66ecae7d.slice/crio-conmon-85878e2d9501d02753146dd527d49eca7a595cbe551c93b013706469d444a4fe.scope: no such file or directory Mar 18 08:50:18.325917 master-0 kubenswrapper[7620]: W0318 08:50:18.325888 7620 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod95843eb5_33bc_48e8_afc4_a0bd8c524e24.slice/crio-conmon-4d5c18186f643b1a4f079e60d0bd9e03dcffe8e2274cd8cd7f1881659ac942b3.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod95843eb5_33bc_48e8_afc4_a0bd8c524e24.slice/crio-conmon-4d5c18186f643b1a4f079e60d0bd9e03dcffe8e2274cd8cd7f1881659ac942b3.scope: no such file or directory Mar 18 08:50:18.326117 master-0 kubenswrapper[7620]: W0318 08:50:18.326090 7620 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd72cacbe_f050_4b00_b20d_6e3c800db5e3.slice/crio-515eb31f006a3681b4b8a4d7b68b6a09e8acc9b88a57a1196829487e2994618c.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd72cacbe_f050_4b00_b20d_6e3c800db5e3.slice/crio-515eb31f006a3681b4b8a4d7b68b6a09e8acc9b88a57a1196829487e2994618c.scope: no such file or directory Mar 18 08:50:18.326315 master-0 kubenswrapper[7620]: W0318 08:50:18.326291 7620 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod833eeb49_a463_432a_a684_a27c66ecae7d.slice/crio-85878e2d9501d02753146dd527d49eca7a595cbe551c93b013706469d444a4fe.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch 
/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod833eeb49_a463_432a_a684_a27c66ecae7d.slice/crio-85878e2d9501d02753146dd527d49eca7a595cbe551c93b013706469d444a4fe.scope: no such file or directory Mar 18 08:50:18.326499 master-0 kubenswrapper[7620]: W0318 08:50:18.326472 7620 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod95843eb5_33bc_48e8_afc4_a0bd8c524e24.slice/crio-4d5c18186f643b1a4f079e60d0bd9e03dcffe8e2274cd8cd7f1881659ac942b3.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod95843eb5_33bc_48e8_afc4_a0bd8c524e24.slice/crio-4d5c18186f643b1a4f079e60d0bd9e03dcffe8e2274cd8cd7f1881659ac942b3.scope: no such file or directory Mar 18 08:50:18.326695 master-0 kubenswrapper[7620]: W0318 08:50:18.326666 7620 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod24b4ed170d527099878cb5fdd508a2fb.slice/crio-conmon-a763fdcf6c7d6b7c7725834ac9d564c543461cafe37f0ea82574ad101ee4eb85.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod24b4ed170d527099878cb5fdd508a2fb.slice/crio-conmon-a763fdcf6c7d6b7c7725834ac9d564c543461cafe37f0ea82574ad101ee4eb85.scope: no such file or directory Mar 18 08:50:18.326917 master-0 kubenswrapper[7620]: W0318 08:50:18.326890 7620 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd5c7ffb1_a1ab_4ca1_bdae_bcb09a759591.slice/crio-conmon-b56ca147a17d861c1c85d3e9046aa71c9eed9454c377ed80b925401f3d7b2240.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch 
/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd5c7ffb1_a1ab_4ca1_bdae_bcb09a759591.slice/crio-conmon-b56ca147a17d861c1c85d3e9046aa71c9eed9454c377ed80b925401f3d7b2240.scope: no such file or directory Mar 18 08:50:18.327124 master-0 kubenswrapper[7620]: W0318 08:50:18.327097 7620 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod24b4ed170d527099878cb5fdd508a2fb.slice/crio-a763fdcf6c7d6b7c7725834ac9d564c543461cafe37f0ea82574ad101ee4eb85.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod24b4ed170d527099878cb5fdd508a2fb.slice/crio-a763fdcf6c7d6b7c7725834ac9d564c543461cafe37f0ea82574ad101ee4eb85.scope: no such file or directory Mar 18 08:50:18.327341 master-0 kubenswrapper[7620]: W0318 08:50:18.327312 7620 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd5c7ffb1_a1ab_4ca1_bdae_bcb09a759591.slice/crio-b56ca147a17d861c1c85d3e9046aa71c9eed9454c377ed80b925401f3d7b2240.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd5c7ffb1_a1ab_4ca1_bdae_bcb09a759591.slice/crio-b56ca147a17d861c1c85d3e9046aa71c9eed9454c377ed80b925401f3d7b2240.scope: no such file or directory Mar 18 08:50:18.374888 master-0 kubenswrapper[7620]: E0318 08:50:18.371980 7620 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-pod1ecff6b2_dbd4_4366_873b_2170d0b76c0f.slice/crio-010b44e43896597007413d73633a4236214230adb7cc7835885b7a52a1e627ab.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-pode9a3f4dd_913d_4707_84c5_d64ead736f0f.slice/crio-8db5165e7230354d49e216b22d1bddbbd6c0d777cfe8d00574e23d3656b914f1\": RecentStats: unable to find data in 
memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod772bc250_2e57_4ce0_883c_d44281fcb0be.slice/crio-fb1d8cdaae1091b519c657021dc4e61ba66eba83ec8f94dd444327353dc0ffc0.scope\": RecentStats: unable to find data in memory cache]" Mar 18 08:50:18.523578 master-0 kubenswrapper[7620]: E0318 08:50:18.523417 7620 controller.go:195] "Failed to update lease" err="the server was unable to return a response in the time allotted, but may still be processing the request (put leases.coordination.k8s.io master-0)" Mar 18 08:50:18.604195 master-0 kubenswrapper[7620]: I0318 08:50:18.604149 7620 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_installer-1-master-0_ace4267e-c38d-46dd-9de6-c23339729a8b/installer/0.log" Mar 18 08:50:18.604929 master-0 kubenswrapper[7620]: I0318 08:50:18.604262 7620 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-1-master-0" Mar 18 08:50:18.604929 master-0 kubenswrapper[7620]: I0318 08:50:18.604787 7620 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_installer-1-master-0_ace4267e-c38d-46dd-9de6-c23339729a8b/installer/0.log" Mar 18 08:50:18.604929 master-0 kubenswrapper[7620]: I0318 08:50:18.604841 7620 generic.go:334] "Generic (PLEG): container finished" podID="ace4267e-c38d-46dd-9de6-c23339729a8b" containerID="c7c35e8f88ea7cb3b4124ab73cb6f5940db3454c3992a104c973116512d26a7c" exitCode=1 Mar 18 08:50:18.605065 master-0 kubenswrapper[7620]: I0318 08:50:18.604912 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-1-master-0" event={"ID":"ace4267e-c38d-46dd-9de6-c23339729a8b","Type":"ContainerDied","Data":"c7c35e8f88ea7cb3b4124ab73cb6f5940db3454c3992a104c973116512d26a7c"} Mar 18 08:50:18.605065 master-0 kubenswrapper[7620]: I0318 08:50:18.604994 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-controller-manager/installer-1-master-0" event={"ID":"ace4267e-c38d-46dd-9de6-c23339729a8b","Type":"ContainerDied","Data":"080fb9efe85e13956d4489a8523ef6b21588e8f16588b91bc928b76f222370cb"} Mar 18 08:50:18.605065 master-0 kubenswrapper[7620]: I0318 08:50:18.605022 7620 scope.go:117] "RemoveContainer" containerID="c7c35e8f88ea7cb3b4124ab73cb6f5940db3454c3992a104c973116512d26a7c" Mar 18 08:50:18.624998 master-0 kubenswrapper[7620]: I0318 08:50:18.624868 7620 scope.go:117] "RemoveContainer" containerID="c7c35e8f88ea7cb3b4124ab73cb6f5940db3454c3992a104c973116512d26a7c" Mar 18 08:50:18.625389 master-0 kubenswrapper[7620]: E0318 08:50:18.625342 7620 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c7c35e8f88ea7cb3b4124ab73cb6f5940db3454c3992a104c973116512d26a7c\": container with ID starting with c7c35e8f88ea7cb3b4124ab73cb6f5940db3454c3992a104c973116512d26a7c not found: ID does not exist" containerID="c7c35e8f88ea7cb3b4124ab73cb6f5940db3454c3992a104c973116512d26a7c" Mar 18 08:50:18.625468 master-0 kubenswrapper[7620]: I0318 08:50:18.625388 7620 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c7c35e8f88ea7cb3b4124ab73cb6f5940db3454c3992a104c973116512d26a7c"} err="failed to get container status \"c7c35e8f88ea7cb3b4124ab73cb6f5940db3454c3992a104c973116512d26a7c\": rpc error: code = NotFound desc = could not find container \"c7c35e8f88ea7cb3b4124ab73cb6f5940db3454c3992a104c973116512d26a7c\": container with ID starting with c7c35e8f88ea7cb3b4124ab73cb6f5940db3454c3992a104c973116512d26a7c not found: ID does not exist" Mar 18 08:50:18.636631 master-0 kubenswrapper[7620]: I0318 08:50:18.636590 7620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-xfq8l" Mar 18 08:50:18.636731 master-0 kubenswrapper[7620]: I0318 08:50:18.636649 7620 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-xfq8l" Mar 18 08:50:18.638468 master-0 kubenswrapper[7620]: I0318 08:50:18.638437 7620 patch_prober.go:28] interesting pod/openshift-config-operator-95bf4f4d-7kfrh container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.128.0.14:8443/healthz\": dial tcp 10.128.0.14:8443: connect: connection refused" start-of-body= Mar 18 08:50:18.638571 master-0 kubenswrapper[7620]: I0318 08:50:18.638483 7620 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-7kfrh" podUID="573d3a02-e395-4816-963a-cd614ef53f75" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.14:8443/healthz\": dial tcp 10.128.0.14:8443: connect: connection refused" Mar 18 08:50:18.679733 master-0 kubenswrapper[7620]: I0318 08:50:18.679673 7620 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-6m4q6" Mar 18 08:50:18.679949 master-0 kubenswrapper[7620]: I0318 08:50:18.679752 7620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-6m4q6" Mar 18 08:50:18.776576 master-0 kubenswrapper[7620]: I0318 08:50:18.776444 7620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ace4267e-c38d-46dd-9de6-c23339729a8b-kubelet-dir\") pod \"ace4267e-c38d-46dd-9de6-c23339729a8b\" (UID: \"ace4267e-c38d-46dd-9de6-c23339729a8b\") " Mar 18 08:50:18.776576 master-0 kubenswrapper[7620]: I0318 08:50:18.776569 7620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ace4267e-c38d-46dd-9de6-c23339729a8b-kube-api-access\") pod \"ace4267e-c38d-46dd-9de6-c23339729a8b\" (UID: 
\"ace4267e-c38d-46dd-9de6-c23339729a8b\") " Mar 18 08:50:18.776830 master-0 kubenswrapper[7620]: I0318 08:50:18.776584 7620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ace4267e-c38d-46dd-9de6-c23339729a8b-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "ace4267e-c38d-46dd-9de6-c23339729a8b" (UID: "ace4267e-c38d-46dd-9de6-c23339729a8b"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 08:50:18.777083 master-0 kubenswrapper[7620]: I0318 08:50:18.777031 7620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ace4267e-c38d-46dd-9de6-c23339729a8b-var-lock\") pod \"ace4267e-c38d-46dd-9de6-c23339729a8b\" (UID: \"ace4267e-c38d-46dd-9de6-c23339729a8b\") " Mar 18 08:50:18.777173 master-0 kubenswrapper[7620]: I0318 08:50:18.777138 7620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ace4267e-c38d-46dd-9de6-c23339729a8b-var-lock" (OuterVolumeSpecName: "var-lock") pod "ace4267e-c38d-46dd-9de6-c23339729a8b" (UID: "ace4267e-c38d-46dd-9de6-c23339729a8b"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 08:50:18.777461 master-0 kubenswrapper[7620]: I0318 08:50:18.777437 7620 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ace4267e-c38d-46dd-9de6-c23339729a8b-var-lock\") on node \"master-0\" DevicePath \"\"" Mar 18 08:50:18.777526 master-0 kubenswrapper[7620]: I0318 08:50:18.777465 7620 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ace4267e-c38d-46dd-9de6-c23339729a8b-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Mar 18 08:50:18.788062 master-0 kubenswrapper[7620]: I0318 08:50:18.788014 7620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ace4267e-c38d-46dd-9de6-c23339729a8b-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "ace4267e-c38d-46dd-9de6-c23339729a8b" (UID: "ace4267e-c38d-46dd-9de6-c23339729a8b"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 08:50:18.805698 master-0 kubenswrapper[7620]: E0318 08:50:18.805511 7620 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-18T08:50:08Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-18T08:50:08Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-18T08:50:08Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-18T08:50:08Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2abc1fd79e7781634ed5ed9e8f2b98b9094ea51f40ac3a773c5e5224607bf3d7\\\"],\\\"sizeBytes\\\":1637455533},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a55ec7ec64efd0f595d8084377b7e463a1807829b7617e5d4a9092dcd924c36\\\"],\\\"sizeBytes\\\":1238100502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:af0fe0ca926422a6471d5bf22fc0e682c36c24fba05496a3bdfac0b7d3733015\\\"],\\\"sizeBytes\\\":991832673},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b23c544d3894e5b31f66a18c554f03b0d29f92c2000c46b57b1c96da7ec25db9\\\"],\\\"sizeBytes\\\":943841779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0a09f5a3ba4f60cce0145769509bab92553c8075d210af4ac058965d2ae11efa\\\"],\\\"sizeBytes\\\":876160834},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6bba3f73c0066e42b24839e0d29f5dce2f36436f0a11f9f5e1029bccc5ed6578\\\"],\\\"sizeBytes\\\":862657321},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a9e8da5c6114f062b814936d4db7a47a04d248e160d6bb28ad4e4a081496ee
4\\\"],\\\"sizeBytes\\\":772943435},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1e1faad2d9167d84e23585c1cea5962301845548043cf09578f943f79ca98016\\\"],\\\"sizeBytes\\\":687949580},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5e12e4dc52214d3ada5ba5106caebe079eac1d9292c2571a5fe83411ce8e900d\\\"],\\\"sizeBytes\\\":683195416},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:aa5e782406f71c048b1ac3a4bf5d1227ff4be81111114083ad4c7a209c6bfb5a\\\"],\\\"sizeBytes\\\":677942383},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ec8fd46dfb35ed10e8f98933166f69ce579c2f35b8db03d21e4c34fc544553e4\\\"],\\\"sizeBytes\\\":621648710},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae50e496bd6ae2d27298d997470b7cb0a426eeb8b7e2e9c7187a34cb03993998\\\"],\\\"sizeBytes\\\":589386806},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c6a4383333a1fd6d05c3f60ec793913f7937ee3d77f002d85e6c61e20507bf55\\\"],\\\"sizeBytes\\\":582154903},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c2dd7a03348212e49876f5359f233d893a541ed9b934df390201a05133a06982\\\"],\\\"sizeBytes\\\":558211175},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7af9f5c5af9d529840233ef4b519120cc0e3f14c4fe28cc43b0823f2c11d8f89\\\"],\\\"sizeBytes\\\":548752816},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e29dc9f042f2d0471171a0611070886cb2f7c57338ab7f112613417bcd33b278\\\"],\\\"sizeBytes\\\":529326739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b4c9cf268bb7abef7af187cd775d3f74d0bd33626250095428d53b705ee946\\\"],\\\"sizeBytes\\\":528956487},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d5a971d5889f167cfe61a64c366424b87c17a6dc141ffcc43406cdcbb50cae2a\\\"],\\\"sizeBytes\\\":518384969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-release
@sha256:59727c4b3fef19e5149675cf3350735bbfe2c6588a57654b2e4552dd719f58b1\\\"],\\\"sizeBytes\\\":517999161},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c5ce3d1134d6500e2b8528516c1889d7bbc6259aba4981c6983395b0e9eeff65\\\"],\\\"sizeBytes\\\":514984269},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bfe394b58ec6195de8b8420e781b7630d85a412b9112d892fea903f92b783427\\\"],\\\"sizeBytes\\\":513221333},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1f23bac0a2a6cfd638e4af679dc787a8790d99c391f6e2ade8087dc477ff765e\\\"],\\\"sizeBytes\\\":512274055},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:77fff570657d2fa0bfb709b2c8b6665bae0bf90a2be981d8dbca56c674715098\\\"],\\\"sizeBytes\\\":511227324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adb9f6f2fd701863c7caed747df43f83d3569ba9388cfa33ea7219ac6a606b11\\\"],\\\"sizeBytes\\\":511164375},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c032f87ae61d6f757ff3ce52620a70a43516591987731f25da77aba152f17458\\\"],\\\"sizeBytes\\\":508888171},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:812819a9d712b9e345ef5f1404b242c281e2518ad724baebc393ec0fd3b3d263\\\"],\\\"sizeBytes\\\":508544745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:313d1d8ca85e65236a59f058a3316c49436dde691b3a3930d5bc5e3b4b8c8a71\\\"],\\\"sizeBytes\\\":507972093},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c527b4e8239a1f4f4e0a851113e7dd633b7dcb9d75b0e7b21c23d26304abcb3\\\"],\\\"sizeBytes\\\":506480167},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ef199844317b7b012879ed8d29f9b6bc37fad8a6fdb336103cbd5cabc74c4302\\\"],\\\"sizeBytes\\\":506395599},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7d4a034950346bcd4e36e9e2f1343e0cf7a10cf544963f33d09c7eb2a1bfc634\\\"],\\\"sizeBytes\\\":5
05345991},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fbbcb390de2563a0177b92fba1b5a65777366e2dc80e2808b61d87c41b47a2d\\\"],\\\"sizeBytes\\\":505246690},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c983016b9ceed0fca1f51bd49c2653243c7e5af91cbf2f478b091db6e028252\\\"],\\\"sizeBytes\\\":504625081},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:712d334b7752d95580571059aae2c50e111d879af4fd8ea7cc3dbaf1a8e7dc69\\\"],\\\"sizeBytes\\\":495994673},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4b5ea1ef4e09b673a0c68c8848ca162ab11d9ac373a377daa52dea702ffa3023\\\"],\\\"sizeBytes\\\":495065340},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:446bedea4916d3c1ee52be94137e484659e9561bd1de95c8189eee279aae984b\\\"],\\\"sizeBytes\\\":487096305},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a746a87b784ea1caa278fd0e012554f9df520b6fff665ea0bc4c83f487fed113\\\"],\\\"sizeBytes\\\":484450894},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:908eaaf624959bc7645f6d585d160431d1efb070e9a1f37fefed73a3be42b0d3\\\"],\\\"sizeBytes\\\":470681292},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ea5c8a93f30e0a4932da5697d22c0da7eda9a7035c0555eb006b6755e62bb2fc\\\"],\\\"sizeBytes\\\":468265024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d12d0dc7eb86bbedf6b2d7689a28fd51f0d928f720e4a6783744304297c661ed\\\"],\\\"sizeBytes\\\":465090934},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9609c00207cc4db97f0fd6162eb429d7f81654137f020a677e30cba26a887a24\\\"],\\\"sizeBytes\\\":463705930},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:632e80bba5077068ecca05fddb95aedebad4493af6f36152c01c6ae490975b62\\\"],\\\"sizeBytes\\\":458126937},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bcb08821551e9
a5b9f82aa794bcea673279cefb93cb47492e19ccac5e2cf18fe\\\"],\\\"sizeBytes\\\":456576198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:759fb1d5353dbbadd443f38631d977ca3aed9787b873be05cc9660532a252739\\\"],\\\"sizeBytes\\\":448828620},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3062f6485aec4770e60852b535c69a42527b305161fe856499c8658ead6d1e85\\\"],\\\"sizeBytes\\\":448042136},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:951ecfeba9b2da4b653034d09275f925396a79c2d8461b8a7c71c776fee67ba0\\\"],\\\"sizeBytes\\\":443272037},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:292560e2d80b460468bb19fe0ddf289767c655027b03a76ee6c40c91ffe4c483\\\"],\\\"sizeBytes\\\":438654374},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e66fd50be6f83ce321a566dfb76f3725b597374077d5af13813b928f6b1267e\\\"],\\\"sizeBytes\\\":411587146},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e3a494212f1ba17f0f0980eef583218330eccb56eadf6b8cb0548c76d99b5014\\\"],\\\"sizeBytes\\\":407347125},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:53d66d524ca3e787d8dbe30dbc4d9b8612c9cebd505ccb4375a8441814e85422\\\"],\\\"sizeBytes\\\":396521761}]}}\" for node \"master-0\": Patch \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0/status?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 18 08:50:18.878487 master-0 kubenswrapper[7620]: I0318 08:50:18.878429 7620 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ace4267e-c38d-46dd-9de6-c23339729a8b-kube-api-access\") on node \"master-0\" DevicePath \"\"" Mar 18 08:50:19.257511 master-0 kubenswrapper[7620]: I0318 08:50:19.257436 7620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 18 
08:50:19.292436 master-0 kubenswrapper[7620]: I0318 08:50:19.292368 7620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-q8ff6" Mar 18 08:50:19.524761 master-0 kubenswrapper[7620]: I0318 08:50:19.524572 7620 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 18 08:50:19.613876 master-0 kubenswrapper[7620]: I0318 08:50:19.613755 7620 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-1-master-0" Mar 18 08:50:19.676433 master-0 kubenswrapper[7620]: I0318 08:50:19.676360 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-xfq8l" podUID="95843eb5-33bc-48e8-afc4-a0bd8c524e24" containerName="registry-server" probeResult="failure" output=< Mar 18 08:50:19.676433 master-0 kubenswrapper[7620]: timeout: failed to connect service ":50051" within 1s Mar 18 08:50:19.676433 master-0 kubenswrapper[7620]: > Mar 18 08:50:19.718939 master-0 kubenswrapper[7620]: I0318 08:50:19.718832 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-6m4q6" podUID="833eeb49-a463-432a-a684-a27c66ecae7d" containerName="registry-server" probeResult="failure" output=< Mar 18 08:50:19.718939 master-0 kubenswrapper[7620]: timeout: failed to connect service ":50051" within 1s Mar 18 08:50:19.718939 master-0 kubenswrapper[7620]: > Mar 18 08:50:19.947401 master-0 kubenswrapper[7620]: I0318 08:50:19.947183 7620 patch_prober.go:28] interesting pod/openshift-config-operator-95bf4f4d-7kfrh container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.14:8443/healthz\": dial tcp 10.128.0.14:8443: connect: connection refused" start-of-body= Mar 18 08:50:19.947401 master-0 kubenswrapper[7620]: 
I0318 08:50:19.947301 7620 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-7kfrh" podUID="573d3a02-e395-4816-963a-cd614ef53f75" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.14:8443/healthz\": dial tcp 10.128.0.14:8443: connect: connection refused" Mar 18 08:50:20.024319 master-0 kubenswrapper[7620]: I0318 08:50:20.024222 7620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-ffks8" Mar 18 08:50:20.024319 master-0 kubenswrapper[7620]: I0318 08:50:20.024332 7620 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-ffks8" Mar 18 08:50:20.624839 master-0 kubenswrapper[7620]: I0318 08:50:20.624736 7620 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-1-master-0_1edfa49b-d0e7-4324-aace-b115b41ddae0/installer/0.log" Mar 18 08:50:20.625674 master-0 kubenswrapper[7620]: I0318 08:50:20.624931 7620 generic.go:334] "Generic (PLEG): container finished" podID="1edfa49b-d0e7-4324-aace-b115b41ddae0" containerID="91060a1df8ac508bd63d3fe87c3026c13bbc60c7a49e9b85f1b8ff384fcdd40b" exitCode=1 Mar 18 08:50:20.625674 master-0 kubenswrapper[7620]: I0318 08:50:20.625018 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-1-master-0" event={"ID":"1edfa49b-d0e7-4324-aace-b115b41ddae0","Type":"ContainerDied","Data":"91060a1df8ac508bd63d3fe87c3026c13bbc60c7a49e9b85f1b8ff384fcdd40b"} Mar 18 08:50:21.062389 master-0 kubenswrapper[7620]: I0318 08:50:21.062286 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-ffks8" podUID="d5c7ffb1-a1ab-4ca1-bdae-bcb09a759591" containerName="registry-server" probeResult="failure" output=< Mar 18 08:50:21.062389 master-0 kubenswrapper[7620]: timeout: failed to connect service ":50051" within 1s 
Mar 18 08:50:21.062389 master-0 kubenswrapper[7620]: > Mar 18 08:50:21.229607 master-0 kubenswrapper[7620]: I0318 08:50:21.229543 7620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-vgplg" Mar 18 08:50:21.229950 master-0 kubenswrapper[7620]: I0318 08:50:21.229904 7620 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-vgplg" Mar 18 08:50:21.307114 master-0 kubenswrapper[7620]: I0318 08:50:21.307057 7620 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-vgplg" Mar 18 08:50:21.639261 master-0 kubenswrapper[7620]: I0318 08:50:21.639189 7620 patch_prober.go:28] interesting pod/openshift-config-operator-95bf4f4d-7kfrh container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.128.0.14:8443/healthz\": dial tcp 10.128.0.14:8443: connect: connection refused" start-of-body= Mar 18 08:50:21.639955 master-0 kubenswrapper[7620]: I0318 08:50:21.639275 7620 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-7kfrh" podUID="573d3a02-e395-4816-963a-cd614ef53f75" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.14:8443/healthz\": dial tcp 10.128.0.14:8443: connect: connection refused" Mar 18 08:50:21.639955 master-0 kubenswrapper[7620]: I0318 08:50:21.639360 7620 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-7kfrh" Mar 18 08:50:21.640315 master-0 kubenswrapper[7620]: I0318 08:50:21.640243 7620 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="openshift-config-operator" containerStatusID={"Type":"cri-o","ID":"f62239815e692aa3c0449919f3f1826c911a4a455ec560cd817c662d02c7a9ae"} 
pod="openshift-config-operator/openshift-config-operator-95bf4f4d-7kfrh" containerMessage="Container openshift-config-operator failed liveness probe, will be restarted" Mar 18 08:50:21.640395 master-0 kubenswrapper[7620]: I0318 08:50:21.640336 7620 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-7kfrh" podUID="573d3a02-e395-4816-963a-cd614ef53f75" containerName="openshift-config-operator" containerID="cri-o://f62239815e692aa3c0449919f3f1826c911a4a455ec560cd817c662d02c7a9ae" gracePeriod=30 Mar 18 08:50:21.693017 master-0 kubenswrapper[7620]: I0318 08:50:21.692945 7620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-vgplg" Mar 18 08:50:22.012175 master-0 kubenswrapper[7620]: I0318 08:50:22.012122 7620 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-1-master-0_1edfa49b-d0e7-4324-aace-b115b41ddae0/installer/0.log" Mar 18 08:50:22.012348 master-0 kubenswrapper[7620]: I0318 08:50:22.012200 7620 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-1-master-0" Mar 18 08:50:22.124808 master-0 kubenswrapper[7620]: I0318 08:50:22.124723 7620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1edfa49b-d0e7-4324-aace-b115b41ddae0-kubelet-dir\") pod \"1edfa49b-d0e7-4324-aace-b115b41ddae0\" (UID: \"1edfa49b-d0e7-4324-aace-b115b41ddae0\") " Mar 18 08:50:22.124808 master-0 kubenswrapper[7620]: I0318 08:50:22.124803 7620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1edfa49b-d0e7-4324-aace-b115b41ddae0-kube-api-access\") pod \"1edfa49b-d0e7-4324-aace-b115b41ddae0\" (UID: \"1edfa49b-d0e7-4324-aace-b115b41ddae0\") " Mar 18 08:50:22.125074 master-0 kubenswrapper[7620]: I0318 08:50:22.124886 7620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/1edfa49b-d0e7-4324-aace-b115b41ddae0-var-lock\") pod \"1edfa49b-d0e7-4324-aace-b115b41ddae0\" (UID: \"1edfa49b-d0e7-4324-aace-b115b41ddae0\") " Mar 18 08:50:22.125074 master-0 kubenswrapper[7620]: I0318 08:50:22.124876 7620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1edfa49b-d0e7-4324-aace-b115b41ddae0-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "1edfa49b-d0e7-4324-aace-b115b41ddae0" (UID: "1edfa49b-d0e7-4324-aace-b115b41ddae0"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 08:50:22.125074 master-0 kubenswrapper[7620]: I0318 08:50:22.125033 7620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1edfa49b-d0e7-4324-aace-b115b41ddae0-var-lock" (OuterVolumeSpecName: "var-lock") pod "1edfa49b-d0e7-4324-aace-b115b41ddae0" (UID: "1edfa49b-d0e7-4324-aace-b115b41ddae0"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 08:50:22.125216 master-0 kubenswrapper[7620]: I0318 08:50:22.125189 7620 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1edfa49b-d0e7-4324-aace-b115b41ddae0-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Mar 18 08:50:22.125216 master-0 kubenswrapper[7620]: I0318 08:50:22.125211 7620 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/1edfa49b-d0e7-4324-aace-b115b41ddae0-var-lock\") on node \"master-0\" DevicePath \"\"" Mar 18 08:50:22.129102 master-0 kubenswrapper[7620]: I0318 08:50:22.129051 7620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1edfa49b-d0e7-4324-aace-b115b41ddae0-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1edfa49b-d0e7-4324-aace-b115b41ddae0" (UID: "1edfa49b-d0e7-4324-aace-b115b41ddae0"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 08:50:22.226817 master-0 kubenswrapper[7620]: I0318 08:50:22.226771 7620 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1edfa49b-d0e7-4324-aace-b115b41ddae0-kube-api-access\") on node \"master-0\" DevicePath \"\"" Mar 18 08:50:22.525909 master-0 kubenswrapper[7620]: I0318 08:50:22.525660 7620 prober.go:107] "Probe failed" probeType="Startup" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="46f265536aba6292ead501bc9b49f327" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 18 08:50:22.643486 master-0 kubenswrapper[7620]: I0318 08:50:22.643403 7620 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-1-master-0_1edfa49b-d0e7-4324-aace-b115b41ddae0/installer/0.log" Mar 18 08:50:22.644250 master-0 kubenswrapper[7620]: I0318 08:50:22.643654 7620 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-1-master-0" Mar 18 08:50:22.644250 master-0 kubenswrapper[7620]: I0318 08:50:22.643659 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-1-master-0" event={"ID":"1edfa49b-d0e7-4324-aace-b115b41ddae0","Type":"ContainerDied","Data":"be0a7a0ac0aa5258d96034f680e2106c4672594f5322381bd2ce5d9a5f255068"} Mar 18 08:50:22.644250 master-0 kubenswrapper[7620]: I0318 08:50:22.643744 7620 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="be0a7a0ac0aa5258d96034f680e2106c4672594f5322381bd2ce5d9a5f255068" Mar 18 08:50:22.947632 master-0 kubenswrapper[7620]: I0318 08:50:22.947469 7620 patch_prober.go:28] interesting pod/openshift-config-operator-95bf4f4d-7kfrh container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.14:8443/healthz\": dial tcp 10.128.0.14:8443: connect: connection refused" start-of-body= Mar 18 08:50:22.947632 master-0 kubenswrapper[7620]: I0318 08:50:22.947601 7620 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-7kfrh" podUID="573d3a02-e395-4816-963a-cd614ef53f75" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.14:8443/healthz\": dial tcp 10.128.0.14:8443: connect: connection refused" Mar 18 08:50:22.947869 master-0 kubenswrapper[7620]: I0318 08:50:22.947752 7620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-7kfrh" Mar 18 08:50:25.947357 master-0 kubenswrapper[7620]: I0318 08:50:25.947267 7620 patch_prober.go:28] interesting pod/openshift-config-operator-95bf4f4d-7kfrh container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.14:8443/healthz\": dial tcp 10.128.0.14:8443: 
connect: connection refused" start-of-body= Mar 18 08:50:25.948191 master-0 kubenswrapper[7620]: I0318 08:50:25.947395 7620 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-7kfrh" podUID="573d3a02-e395-4816-963a-cd614ef53f75" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.14:8443/healthz\": dial tcp 10.128.0.14:8443: connect: connection refused" Mar 18 08:50:26.683900 master-0 kubenswrapper[7620]: I0318 08:50:26.683637 7620 generic.go:334] "Generic (PLEG): container finished" podID="d664a6d0d2a24360dee10612610f1b59" containerID="9800e6635085398983100da46b5c98be777ae33c91aaadd0c04fcadcfe49593f" exitCode=0 Mar 18 08:50:27.955756 master-0 kubenswrapper[7620]: I0318 08:50:27.955658 7620 patch_prober.go:28] interesting pod/authentication-operator-5885bfd7f4-5g8tz container/authentication-operator namespace/openshift-authentication-operator: Liveness probe status=failure output="Get \"https://10.128.0.26:8443/healthz\": dial tcp 10.128.0.26:8443: connect: connection refused" start-of-body= Mar 18 08:50:27.956510 master-0 kubenswrapper[7620]: I0318 08:50:27.955782 7620 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-5g8tz" podUID="c110b293-2c6b-496b-b015-23aada98cb4b" containerName="authentication-operator" probeResult="failure" output="Get \"https://10.128.0.26:8443/healthz\": dial tcp 10.128.0.26:8443: connect: connection refused" Mar 18 08:50:28.523939 master-0 kubenswrapper[7620]: E0318 08:50:28.523729 7620 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 18 08:50:28.580042 master-0 kubenswrapper[7620]: E0318 08:50:28.579938 7620 kubelet.go:1929] "Failed creating a 
mirror pod for" err="Internal error occurred: admission plugin \"LimitRanger\" failed to complete mutation in 13s" pod="openshift-etcd/etcd-master-0" Mar 18 08:50:28.706305 master-0 kubenswrapper[7620]: I0318 08:50:28.706202 7620 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-xfq8l" Mar 18 08:50:28.727560 master-0 kubenswrapper[7620]: I0318 08:50:28.727492 7620 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-6m4q6" Mar 18 08:50:28.769899 master-0 kubenswrapper[7620]: I0318 08:50:28.769783 7620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-xfq8l" Mar 18 08:50:28.800441 master-0 kubenswrapper[7620]: I0318 08:50:28.800362 7620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-6m4q6" Mar 18 08:50:28.806752 master-0 kubenswrapper[7620]: E0318 08:50:28.806683 7620 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 18 08:50:28.947315 master-0 kubenswrapper[7620]: I0318 08:50:28.947244 7620 patch_prober.go:28] interesting pod/openshift-config-operator-95bf4f4d-7kfrh container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.14:8443/healthz\": dial tcp 10.128.0.14:8443: connect: connection refused" start-of-body= Mar 18 08:50:28.947548 master-0 kubenswrapper[7620]: I0318 08:50:28.947322 7620 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-7kfrh" podUID="573d3a02-e395-4816-963a-cd614ef53f75" containerName="openshift-config-operator" 
probeResult="failure" output="Get \"https://10.128.0.14:8443/healthz\": dial tcp 10.128.0.14:8443: connect: connection refused" Mar 18 08:50:29.265227 master-0 kubenswrapper[7620]: I0318 08:50:29.265140 7620 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0-master-0_d664a6d0d2a24360dee10612610f1b59/etcdctl/0.log" Mar 18 08:50:29.265974 master-0 kubenswrapper[7620]: I0318 08:50:29.265306 7620 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-master-0-master-0" Mar 18 08:50:29.342280 master-0 kubenswrapper[7620]: I0318 08:50:29.342067 7620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/host-path/d664a6d0d2a24360dee10612610f1b59-certs\") pod \"d664a6d0d2a24360dee10612610f1b59\" (UID: \"d664a6d0d2a24360dee10612610f1b59\") " Mar 18 08:50:29.342280 master-0 kubenswrapper[7620]: I0318 08:50:29.342135 7620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/d664a6d0d2a24360dee10612610f1b59-data-dir\") pod \"d664a6d0d2a24360dee10612610f1b59\" (UID: \"d664a6d0d2a24360dee10612610f1b59\") " Mar 18 08:50:29.342280 master-0 kubenswrapper[7620]: I0318 08:50:29.342252 7620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d664a6d0d2a24360dee10612610f1b59-certs" (OuterVolumeSpecName: "certs") pod "d664a6d0d2a24360dee10612610f1b59" (UID: "d664a6d0d2a24360dee10612610f1b59"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 08:50:29.348628 master-0 kubenswrapper[7620]: I0318 08:50:29.342321 7620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d664a6d0d2a24360dee10612610f1b59-data-dir" (OuterVolumeSpecName: "data-dir") pod "d664a6d0d2a24360dee10612610f1b59" (UID: "d664a6d0d2a24360dee10612610f1b59"). 
InnerVolumeSpecName "data-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 08:50:29.348628 master-0 kubenswrapper[7620]: I0318 08:50:29.345655 7620 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/host-path/d664a6d0d2a24360dee10612610f1b59-certs\") on node \"master-0\" DevicePath \"\"" Mar 18 08:50:29.348628 master-0 kubenswrapper[7620]: I0318 08:50:29.345707 7620 reconciler_common.go:293] "Volume detached for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/d664a6d0d2a24360dee10612610f1b59-data-dir\") on node \"master-0\" DevicePath \"\"" Mar 18 08:50:29.710088 master-0 kubenswrapper[7620]: I0318 08:50:29.710028 7620 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0-master-0_d664a6d0d2a24360dee10612610f1b59/etcdctl/0.log" Mar 18 08:50:29.710088 master-0 kubenswrapper[7620]: I0318 08:50:29.710081 7620 generic.go:334] "Generic (PLEG): container finished" podID="d664a6d0d2a24360dee10612610f1b59" containerID="a59e8ee01c3a8fb148407d497fd43107751c8a2b3e30b228b085568e5f8dd0de" exitCode=137 Mar 18 08:50:29.710907 master-0 kubenswrapper[7620]: I0318 08:50:29.710192 7620 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-master-0-master-0" Mar 18 08:50:29.710907 master-0 kubenswrapper[7620]: I0318 08:50:29.710203 7620 scope.go:117] "RemoveContainer" containerID="9800e6635085398983100da46b5c98be777ae33c91aaadd0c04fcadcfe49593f" Mar 18 08:50:29.716078 master-0 kubenswrapper[7620]: I0318 08:50:29.715993 7620 generic.go:334] "Generic (PLEG): container finished" podID="24b4ed170d527099878cb5fdd508a2fb" containerID="037e687423ee2fc5069c12833ee3a78d87a572548a03166d976a62f7a2c74f3d" exitCode=0 Mar 18 08:50:29.716078 master-0 kubenswrapper[7620]: I0318 08:50:29.716050 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"24b4ed170d527099878cb5fdd508a2fb","Type":"ContainerDied","Data":"037e687423ee2fc5069c12833ee3a78d87a572548a03166d976a62f7a2c74f3d"} Mar 18 08:50:29.734364 master-0 kubenswrapper[7620]: I0318 08:50:29.734295 7620 scope.go:117] "RemoveContainer" containerID="a59e8ee01c3a8fb148407d497fd43107751c8a2b3e30b228b085568e5f8dd0de" Mar 18 08:50:29.756425 master-0 kubenswrapper[7620]: I0318 08:50:29.756382 7620 scope.go:117] "RemoveContainer" containerID="9800e6635085398983100da46b5c98be777ae33c91aaadd0c04fcadcfe49593f" Mar 18 08:50:29.757221 master-0 kubenswrapper[7620]: E0318 08:50:29.757164 7620 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9800e6635085398983100da46b5c98be777ae33c91aaadd0c04fcadcfe49593f\": container with ID starting with 9800e6635085398983100da46b5c98be777ae33c91aaadd0c04fcadcfe49593f not found: ID does not exist" containerID="9800e6635085398983100da46b5c98be777ae33c91aaadd0c04fcadcfe49593f" Mar 18 08:50:29.757351 master-0 kubenswrapper[7620]: I0318 08:50:29.757225 7620 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9800e6635085398983100da46b5c98be777ae33c91aaadd0c04fcadcfe49593f"} err="failed to get container status 
\"9800e6635085398983100da46b5c98be777ae33c91aaadd0c04fcadcfe49593f\": rpc error: code = NotFound desc = could not find container \"9800e6635085398983100da46b5c98be777ae33c91aaadd0c04fcadcfe49593f\": container with ID starting with 9800e6635085398983100da46b5c98be777ae33c91aaadd0c04fcadcfe49593f not found: ID does not exist" Mar 18 08:50:29.757351 master-0 kubenswrapper[7620]: I0318 08:50:29.757268 7620 scope.go:117] "RemoveContainer" containerID="a59e8ee01c3a8fb148407d497fd43107751c8a2b3e30b228b085568e5f8dd0de" Mar 18 08:50:29.757978 master-0 kubenswrapper[7620]: E0318 08:50:29.757931 7620 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a59e8ee01c3a8fb148407d497fd43107751c8a2b3e30b228b085568e5f8dd0de\": container with ID starting with a59e8ee01c3a8fb148407d497fd43107751c8a2b3e30b228b085568e5f8dd0de not found: ID does not exist" containerID="a59e8ee01c3a8fb148407d497fd43107751c8a2b3e30b228b085568e5f8dd0de" Mar 18 08:50:29.758092 master-0 kubenswrapper[7620]: I0318 08:50:29.757992 7620 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a59e8ee01c3a8fb148407d497fd43107751c8a2b3e30b228b085568e5f8dd0de"} err="failed to get container status \"a59e8ee01c3a8fb148407d497fd43107751c8a2b3e30b228b085568e5f8dd0de\": rpc error: code = NotFound desc = could not find container \"a59e8ee01c3a8fb148407d497fd43107751c8a2b3e30b228b085568e5f8dd0de\": container with ID starting with a59e8ee01c3a8fb148407d497fd43107751c8a2b3e30b228b085568e5f8dd0de not found: ID does not exist" Mar 18 08:50:30.077234 master-0 kubenswrapper[7620]: I0318 08:50:30.077159 7620 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-ffks8" Mar 18 08:50:30.118658 master-0 kubenswrapper[7620]: I0318 08:50:30.118595 7620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-ffks8" 
Mar 18 08:50:30.237554 master-0 kubenswrapper[7620]: I0318 08:50:30.237454 7620 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d664a6d0d2a24360dee10612610f1b59" path="/var/lib/kubelet/pods/d664a6d0d2a24360dee10612610f1b59/volumes" Mar 18 08:50:30.238335 master-0 kubenswrapper[7620]: I0318 08:50:30.238279 7620 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0-master-0" podUID="" Mar 18 08:50:31.947098 master-0 kubenswrapper[7620]: I0318 08:50:31.947003 7620 patch_prober.go:28] interesting pod/openshift-config-operator-95bf4f4d-7kfrh container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.14:8443/healthz\": dial tcp 10.128.0.14:8443: connect: connection refused" start-of-body= Mar 18 08:50:31.947098 master-0 kubenswrapper[7620]: I0318 08:50:31.947095 7620 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-7kfrh" podUID="573d3a02-e395-4816-963a-cd614ef53f75" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.14:8443/healthz\": dial tcp 10.128.0.14:8443: connect: connection refused" Mar 18 08:50:32.525243 master-0 kubenswrapper[7620]: I0318 08:50:32.525106 7620 prober.go:107] "Probe failed" probeType="Startup" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="46f265536aba6292ead501bc9b49f327" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 18 08:50:33.057214 master-0 kubenswrapper[7620]: E0318 08:50:33.056984 7620 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{etcd-master-0-master-0.189de358dca7193f 
openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-master-0-master-0,UID:d664a6d0d2a24360dee10612610f1b59,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Killing,Message:Stopping container etcdctl,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 08:49:59.052409151 +0000 UTC m=+63.047190913,LastTimestamp:2026-03-18 08:49:59.052409151 +0000 UTC m=+63.047190913,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 08:50:33.063255 master-0 kubenswrapper[7620]: E0318 08:50:33.063178 7620 projected.go:194] Error preparing data for projected volume kube-api-access-zj9rk for pod openshift-machine-api/cluster-baremetal-operator-6f69995874-cf6qn: failed to fetch token: Timeout: request did not complete within requested timeout - context deadline exceeded Mar 18 08:50:33.063635 master-0 kubenswrapper[7620]: E0318 08:50:33.063552 7620 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-controller-manager/installer-2-master-0: failed to fetch token: Timeout: request did not complete within requested timeout - context deadline exceeded Mar 18 08:50:33.063782 master-0 kubenswrapper[7620]: E0318 08:50:33.063663 7620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/28d2bb97-ff93-4772-96fd-318fa62e3a87-kube-api-access podName:28d2bb97-ff93-4772-96fd-318fa62e3a87 nodeName:}" failed. No retries permitted until 2026-03-18 08:50:33.563636084 +0000 UTC m=+97.558417876 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/28d2bb97-ff93-4772-96fd-318fa62e3a87-kube-api-access") pod "installer-2-master-0" (UID: "28d2bb97-ff93-4772-96fd-318fa62e3a87") : failed to fetch token: Timeout: request did not complete within requested timeout - context deadline exceeded
Mar 18 08:50:33.063782 master-0 kubenswrapper[7620]: E0318 08:50:33.063723 7620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/97730ec2-e6f1-4f8c-b85c-3c10623d06ce-kube-api-access-zj9rk podName:97730ec2-e6f1-4f8c-b85c-3c10623d06ce nodeName:}" failed. No retries permitted until 2026-03-18 08:50:33.563709106 +0000 UTC m=+97.558490898 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-zj9rk" (UniqueName: "kubernetes.io/projected/97730ec2-e6f1-4f8c-b85c-3c10623d06ce-kube-api-access-zj9rk") pod "cluster-baremetal-operator-6f69995874-cf6qn" (UID: "97730ec2-e6f1-4f8c-b85c-3c10623d06ce") : failed to fetch token: Timeout: request did not complete within requested timeout - context deadline exceeded
Mar 18 08:50:33.097209 master-0 kubenswrapper[7620]: E0318 08:50:33.097117 7620 projected.go:194] Error preparing data for projected volume kube-api-access-vtz82 for pod openshift-cluster-samples-operator/cluster-samples-operator-85f7577d78-swcvh: failed to fetch token: Timeout: request did not complete within requested timeout - context deadline exceeded
Mar 18 08:50:33.097522 master-0 kubenswrapper[7620]: E0318 08:50:33.097268 7620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/18921497-d8ed-42d8-bf3c-a027566ebe85-kube-api-access-vtz82 podName:18921497-d8ed-42d8-bf3c-a027566ebe85 nodeName:}" failed. No retries permitted until 2026-03-18 08:50:33.597229967 +0000 UTC m=+97.592011759 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-vtz82" (UniqueName: "kubernetes.io/projected/18921497-d8ed-42d8-bf3c-a027566ebe85-kube-api-access-vtz82") pod "cluster-samples-operator-85f7577d78-swcvh" (UID: "18921497-d8ed-42d8-bf3c-a027566ebe85") : failed to fetch token: Timeout: request did not complete within requested timeout - context deadline exceeded
Mar 18 08:50:33.612005 master-0 kubenswrapper[7620]: I0318 08:50:33.611890 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vtz82\" (UniqueName: \"kubernetes.io/projected/18921497-d8ed-42d8-bf3c-a027566ebe85-kube-api-access-vtz82\") pod \"cluster-samples-operator-85f7577d78-swcvh\" (UID: \"18921497-d8ed-42d8-bf3c-a027566ebe85\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-85f7577d78-swcvh"
Mar 18 08:50:33.612347 master-0 kubenswrapper[7620]: I0318 08:50:33.612115 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/28d2bb97-ff93-4772-96fd-318fa62e3a87-kube-api-access\") pod \"installer-2-master-0\" (UID: \"28d2bb97-ff93-4772-96fd-318fa62e3a87\") " pod="openshift-kube-controller-manager/installer-2-master-0"
Mar 18 08:50:33.612347 master-0 kubenswrapper[7620]: I0318 08:50:33.612223 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zj9rk\" (UniqueName: \"kubernetes.io/projected/97730ec2-e6f1-4f8c-b85c-3c10623d06ce-kube-api-access-zj9rk\") pod \"cluster-baremetal-operator-6f69995874-cf6qn\" (UID: \"97730ec2-e6f1-4f8c-b85c-3c10623d06ce\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-cf6qn"
Mar 18 08:50:34.947522 master-0 kubenswrapper[7620]: I0318 08:50:34.947393 7620 patch_prober.go:28] interesting pod/openshift-config-operator-95bf4f4d-7kfrh container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.14:8443/healthz\": dial tcp 10.128.0.14:8443: connect: connection refused" start-of-body=
Mar 18 08:50:34.947522 master-0 kubenswrapper[7620]: I0318 08:50:34.947499 7620 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-7kfrh" podUID="573d3a02-e395-4816-963a-cd614ef53f75" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.14:8443/healthz\": dial tcp 10.128.0.14:8443: connect: connection refused"
Mar 18 08:50:37.771101 master-0 kubenswrapper[7620]: I0318 08:50:37.771034 7620 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-operator_network-operator-7bd846bfc4-5r5r4_07a4fd92-0fd1-4688-b2db-de615d75971e/network-operator/0.log"
Mar 18 08:50:37.772107 master-0 kubenswrapper[7620]: I0318 08:50:37.771110 7620 generic.go:334] "Generic (PLEG): container finished" podID="07a4fd92-0fd1-4688-b2db-de615d75971e" containerID="20bac68a3a787cd3ab838f8bf47eee1e23fd920610fa248db61e044af450ce49" exitCode=255
Mar 18 08:50:37.947837 master-0 kubenswrapper[7620]: I0318 08:50:37.947699 7620 patch_prober.go:28] interesting pod/openshift-config-operator-95bf4f4d-7kfrh container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.14:8443/healthz\": dial tcp 10.128.0.14:8443: connect: connection refused" start-of-body=
Mar 18 08:50:37.947837 master-0 kubenswrapper[7620]: I0318 08:50:37.947814 7620 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-7kfrh" podUID="573d3a02-e395-4816-963a-cd614ef53f75" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.14:8443/healthz\": dial tcp 10.128.0.14:8443: connect: connection refused"
Mar 18 08:50:37.955468 master-0 kubenswrapper[7620]: I0318 08:50:37.955428 7620 patch_prober.go:28] interesting pod/authentication-operator-5885bfd7f4-5g8tz container/authentication-operator namespace/openshift-authentication-operator: Liveness probe status=failure output="Get \"https://10.128.0.26:8443/healthz\": dial tcp 10.128.0.26:8443: connect: connection refused" start-of-body=
Mar 18 08:50:37.955652 master-0 kubenswrapper[7620]: I0318 08:50:37.955469 7620 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-5g8tz" podUID="c110b293-2c6b-496b-b015-23aada98cb4b" containerName="authentication-operator" probeResult="failure" output="Get \"https://10.128.0.26:8443/healthz\": dial tcp 10.128.0.26:8443: connect: connection refused"
Mar 18 08:50:38.524309 master-0 kubenswrapper[7620]: E0318 08:50:38.524171 7620 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Mar 18 08:50:38.808259 master-0 kubenswrapper[7620]: E0318 08:50:38.808076 7620 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Mar 18 08:50:39.788559 master-0 kubenswrapper[7620]: I0318 08:50:39.788485 7620 generic.go:334] "Generic (PLEG): container finished" podID="260c8aa5-a288-4ee8-b671-f97e90a2f39c" containerID="42ba60928089ecdd2be6dc0bf250cb571a47fd29cfa3690db6c3f8f43ab0c4ba" exitCode=0
Mar 18 08:50:40.947890 master-0 kubenswrapper[7620]: I0318 08:50:40.947724 7620 patch_prober.go:28] interesting pod/openshift-config-operator-95bf4f4d-7kfrh container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.14:8443/healthz\": dial tcp 10.128.0.14:8443: connect: connection refused" start-of-body=
Mar 18 08:50:40.947890 master-0 kubenswrapper[7620]: I0318 08:50:40.947846 7620 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-7kfrh" podUID="573d3a02-e395-4816-963a-cd614ef53f75" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.14:8443/healthz\": dial tcp 10.128.0.14:8443: connect: connection refused"
Mar 18 08:50:42.524945 master-0 kubenswrapper[7620]: I0318 08:50:42.524792 7620 prober.go:107] "Probe failed" probeType="Startup" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="46f265536aba6292ead501bc9b49f327" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Mar 18 08:50:42.724709 master-0 kubenswrapper[7620]: E0318 08:50:42.724590 7620 kubelet.go:1929] "Failed creating a mirror pod for" err="Internal error occurred: admission plugin \"LimitRanger\" failed to complete mutation in 13s" pod="openshift-etcd/etcd-master-0"
Mar 18 08:50:43.947651 master-0 kubenswrapper[7620]: I0318 08:50:43.947582 7620 patch_prober.go:28] interesting pod/openshift-config-operator-95bf4f4d-7kfrh container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.14:8443/healthz\": dial tcp 10.128.0.14:8443: connect: connection refused" start-of-body=
Mar 18 08:50:43.948378 master-0 kubenswrapper[7620]: I0318 08:50:43.947677 7620 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-7kfrh" podUID="573d3a02-e395-4816-963a-cd614ef53f75" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.14:8443/healthz\": dial tcp 10.128.0.14:8443: connect: connection refused"
Mar 18 08:50:44.830697 master-0 kubenswrapper[7620]: I0318 08:50:44.830613 7620 generic.go:334] "Generic (PLEG): container finished" podID="24b4ed170d527099878cb5fdd508a2fb" containerID="8fcf2dc21bde9860c2fe58020881a99530b56c8c984671257fbc4e8d33dd7119" exitCode=0
Mar 18 08:50:44.832285 master-0 kubenswrapper[7620]: I0318 08:50:44.832260 7620 generic.go:334] "Generic (PLEG): container finished" podID="c110b293-2c6b-496b-b015-23aada98cb4b" containerID="851a9b4a39c1a238b36e5625cadf0309e8c60fabaa4ea940ca6a7ae0197a27fb" exitCode=0
Mar 18 08:50:44.834257 master-0 kubenswrapper[7620]: I0318 08:50:44.834171 7620 generic.go:334] "Generic (PLEG): container finished" podID="8a6ab2be-d018-4fd5-bfbb-6b88aec28663" containerID="5e84b000c1316fb6659579cb173f67777226d532d34aa25b987bd230e2ca4fb7" exitCode=0
Mar 18 08:50:46.852170 master-0 kubenswrapper[7620]: I0318 08:50:46.851981 7620 generic.go:334] "Generic (PLEG): container finished" podID="5982111d-f4c6-4335-9b40-3142758fc2bc" containerID="9375c67121087e2f83dd2c8b94c0ff17721fa9588235ead108bb8a1e451225b5" exitCode=0
Mar 18 08:50:46.947683 master-0 kubenswrapper[7620]: I0318 08:50:46.947591 7620 patch_prober.go:28] interesting pod/openshift-config-operator-95bf4f4d-7kfrh container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.14:8443/healthz\": dial tcp 10.128.0.14:8443: connect: connection refused" start-of-body=
Mar 18 08:50:46.948152 master-0 kubenswrapper[7620]: I0318 08:50:46.947696 7620 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-7kfrh" podUID="573d3a02-e395-4816-963a-cd614ef53f75" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.14:8443/healthz\": dial tcp 10.128.0.14:8443: connect: connection refused"
Mar 18 08:50:48.525984 master-0 kubenswrapper[7620]: E0318 08:50:48.525678 7620 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Mar 18 08:50:48.810182 master-0 kubenswrapper[7620]: E0318 08:50:48.808999 7620 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Mar 18 08:50:48.869796 master-0 kubenswrapper[7620]: I0318 08:50:48.869694 7620 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-node-identity_network-node-identity-n5vqx_16d633c5-e0aa-4fb6-83e0-a2e976334406/approver/0.log"
Mar 18 08:50:48.870416 master-0 kubenswrapper[7620]: I0318 08:50:48.870346 7620 generic.go:334] "Generic (PLEG): container finished" podID="16d633c5-e0aa-4fb6-83e0-a2e976334406" containerID="9d4723f8591cc64ff0653aec9e9efb152a03ef27364e5787d1d3d8ff7d6020e4" exitCode=1
Mar 18 08:50:49.881890 master-0 kubenswrapper[7620]: I0318 08:50:49.881752 7620 generic.go:334] "Generic (PLEG): container finished" podID="573d3a02-e395-4816-963a-cd614ef53f75" containerID="f62239815e692aa3c0449919f3f1826c911a4a455ec560cd817c662d02c7a9ae" exitCode=0
Mar 18 08:50:52.948446 master-0 kubenswrapper[7620]: I0318 08:50:52.948302 7620 patch_prober.go:28] interesting pod/openshift-config-operator-95bf4f4d-7kfrh container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.14:8443/healthz\": dial tcp 10.128.0.14:8443: connect: connection refused" start-of-body=
Mar 18 08:50:52.948446 master-0 kubenswrapper[7620]: I0318 08:50:52.948436 7620 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-7kfrh" podUID="573d3a02-e395-4816-963a-cd614ef53f75" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.14:8443/healthz\": dial tcp 10.128.0.14:8443: connect: connection refused"
Mar 18 08:50:53.918655 master-0 kubenswrapper[7620]: I0318 08:50:53.918550 7620 generic.go:334] "Generic (PLEG): container finished" podID="e2ade7e6-cecd-4e98-8f85-ea8219303d75" containerID="77402342b68e7cb4ec7ebd972b9ac7442e45f3236ab9cfbb373363dfbf591b0c" exitCode=0
Mar 18 08:50:54.638667 master-0 kubenswrapper[7620]: I0318 08:50:54.638554 7620 patch_prober.go:28] interesting pod/openshift-config-operator-95bf4f4d-7kfrh container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.128.0.14:8443/healthz\": dial tcp 10.128.0.14:8443: connect: connection refused" start-of-body=
Mar 18 08:50:54.638667 master-0 kubenswrapper[7620]: I0318 08:50:54.638665 7620 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-7kfrh" podUID="573d3a02-e395-4816-963a-cd614ef53f75" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.14:8443/healthz\": dial tcp 10.128.0.14:8443: connect: connection refused"
Mar 18 08:50:55.947845 master-0 kubenswrapper[7620]: I0318 08:50:55.947721 7620 patch_prober.go:28] interesting pod/openshift-config-operator-95bf4f4d-7kfrh container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.14:8443/healthz\": dial tcp 10.128.0.14:8443: connect: connection refused" start-of-body=
Mar 18 08:50:55.948745 master-0 kubenswrapper[7620]: I0318 08:50:55.947827 7620 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-7kfrh" podUID="573d3a02-e395-4816-963a-cd614ef53f75" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.14:8443/healthz\": dial tcp 10.128.0.14:8443: connect: connection refused"
Mar 18 08:50:56.497443 master-0 kubenswrapper[7620]: I0318 08:50:56.497372 7620 patch_prober.go:28] interesting pod/etcd-operator-8544cbcf9c-f4jvq container/etcd-operator namespace/openshift-etcd-operator: Liveness probe status=failure output="Get \"https://10.128.0.13:8443/healthz\": dial tcp 10.128.0.13:8443: connect: connection refused" start-of-body=
Mar 18 08:50:56.497955 master-0 kubenswrapper[7620]: I0318 08:50:56.497914 7620 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-f4jvq" podUID="939efa41-8f40-4f91-bee4-0425aead9760" containerName="etcd-operator" probeResult="failure" output="Get \"https://10.128.0.13:8443/healthz\": dial tcp 10.128.0.13:8443: connect: connection refused"
Mar 18 08:50:57.639736 master-0 kubenswrapper[7620]: I0318 08:50:57.639638 7620 patch_prober.go:28] interesting pod/openshift-config-operator-95bf4f4d-7kfrh container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.128.0.14:8443/healthz\": dial tcp 10.128.0.14:8443: connect: connection refused" start-of-body=
Mar 18 08:50:57.640601 master-0 kubenswrapper[7620]: I0318 08:50:57.639745 7620 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-7kfrh" podUID="573d3a02-e395-4816-963a-cd614ef53f75" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.14:8443/healthz\": dial tcp 10.128.0.14:8443: connect: connection refused"
Mar 18 08:50:57.839528 master-0 kubenswrapper[7620]: E0318 08:50:57.839422 7620 kubelet.go:1929] "Failed creating a mirror pod for" err="Internal error occurred: admission plugin \"LimitRanger\" failed to complete mutation in 13s" pod="openshift-etcd/etcd-master-0"
Mar 18 08:50:58.526227 master-0 kubenswrapper[7620]: E0318 08:50:58.526163 7620 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": context deadline exceeded"
Mar 18 08:50:58.526900 master-0 kubenswrapper[7620]: I0318 08:50:58.526846 7620 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease"
Mar 18 08:50:58.810267 master-0 kubenswrapper[7620]: E0318 08:50:58.810160 7620 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Mar 18 08:50:58.810267 master-0 kubenswrapper[7620]: E0318 08:50:58.810218 7620 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count"
Mar 18 08:50:58.947375 master-0 kubenswrapper[7620]: I0318 08:50:58.947272 7620 patch_prober.go:28] interesting pod/openshift-config-operator-95bf4f4d-7kfrh container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.14:8443/healthz\": dial tcp 10.128.0.14:8443: connect: connection refused" start-of-body=
Mar 18 08:50:58.947588 master-0 kubenswrapper[7620]: I0318 08:50:58.947402 7620 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-7kfrh" podUID="573d3a02-e395-4816-963a-cd614ef53f75" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.14:8443/healthz\": dial tcp 10.128.0.14:8443: connect: connection refused"
Mar 18 08:50:58.962236 master-0 kubenswrapper[7620]: I0318 08:50:58.962166 7620 generic.go:334] "Generic (PLEG): container finished" podID="ec11012b-536a-422f-afc4-d2d0fd4b67fb" containerID="b192c774019baaa7e62a2cf9e287d09d05206c3fc1c24b73874462681a8ac04f" exitCode=0
Mar 18 08:50:59.089334 master-0 kubenswrapper[7620]: I0318 08:50:59.089088 7620 status_manager.go:851] "Failed to get status for pod" podUID="d72cacbe-f050-4b00-b20d-6e3c800db5e3" pod="openshift-marketplace/certified-operators-vgplg" err="the server was unable to return a response in the time allotted, but may still be processing the request (get pods certified-operators-vgplg)"
Mar 18 08:50:59.913935 master-0 kubenswrapper[7620]: E0318 08:50:59.913768 7620 log.go:32] "RunPodSandbox from runtime service failed" err=<
Mar 18 08:50:59.913935 master-0 kubenswrapper[7620]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cloud-credential-operator-744f9dbf77-v8ft8_openshift-cloud-credential-operator_e64ea71a-1e89-409a-9607-4d3cea093643_0(ceb4914053b65e3afabcb75b860d6c36f5610de7f2fcbcde6b76e6ad8be6f304): error adding pod openshift-cloud-credential-operator_cloud-credential-operator-744f9dbf77-v8ft8 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"ceb4914053b65e3afabcb75b860d6c36f5610de7f2fcbcde6b76e6ad8be6f304" Netns:"/var/run/netns/b30911de-7d66-4c0a-944a-efb389b1b974" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-cloud-credential-operator;K8S_POD_NAME=cloud-credential-operator-744f9dbf77-v8ft8;K8S_POD_INFRA_CONTAINER_ID=ceb4914053b65e3afabcb75b860d6c36f5610de7f2fcbcde6b76e6ad8be6f304;K8S_POD_UID=e64ea71a-1e89-409a-9607-4d3cea093643" Path:"" ERRORED: error configuring pod [openshift-cloud-credential-operator/cloud-credential-operator-744f9dbf77-v8ft8] networking: Multus: [openshift-cloud-credential-operator/cloud-credential-operator-744f9dbf77-v8ft8/e64ea71a-1e89-409a-9607-4d3cea093643]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod cloud-credential-operator-744f9dbf77-v8ft8 in out of cluster comm: SetNetworkStatus: failed to update the pod cloud-credential-operator-744f9dbf77-v8ft8 in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cloud-credential-operator/pods/cloud-credential-operator-744f9dbf77-v8ft8?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers)
Mar 18 08:50:59.913935 master-0 kubenswrapper[7620]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"}
Mar 18 08:50:59.913935 master-0 kubenswrapper[7620]: >
Mar 18 08:50:59.914437 master-0 kubenswrapper[7620]: E0318 08:50:59.913998 7620 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=<
Mar 18 08:50:59.914437 master-0 kubenswrapper[7620]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cloud-credential-operator-744f9dbf77-v8ft8_openshift-cloud-credential-operator_e64ea71a-1e89-409a-9607-4d3cea093643_0(ceb4914053b65e3afabcb75b860d6c36f5610de7f2fcbcde6b76e6ad8be6f304): error adding pod openshift-cloud-credential-operator_cloud-credential-operator-744f9dbf77-v8ft8 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"ceb4914053b65e3afabcb75b860d6c36f5610de7f2fcbcde6b76e6ad8be6f304" Netns:"/var/run/netns/b30911de-7d66-4c0a-944a-efb389b1b974" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-cloud-credential-operator;K8S_POD_NAME=cloud-credential-operator-744f9dbf77-v8ft8;K8S_POD_INFRA_CONTAINER_ID=ceb4914053b65e3afabcb75b860d6c36f5610de7f2fcbcde6b76e6ad8be6f304;K8S_POD_UID=e64ea71a-1e89-409a-9607-4d3cea093643" Path:"" ERRORED: error configuring pod [openshift-cloud-credential-operator/cloud-credential-operator-744f9dbf77-v8ft8] networking: Multus: [openshift-cloud-credential-operator/cloud-credential-operator-744f9dbf77-v8ft8/e64ea71a-1e89-409a-9607-4d3cea093643]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod cloud-credential-operator-744f9dbf77-v8ft8 in out of cluster comm: SetNetworkStatus: failed to update the pod cloud-credential-operator-744f9dbf77-v8ft8 in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cloud-credential-operator/pods/cloud-credential-operator-744f9dbf77-v8ft8?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers)
Mar 18 08:50:59.914437 master-0 kubenswrapper[7620]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"}
Mar 18 08:50:59.914437 master-0 kubenswrapper[7620]: > pod="openshift-cloud-credential-operator/cloud-credential-operator-744f9dbf77-v8ft8"
Mar 18 08:50:59.914437 master-0 kubenswrapper[7620]: E0318 08:50:59.914061 7620 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=<
Mar 18 08:50:59.914437 master-0 kubenswrapper[7620]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cloud-credential-operator-744f9dbf77-v8ft8_openshift-cloud-credential-operator_e64ea71a-1e89-409a-9607-4d3cea093643_0(ceb4914053b65e3afabcb75b860d6c36f5610de7f2fcbcde6b76e6ad8be6f304): error adding pod openshift-cloud-credential-operator_cloud-credential-operator-744f9dbf77-v8ft8 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"ceb4914053b65e3afabcb75b860d6c36f5610de7f2fcbcde6b76e6ad8be6f304" Netns:"/var/run/netns/b30911de-7d66-4c0a-944a-efb389b1b974" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-cloud-credential-operator;K8S_POD_NAME=cloud-credential-operator-744f9dbf77-v8ft8;K8S_POD_INFRA_CONTAINER_ID=ceb4914053b65e3afabcb75b860d6c36f5610de7f2fcbcde6b76e6ad8be6f304;K8S_POD_UID=e64ea71a-1e89-409a-9607-4d3cea093643" Path:"" ERRORED: error configuring pod [openshift-cloud-credential-operator/cloud-credential-operator-744f9dbf77-v8ft8] networking: Multus: [openshift-cloud-credential-operator/cloud-credential-operator-744f9dbf77-v8ft8/e64ea71a-1e89-409a-9607-4d3cea093643]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod cloud-credential-operator-744f9dbf77-v8ft8 in out of cluster comm: SetNetworkStatus: failed to update the pod cloud-credential-operator-744f9dbf77-v8ft8 in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cloud-credential-operator/pods/cloud-credential-operator-744f9dbf77-v8ft8?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers)
Mar 18 08:50:59.914437 master-0 kubenswrapper[7620]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"}
Mar 18 08:50:59.914437 master-0 kubenswrapper[7620]: > pod="openshift-cloud-credential-operator/cloud-credential-operator-744f9dbf77-v8ft8"
Mar 18 08:50:59.914437 master-0 kubenswrapper[7620]: E0318 08:50:59.914197 7620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"cloud-credential-operator-744f9dbf77-v8ft8_openshift-cloud-credential-operator(e64ea71a-1e89-409a-9607-4d3cea093643)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"cloud-credential-operator-744f9dbf77-v8ft8_openshift-cloud-credential-operator(e64ea71a-1e89-409a-9607-4d3cea093643)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cloud-credential-operator-744f9dbf77-v8ft8_openshift-cloud-credential-operator_e64ea71a-1e89-409a-9607-4d3cea093643_0(ceb4914053b65e3afabcb75b860d6c36f5610de7f2fcbcde6b76e6ad8be6f304): error adding pod openshift-cloud-credential-operator_cloud-credential-operator-744f9dbf77-v8ft8 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"ceb4914053b65e3afabcb75b860d6c36f5610de7f2fcbcde6b76e6ad8be6f304\\\" Netns:\\\"/var/run/netns/b30911de-7d66-4c0a-944a-efb389b1b974\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-cloud-credential-operator;K8S_POD_NAME=cloud-credential-operator-744f9dbf77-v8ft8;K8S_POD_INFRA_CONTAINER_ID=ceb4914053b65e3afabcb75b860d6c36f5610de7f2fcbcde6b76e6ad8be6f304;K8S_POD_UID=e64ea71a-1e89-409a-9607-4d3cea093643\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-cloud-credential-operator/cloud-credential-operator-744f9dbf77-v8ft8] networking: Multus: [openshift-cloud-credential-operator/cloud-credential-operator-744f9dbf77-v8ft8/e64ea71a-1e89-409a-9607-4d3cea093643]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod cloud-credential-operator-744f9dbf77-v8ft8 in out of cluster comm: SetNetworkStatus: failed to update the pod cloud-credential-operator-744f9dbf77-v8ft8 in out of cluster comm: status update failed for pod /: Get \\\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cloud-credential-operator/pods/cloud-credential-operator-744f9dbf77-v8ft8?timeout=1m0s\\\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\\n': StdinData: {\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-cloud-credential-operator/cloud-credential-operator-744f9dbf77-v8ft8" podUID="e64ea71a-1e89-409a-9607-4d3cea093643"
Mar 18 08:50:59.981055 master-0 kubenswrapper[7620]: I0318 08:50:59.980721 7620 generic.go:334] "Generic (PLEG): container finished" podID="b0280499-8277-46f0-bd8c-058a47a99e19" containerID="76b00b2da24613bfa7eda95194ecd9d40e69d00311f7e279f85c5936ce0d7e4d" exitCode=0
Mar 18 08:50:59.981055 master-0 kubenswrapper[7620]: I0318 08:50:59.980822 7620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cloud-credential-operator/cloud-credential-operator-744f9dbf77-v8ft8"
Mar 18 08:50:59.981420 master-0 kubenswrapper[7620]: I0318 08:50:59.981244 7620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cloud-credential-operator/cloud-credential-operator-744f9dbf77-v8ft8"
Mar 18 08:51:00.639559 master-0 kubenswrapper[7620]: I0318 08:51:00.639444 7620 patch_prober.go:28] interesting pod/openshift-config-operator-95bf4f4d-7kfrh container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.128.0.14:8443/healthz\": dial tcp 10.128.0.14:8443: connect: connection refused" start-of-body=
Mar 18 08:51:00.639559 master-0 kubenswrapper[7620]: I0318 08:51:00.639556 7620 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-7kfrh" podUID="573d3a02-e395-4816-963a-cd614ef53f75" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.14:8443/healthz\": dial tcp 10.128.0.14:8443: connect: connection refused"
Mar 18 08:51:01.948088 master-0 kubenswrapper[7620]: I0318 08:51:01.947966 7620 patch_prober.go:28] interesting pod/openshift-config-operator-95bf4f4d-7kfrh container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.14:8443/healthz\": dial tcp 10.128.0.14:8443: connect: connection refused" start-of-body=
Mar 18 08:51:01.948954 master-0 kubenswrapper[7620]: I0318 08:51:01.948112 7620 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-7kfrh" podUID="573d3a02-e395-4816-963a-cd614ef53f75" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.14:8443/healthz\": dial tcp 10.128.0.14:8443: connect: connection refused"
Mar 18 08:51:04.015167 master-0 kubenswrapper[7620]: I0318 08:51:04.015039 7620 generic.go:334] "Generic (PLEG): container finished" podID="fcf89a76-7a94-46d3-853e-68e986563764" containerID="cc2fad03c96d37b754988a128065f6939d46f7a48a89eb78a7b395dfd2147290" exitCode=0
Mar 18 08:51:04.242542 master-0 kubenswrapper[7620]: E0318 08:51:04.242445 7620 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="openshift-etcd/etcd-master-0-master-0"
Mar 18 08:51:04.243093 master-0 kubenswrapper[7620]: E0318 08:51:04.242716 7620 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="34.019s"
Mar 18 08:51:04.243093 master-0 kubenswrapper[7620]: I0318 08:51:04.242750 7620 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-5g8tz"
Mar 18 08:51:04.243769 master-0 kubenswrapper[7620]: I0318 08:51:04.243708 7620 scope.go:117] "RemoveContainer" containerID="851a9b4a39c1a238b36e5625cadf0309e8c60fabaa4ea940ca6a7ae0197a27fb"
Mar 18 08:51:04.268720 master-0 kubenswrapper[7620]: I0318 08:51:04.268569 7620 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0-master-0" podUID=""
Mar 18 08:51:04.947267 master-0 kubenswrapper[7620]: I0318 08:51:04.947159 7620 patch_prober.go:28] interesting pod/openshift-config-operator-95bf4f4d-7kfrh container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.14:8443/healthz\": dial tcp 10.128.0.14:8443: connect: connection refused" start-of-body=
Mar 18 08:51:04.947267 master-0 kubenswrapper[7620]: I0318 08:51:04.947239 7620 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-7kfrh" podUID="573d3a02-e395-4816-963a-cd614ef53f75" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.14:8443/healthz\": dial tcp 10.128.0.14:8443: connect: connection refused"
Mar 18 08:51:07.060540 master-0 kubenswrapper[7620]: E0318 08:51:07.060287 7620 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{etcd-master-0-master-0.189de358dca9b7e4 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-master-0-master-0,UID:d664a6d0d2a24360dee10612610f1b59,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Killing,Message:Stopping container etcd,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 08:49:59.052580836 +0000 UTC m=+63.047362588,LastTimestamp:2026-03-18 08:49:59.052580836 +0000 UTC m=+63.047362588,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 18 08:51:07.616649 master-0 kubenswrapper[7620]: E0318 08:51:07.616581 7620 projected.go:194] Error preparing data for projected volume kube-api-access-vtz82 for pod openshift-cluster-samples-operator/cluster-samples-operator-85f7577d78-swcvh: failed to fetch token: Timeout: request did not complete within requested timeout - context deadline exceeded
Mar 18 08:51:07.616913 master-0 kubenswrapper[7620]: E0318 08:51:07.616692 7620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/18921497-d8ed-42d8-bf3c-a027566ebe85-kube-api-access-vtz82 podName:18921497-d8ed-42d8-bf3c-a027566ebe85 nodeName:}" failed. No retries permitted until 2026-03-18 08:51:08.616666236 +0000 UTC m=+132.611447998 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-vtz82" (UniqueName: "kubernetes.io/projected/18921497-d8ed-42d8-bf3c-a027566ebe85-kube-api-access-vtz82") pod "cluster-samples-operator-85f7577d78-swcvh" (UID: "18921497-d8ed-42d8-bf3c-a027566ebe85") : failed to fetch token: Timeout: request did not complete within requested timeout - context deadline exceeded
Mar 18 08:51:07.616913 master-0 kubenswrapper[7620]: E0318 08:51:07.616684 7620 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-controller-manager/installer-2-master-0: failed to fetch token: Timeout: request did not complete within requested timeout - context deadline exceeded
Mar 18 08:51:07.616913 master-0 kubenswrapper[7620]: E0318 08:51:07.616761 7620 projected.go:194] Error preparing data for projected volume kube-api-access-zj9rk for pod openshift-machine-api/cluster-baremetal-operator-6f69995874-cf6qn: failed to fetch token: Timeout: request did not complete within requested timeout - context deadline exceeded
Mar 18 08:51:07.616913 master-0 kubenswrapper[7620]: E0318 08:51:07.616833 7620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/28d2bb97-ff93-4772-96fd-318fa62e3a87-kube-api-access podName:28d2bb97-ff93-4772-96fd-318fa62e3a87 nodeName:}" failed. No retries permitted until 2026-03-18 08:51:08.61680086 +0000 UTC m=+132.611582612 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/28d2bb97-ff93-4772-96fd-318fa62e3a87-kube-api-access") pod "installer-2-master-0" (UID: "28d2bb97-ff93-4772-96fd-318fa62e3a87") : failed to fetch token: Timeout: request did not complete within requested timeout - context deadline exceeded
Mar 18 08:51:07.617068 master-0 kubenswrapper[7620]: E0318 08:51:07.616943 7620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/97730ec2-e6f1-4f8c-b85c-3c10623d06ce-kube-api-access-zj9rk podName:97730ec2-e6f1-4f8c-b85c-3c10623d06ce nodeName:}" failed. No retries permitted until 2026-03-18 08:51:08.616908553 +0000 UTC m=+132.611690325 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-zj9rk" (UniqueName: "kubernetes.io/projected/97730ec2-e6f1-4f8c-b85c-3c10623d06ce-kube-api-access-zj9rk") pod "cluster-baremetal-operator-6f69995874-cf6qn" (UID: "97730ec2-e6f1-4f8c-b85c-3c10623d06ce") : failed to fetch token: Timeout: request did not complete within requested timeout - context deadline exceeded
Mar 18 08:51:07.948373 master-0 kubenswrapper[7620]: I0318 08:51:07.948166 7620 patch_prober.go:28] interesting pod/openshift-config-operator-95bf4f4d-7kfrh container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.14:8443/healthz\": dial tcp 10.128.0.14:8443: connect: connection refused" start-of-body=
Mar 18 08:51:07.948373 master-0 kubenswrapper[7620]: I0318 08:51:07.948265 7620 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-7kfrh" podUID="573d3a02-e395-4816-963a-cd614ef53f75" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.14:8443/healthz\": dial tcp 10.128.0.14:8443: connect: connection refused"
Mar 18 08:51:08.528253 master-0 kubenswrapper[7620]: E0318 08:51:08.528140 7620
controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="200ms" Mar 18 08:51:08.690760 master-0 kubenswrapper[7620]: I0318 08:51:08.690685 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zj9rk\" (UniqueName: \"kubernetes.io/projected/97730ec2-e6f1-4f8c-b85c-3c10623d06ce-kube-api-access-zj9rk\") pod \"cluster-baremetal-operator-6f69995874-cf6qn\" (UID: \"97730ec2-e6f1-4f8c-b85c-3c10623d06ce\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-cf6qn" Mar 18 08:51:08.691955 master-0 kubenswrapper[7620]: I0318 08:51:08.691459 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vtz82\" (UniqueName: \"kubernetes.io/projected/18921497-d8ed-42d8-bf3c-a027566ebe85-kube-api-access-vtz82\") pod \"cluster-samples-operator-85f7577d78-swcvh\" (UID: \"18921497-d8ed-42d8-bf3c-a027566ebe85\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-85f7577d78-swcvh" Mar 18 08:51:08.692328 master-0 kubenswrapper[7620]: I0318 08:51:08.692278 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/28d2bb97-ff93-4772-96fd-318fa62e3a87-kube-api-access\") pod \"installer-2-master-0\" (UID: \"28d2bb97-ff93-4772-96fd-318fa62e3a87\") " pod="openshift-kube-controller-manager/installer-2-master-0" Mar 18 08:51:10.062963 master-0 kubenswrapper[7620]: I0318 08:51:10.062812 7620 generic.go:334] "Generic (PLEG): container finished" podID="939efa41-8f40-4f91-bee4-0425aead9760" containerID="c7bdc6ef2980045954ec06270159082d9f28baec29275922530ef4e26552cf99" exitCode=0 Mar 18 08:51:10.947660 master-0 kubenswrapper[7620]: I0318 08:51:10.947562 7620 patch_prober.go:28] 
interesting pod/openshift-config-operator-95bf4f4d-7kfrh container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.14:8443/healthz\": dial tcp 10.128.0.14:8443: connect: connection refused" start-of-body= Mar 18 08:51:10.947660 master-0 kubenswrapper[7620]: I0318 08:51:10.947645 7620 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-7kfrh" podUID="573d3a02-e395-4816-963a-cd614ef53f75" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.14:8443/healthz\": dial tcp 10.128.0.14:8443: connect: connection refused" Mar 18 08:51:13.623977 master-0 kubenswrapper[7620]: E0318 08:51:13.623818 7620 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="9.381s" Mar 18 08:51:13.623977 master-0 kubenswrapper[7620]: I0318 08:51:13.623964 7620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-7kfrh" Mar 18 08:51:13.625477 master-0 kubenswrapper[7620]: I0318 08:51:13.625373 7620 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="openshift-config-operator" containerStatusID={"Type":"cri-o","ID":"8d498d3ba632abf0251e7798cf27060435ed49cd813b6245a191fca82502b1e9"} pod="openshift-config-operator/openshift-config-operator-95bf4f4d-7kfrh" containerMessage="Container openshift-config-operator failed liveness probe, will be restarted" Mar 18 08:51:13.625571 master-0 kubenswrapper[7620]: I0318 08:51:13.625508 7620 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-7kfrh" podUID="573d3a02-e395-4816-963a-cd614ef53f75" containerName="openshift-config-operator" containerID="cri-o://8d498d3ba632abf0251e7798cf27060435ed49cd813b6245a191fca82502b1e9" gracePeriod=30 Mar 18 
08:51:13.625765 master-0 kubenswrapper[7620]: I0318 08:51:13.625698 7620 patch_prober.go:28] interesting pod/openshift-config-operator-95bf4f4d-7kfrh container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.14:8443/healthz\": dial tcp 10.128.0.14:8443: connect: connection refused" start-of-body= Mar 18 08:51:13.625918 master-0 kubenswrapper[7620]: I0318 08:51:13.625780 7620 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-7kfrh" podUID="573d3a02-e395-4816-963a-cd614ef53f75" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.14:8443/healthz\": dial tcp 10.128.0.14:8443: connect: connection refused" Mar 18 08:51:13.646686 master-0 kubenswrapper[7620]: I0318 08:51:13.646611 7620 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0-master-0" podUID="" Mar 18 08:51:13.652542 master-0 kubenswrapper[7620]: W0318 08:51:13.652466 7620 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode64ea71a_1e89_409a_9607_4d3cea093643.slice/crio-88d505327814e64c05d565f5816ae97892418500facf7fd5799add8d17c8b232 WatchSource:0}: Error finding container 88d505327814e64c05d565f5816ae97892418500facf7fd5799add8d17c8b232: Status 404 returned error can't find the container with id 88d505327814e64c05d565f5816ae97892418500facf7fd5799add8d17c8b232 Mar 18 08:51:13.656655 master-0 kubenswrapper[7620]: I0318 08:51:13.655693 7620 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 18 08:51:13.656655 master-0 kubenswrapper[7620]: I0318 08:51:13.655799 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-7bd846bfc4-5r5r4" 
event={"ID":"07a4fd92-0fd1-4688-b2db-de615d75971e","Type":"ContainerDied","Data":"20bac68a3a787cd3ab838f8bf47eee1e23fd920610fa248db61e044af450ce49"} Mar 18 08:51:13.656655 master-0 kubenswrapper[7620]: I0318 08:51:13.655848 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-fxn82" event={"ID":"260c8aa5-a288-4ee8-b671-f97e90a2f39c","Type":"ContainerDied","Data":"42ba60928089ecdd2be6dc0bf250cb571a47fd29cfa3690db6c3f8f43ab0c4ba"} Mar 18 08:51:13.656655 master-0 kubenswrapper[7620]: I0318 08:51:13.655924 7620 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-7kfrh" Mar 18 08:51:13.656655 master-0 kubenswrapper[7620]: I0318 08:51:13.656121 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"24b4ed170d527099878cb5fdd508a2fb","Type":"ContainerStarted","Data":"8fcf2dc21bde9860c2fe58020881a99530b56c8c984671257fbc4e8d33dd7119"} Mar 18 08:51:13.656655 master-0 kubenswrapper[7620]: I0318 08:51:13.656158 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"24b4ed170d527099878cb5fdd508a2fb","Type":"ContainerDied","Data":"8fcf2dc21bde9860c2fe58020881a99530b56c8c984671257fbc4e8d33dd7119"} Mar 18 08:51:13.656655 master-0 kubenswrapper[7620]: I0318 08:51:13.656189 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-5g8tz" event={"ID":"c110b293-2c6b-496b-b015-23aada98cb4b","Type":"ContainerDied","Data":"851a9b4a39c1a238b36e5625cadf0309e8c60fabaa4ea940ca6a7ae0197a27fb"} Mar 18 08:51:13.656655 master-0 kubenswrapper[7620]: I0318 08:51:13.656219 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-9p4bb" 
event={"ID":"8a6ab2be-d018-4fd5-bfbb-6b88aec28663","Type":"ContainerDied","Data":"5e84b000c1316fb6659579cb173f67777226d532d34aa25b987bd230e2ca4fb7"} Mar 18 08:51:13.656655 master-0 kubenswrapper[7620]: I0318 08:51:13.656247 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-jshg7" event={"ID":"5982111d-f4c6-4335-9b40-3142758fc2bc","Type":"ContainerDied","Data":"9375c67121087e2f83dd2c8b94c0ff17721fa9588235ead108bb8a1e451225b5"} Mar 18 08:51:13.656655 master-0 kubenswrapper[7620]: I0318 08:51:13.656275 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-n5vqx" event={"ID":"16d633c5-e0aa-4fb6-83e0-a2e976334406","Type":"ContainerDied","Data":"9d4723f8591cc64ff0653aec9e9efb152a03ef27364e5787d1d3d8ff7d6020e4"} Mar 18 08:51:13.656655 master-0 kubenswrapper[7620]: I0318 08:51:13.656302 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-7kfrh" event={"ID":"573d3a02-e395-4816-963a-cd614ef53f75","Type":"ContainerDied","Data":"f62239815e692aa3c0449919f3f1826c911a4a455ec560cd817c662d02c7a9ae"} Mar 18 08:51:13.656655 master-0 kubenswrapper[7620]: I0318 08:51:13.656328 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-7kfrh" event={"ID":"573d3a02-e395-4816-963a-cd614ef53f75","Type":"ContainerStarted","Data":"8d498d3ba632abf0251e7798cf27060435ed49cd813b6245a191fca82502b1e9"} Mar 18 08:51:13.656655 master-0 kubenswrapper[7620]: I0318 08:51:13.656351 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-zqxv2" event={"ID":"e2ade7e6-cecd-4e98-8f85-ea8219303d75","Type":"ContainerDied","Data":"77402342b68e7cb4ec7ebd972b9ac7442e45f3236ab9cfbb373363dfbf591b0c"} Mar 18 08:51:13.656655 master-0 kubenswrapper[7620]: I0318 08:51:13.656387 7620 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-s7szg" event={"ID":"ec11012b-536a-422f-afc4-d2d0fd4b67fb","Type":"ContainerDied","Data":"b192c774019baaa7e62a2cf9e287d09d05206c3fc1c24b73874462681a8ac04f"} Mar 18 08:51:13.656655 master-0 kubenswrapper[7620]: I0318 08:51:13.656415 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"24b4ed170d527099878cb5fdd508a2fb","Type":"ContainerStarted","Data":"d55f32628d36fef2091dd025587240b8ca743b0ba115f45a8672152f872db9f7"} Mar 18 08:51:13.656655 master-0 kubenswrapper[7620]: I0318 08:51:13.656436 7620 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="kube-controller-manager" containerStatusID={"Type":"cri-o","ID":"f1af4029f448c1f86ba8be3065c1894f4cf3dd4cc201c45f5bc2f5936a17b71a"} pod="kube-system/bootstrap-kube-controller-manager-master-0" containerMessage="Container kube-controller-manager failed startup probe, will be restarted" Mar 18 08:51:13.656655 master-0 kubenswrapper[7620]: I0318 08:51:13.656507 7620 kuberuntime_container.go:808] "Killing container with a grace period" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="46f265536aba6292ead501bc9b49f327" containerName="kube-controller-manager" containerID="cri-o://f1af4029f448c1f86ba8be3065c1894f4cf3dd4cc201c45f5bc2f5936a17b71a" gracePeriod=30 Mar 18 08:51:13.656655 master-0 kubenswrapper[7620]: I0318 08:51:13.656448 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"24b4ed170d527099878cb5fdd508a2fb","Type":"ContainerStarted","Data":"0a3f0d54aecb3ed557f31b2d8cbb3a5d2841e1a3c7dd74488f821bea7649c2ba"} Mar 18 08:51:13.656655 master-0 kubenswrapper[7620]: I0318 08:51:13.656610 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" 
event={"ID":"24b4ed170d527099878cb5fdd508a2fb","Type":"ContainerStarted","Data":"31e89bf6ae59ee7805717c8450d63270c0f1e3491a3c420217df22187017f458"} Mar 18 08:51:13.656655 master-0 kubenswrapper[7620]: I0318 08:51:13.656631 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"24b4ed170d527099878cb5fdd508a2fb","Type":"ContainerStarted","Data":"eaf5314d4daedb04b0810419a85a92fa1d11aaa49f4468aef088b7bf78ab09b0"} Mar 18 08:51:13.656655 master-0 kubenswrapper[7620]: I0318 08:51:13.656643 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"24b4ed170d527099878cb5fdd508a2fb","Type":"ContainerStarted","Data":"d6e4d0848336920c4c2367e35c0f8a2ff7a531835a43cba2e2e819f3599cb82a"} Mar 18 08:51:13.656655 master-0 kubenswrapper[7620]: I0318 08:51:13.656657 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-b865698dc-g2lc8" event={"ID":"b0280499-8277-46f0-bd8c-058a47a99e19","Type":"ContainerDied","Data":"76b00b2da24613bfa7eda95194ecd9d40e69d00311f7e279f85c5936ce0d7e4d"} Mar 18 08:51:13.656655 master-0 kubenswrapper[7620]: I0318 08:51:13.656675 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-w4t7x" event={"ID":"fcf89a76-7a94-46d3-853e-68e986563764","Type":"ContainerDied","Data":"cc2fad03c96d37b754988a128065f6939d46f7a48a89eb78a7b395dfd2147290"} Mar 18 08:51:13.656655 master-0 kubenswrapper[7620]: I0318 08:51:13.656692 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-5g8tz" event={"ID":"c110b293-2c6b-496b-b015-23aada98cb4b","Type":"ContainerStarted","Data":"78774de99109933a7fca3fa983b5cd1b5e8ccbd9b7002603bbf20f4203af05d4"} Mar 18 08:51:13.656655 master-0 kubenswrapper[7620]: I0318 08:51:13.656707 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-f4jvq" event={"ID":"939efa41-8f40-4f91-bee4-0425aead9760","Type":"ContainerDied","Data":"c7bdc6ef2980045954ec06270159082d9f28baec29275922530ef4e26552cf99"} Mar 18 08:51:13.658614 master-0 kubenswrapper[7620]: I0318 08:51:13.657014 7620 scope.go:117] "RemoveContainer" containerID="b192c774019baaa7e62a2cf9e287d09d05206c3fc1c24b73874462681a8ac04f" Mar 18 08:51:13.658614 master-0 kubenswrapper[7620]: I0318 08:51:13.657628 7620 scope.go:117] "RemoveContainer" containerID="76b00b2da24613bfa7eda95194ecd9d40e69d00311f7e279f85c5936ce0d7e4d" Mar 18 08:51:13.658614 master-0 kubenswrapper[7620]: I0318 08:51:13.658268 7620 scope.go:117] "RemoveContainer" containerID="20bac68a3a787cd3ab838f8bf47eee1e23fd920610fa248db61e044af450ce49" Mar 18 08:51:13.658614 master-0 kubenswrapper[7620]: I0318 08:51:13.658369 7620 scope.go:117] "RemoveContainer" containerID="cc2fad03c96d37b754988a128065f6939d46f7a48a89eb78a7b395dfd2147290" Mar 18 08:51:13.668981 master-0 kubenswrapper[7620]: I0318 08:51:13.668937 7620 scope.go:117] "RemoveContainer" containerID="42ba60928089ecdd2be6dc0bf250cb571a47fd29cfa3690db6c3f8f43ab0c4ba" Mar 18 08:51:13.685355 master-0 kubenswrapper[7620]: I0318 08:51:13.685295 7620 scope.go:117] "RemoveContainer" containerID="9d4723f8591cc64ff0653aec9e9efb152a03ef27364e5787d1d3d8ff7d6020e4" Mar 18 08:51:13.715205 master-0 kubenswrapper[7620]: I0318 08:51:13.714708 7620 scope.go:117] "RemoveContainer" containerID="77402342b68e7cb4ec7ebd972b9ac7442e45f3236ab9cfbb373363dfbf591b0c" Mar 18 08:51:13.716472 master-0 kubenswrapper[7620]: I0318 08:51:13.716236 7620 scope.go:117] "RemoveContainer" containerID="9375c67121087e2f83dd2c8b94c0ff17721fa9588235ead108bb8a1e451225b5" Mar 18 08:51:13.718039 master-0 kubenswrapper[7620]: I0318 08:51:13.717067 7620 scope.go:117] "RemoveContainer" containerID="5e84b000c1316fb6659579cb173f67777226d532d34aa25b987bd230e2ca4fb7" Mar 18 08:51:13.723814 master-0 kubenswrapper[7620]: I0318 
08:51:13.723765 7620 scope.go:117] "RemoveContainer" containerID="c7bdc6ef2980045954ec06270159082d9f28baec29275922530ef4e26552cf99" Mar 18 08:51:13.724736 master-0 kubenswrapper[7620]: I0318 08:51:13.724537 7620 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-etcd/etcd-master-0-master-0"] Mar 18 08:51:13.725276 master-0 kubenswrapper[7620]: I0318 08:51:13.724612 7620 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-etcd/etcd-master-0-master-0" mirrorPodUID="de59a6ff-4091-4bcc-99a7-e2bd9a1d339d" Mar 18 08:51:13.732157 master-0 kubenswrapper[7620]: I0318 08:51:13.731994 7620 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-etcd/etcd-master-0-master-0"] Mar 18 08:51:13.732157 master-0 kubenswrapper[7620]: I0318 08:51:13.732023 7620 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-etcd/etcd-master-0-master-0" mirrorPodUID="de59a6ff-4091-4bcc-99a7-e2bd9a1d339d" Mar 18 08:51:13.737214 master-0 kubenswrapper[7620]: I0318 08:51:13.737117 7620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-6f97756bc8-z9n9c" podStartSLOduration=78.049312906 podStartE2EDuration="1m26.737092029s" podCreationTimestamp="2026-03-18 08:49:47 +0000 UTC" firstStartedPulling="2026-03-18 08:49:48.220526993 +0000 UTC m=+52.215308745" lastFinishedPulling="2026-03-18 08:49:56.908306076 +0000 UTC m=+60.903087868" observedRunningTime="2026-03-18 08:51:13.633523105 +0000 UTC m=+137.628304897" watchObservedRunningTime="2026-03-18 08:51:13.737092029 +0000 UTC m=+137.731873781" Mar 18 08:51:13.738284 master-0 kubenswrapper[7620]: I0318 08:51:13.738140 7620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cloud-credential-operator/cloud-credential-operator-744f9dbf77-v8ft8"] Mar 18 08:51:13.741812 master-0 kubenswrapper[7620]: I0318 08:51:13.741762 7620 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openshift-marketplace/redhat-marketplace-6m4q6" podStartSLOduration=59.206902925 podStartE2EDuration="1m25.741665738s" podCreationTimestamp="2026-03-18 08:49:48 +0000 UTC" firstStartedPulling="2026-03-18 08:49:49.815001118 +0000 UTC m=+53.809782870" lastFinishedPulling="2026-03-18 08:50:16.349763921 +0000 UTC m=+80.344545683" observedRunningTime="2026-03-18 08:51:13.716439416 +0000 UTC m=+137.711221188" watchObservedRunningTime="2026-03-18 08:51:13.741665738 +0000 UTC m=+137.736447500" Mar 18 08:51:13.751013 master-0 kubenswrapper[7620]: I0318 08:51:13.748996 7620 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-master-0"] Mar 18 08:51:13.884684 master-0 kubenswrapper[7620]: I0318 08:51:13.884531 7620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-ffks8" podStartSLOduration=60.80633413 podStartE2EDuration="1m24.884486171s" podCreationTimestamp="2026-03-18 08:49:49 +0000 UTC" firstStartedPulling="2026-03-18 08:49:52.876255202 +0000 UTC m=+56.871036954" lastFinishedPulling="2026-03-18 08:50:16.954407243 +0000 UTC m=+80.949188995" observedRunningTime="2026-03-18 08:51:13.879520261 +0000 UTC m=+137.874302033" watchObservedRunningTime="2026-03-18 08:51:13.884486171 +0000 UTC m=+137.879267923" Mar 18 08:51:13.906669 master-0 kubenswrapper[7620]: I0318 08:51:13.905085 7620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-6cb57bb5db-sxx69" podStartSLOduration=79.263935835 podStartE2EDuration="1m21.905064382s" podCreationTimestamp="2026-03-18 08:49:52 +0000 UTC" firstStartedPulling="2026-03-18 08:49:57.920147169 +0000 UTC m=+61.914928921" lastFinishedPulling="2026-03-18 08:50:00.561275716 +0000 UTC m=+64.556057468" observedRunningTime="2026-03-18 08:51:13.901798 +0000 UTC m=+137.896579762" watchObservedRunningTime="2026-03-18 08:51:13.905064382 +0000 UTC m=+137.899846144" Mar 18 08:51:13.960763 
master-0 kubenswrapper[7620]: I0318 08:51:13.960696 7620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-xfq8l" podStartSLOduration=59.512028677 podStartE2EDuration="1m25.960671603s" podCreationTimestamp="2026-03-18 08:49:48 +0000 UTC" firstStartedPulling="2026-03-18 08:49:49.821694383 +0000 UTC m=+53.816476135" lastFinishedPulling="2026-03-18 08:50:16.270337259 +0000 UTC m=+80.265119061" observedRunningTime="2026-03-18 08:51:13.956392332 +0000 UTC m=+137.951174104" watchObservedRunningTime="2026-03-18 08:51:13.960671603 +0000 UTC m=+137.955453375" Mar 18 08:51:14.061291 master-0 kubenswrapper[7620]: I0318 08:51:14.061031 7620 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-controller-manager/installer-1-master-0"] Mar 18 08:51:14.064908 master-0 kubenswrapper[7620]: I0318 08:51:14.064871 7620 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-controller-manager/installer-1-master-0"] Mar 18 08:51:14.082833 master-0 kubenswrapper[7620]: I0318 08:51:14.082756 7620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-ck7b5" podStartSLOduration=76.60003878 podStartE2EDuration="1m26.082728939s" podCreationTimestamp="2026-03-18 08:49:48 +0000 UTC" firstStartedPulling="2026-03-18 08:49:49.414929156 +0000 UTC m=+53.409710908" lastFinishedPulling="2026-03-18 08:49:58.897619315 +0000 UTC m=+62.892401067" observedRunningTime="2026-03-18 08:51:14.080330742 +0000 UTC m=+138.075112494" watchObservedRunningTime="2026-03-18 08:51:14.082728939 +0000 UTC m=+138.077510691" Mar 18 08:51:14.120490 master-0 kubenswrapper[7620]: I0318 08:51:14.120441 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-credential-operator/cloud-credential-operator-744f9dbf77-v8ft8" 
event={"ID":"e64ea71a-1e89-409a-9607-4d3cea093643","Type":"ContainerStarted","Data":"ecabaca7d772e90b751458e3ef4529fe909835463a092ff610728bd6847d7351"} Mar 18 08:51:14.120558 master-0 kubenswrapper[7620]: I0318 08:51:14.120509 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-credential-operator/cloud-credential-operator-744f9dbf77-v8ft8" event={"ID":"e64ea71a-1e89-409a-9607-4d3cea093643","Type":"ContainerStarted","Data":"88d505327814e64c05d565f5816ae97892418500facf7fd5799add8d17c8b232"} Mar 18 08:51:14.131388 master-0 kubenswrapper[7620]: I0318 08:51:14.131331 7620 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-node-identity_network-node-identity-n5vqx_16d633c5-e0aa-4fb6-83e0-a2e976334406/approver/0.log" Mar 18 08:51:14.131935 master-0 kubenswrapper[7620]: I0318 08:51:14.131873 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-n5vqx" event={"ID":"16d633c5-e0aa-4fb6-83e0-a2e976334406","Type":"ContainerStarted","Data":"fc1e7d5ba53f64b05a03f60a1cf7fc1f9339f4be3d65c717cb0541eb9f2e16d3"} Mar 18 08:51:14.143119 master-0 kubenswrapper[7620]: I0318 08:51:14.142976 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-w4t7x" event={"ID":"fcf89a76-7a94-46d3-853e-68e986563764","Type":"ContainerStarted","Data":"1a99a7be927203c93143077f6ac59e348e0101bd76170877fa1e8759ceb3d8f9"} Mar 18 08:51:14.170294 master-0 kubenswrapper[7620]: I0318 08:51:14.170246 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-s7szg" event={"ID":"ec11012b-536a-422f-afc4-d2d0fd4b67fb","Type":"ContainerStarted","Data":"ed91c991c8d06bc267cbc835f1be40f32f245149c73511f9e2e88cfaaffae218"} Mar 18 08:51:14.172511 master-0 kubenswrapper[7620]: I0318 08:51:14.172414 7620 pod_startup_latency_tracker.go:104] "Observed 
pod startup duration" pod="openshift-marketplace/certified-operators-vgplg" podStartSLOduration=64.824210089 podStartE2EDuration="1m24.172392801s" podCreationTimestamp="2026-03-18 08:49:50 +0000 UTC" firstStartedPulling="2026-03-18 08:49:56.921381545 +0000 UTC m=+60.916163337" lastFinishedPulling="2026-03-18 08:50:16.269564307 +0000 UTC m=+80.264346049" observedRunningTime="2026-03-18 08:51:14.17198004 +0000 UTC m=+138.166761812" watchObservedRunningTime="2026-03-18 08:51:14.172392801 +0000 UTC m=+138.167174553" Mar 18 08:51:14.185922 master-0 kubenswrapper[7620]: I0318 08:51:14.185883 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-b865698dc-g2lc8" event={"ID":"b0280499-8277-46f0-bd8c-058a47a99e19","Type":"ContainerStarted","Data":"92584f8f3c9d073ab599e8dc246a1d7436481e759aefcd238804d56f90dcfbee"} Mar 18 08:51:14.201573 master-0 kubenswrapper[7620]: I0318 08:51:14.201525 7620 generic.go:334] "Generic (PLEG): container finished" podID="46f265536aba6292ead501bc9b49f327" containerID="f1af4029f448c1f86ba8be3065c1894f4cf3dd4cc201c45f5bc2f5936a17b71a" exitCode=2 Mar 18 08:51:14.201723 master-0 kubenswrapper[7620]: I0318 08:51:14.201647 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"46f265536aba6292ead501bc9b49f327","Type":"ContainerDied","Data":"f1af4029f448c1f86ba8be3065c1894f4cf3dd4cc201c45f5bc2f5936a17b71a"} Mar 18 08:51:14.201794 master-0 kubenswrapper[7620]: I0318 08:51:14.201767 7620 scope.go:117] "RemoveContainer" containerID="6be6b0de4a5d0386d8a94651962cc0001d3124e6eb513e3b68435d030ea24841" Mar 18 08:51:14.261962 master-0 kubenswrapper[7620]: I0318 08:51:14.254615 7620 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ace4267e-c38d-46dd-9de6-c23339729a8b" path="/var/lib/kubelet/pods/ace4267e-c38d-46dd-9de6-c23339729a8b/volumes" Mar 18 08:51:14.344755 master-0 kubenswrapper[7620]: I0318 08:51:14.343368 
7620 patch_prober.go:28] interesting pod/openshift-config-operator-95bf4f4d-7kfrh container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.14:8443/healthz\": read tcp 10.128.0.2:40144->10.128.0.14:8443: read: connection reset by peer" start-of-body= Mar 18 08:51:14.344755 master-0 kubenswrapper[7620]: I0318 08:51:14.344095 7620 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-7kfrh" podUID="573d3a02-e395-4816-963a-cd614ef53f75" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.14:8443/healthz\": read tcp 10.128.0.2:40144->10.128.0.14:8443: read: connection reset by peer" Mar 18 08:51:14.417522 master-0 kubenswrapper[7620]: I0318 08:51:14.417244 7620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-master-0" podStartSLOduration=1.417226955 podStartE2EDuration="1.417226955s" podCreationTimestamp="2026-03-18 08:51:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 08:51:14.415033693 +0000 UTC m=+138.409815455" watchObservedRunningTime="2026-03-18 08:51:14.417226955 +0000 UTC m=+138.412008707" Mar 18 08:51:15.216572 master-0 kubenswrapper[7620]: I0318 08:51:15.216523 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-f4jvq" event={"ID":"939efa41-8f40-4f91-bee4-0425aead9760","Type":"ContainerStarted","Data":"b442abd20f6cd371503b4de36fdb9b6c7f6bf49ccdc9fe0a78482e3d217b74b7"} Mar 18 08:51:15.220287 master-0 kubenswrapper[7620]: I0318 08:51:15.220240 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" 
event={"ID":"46f265536aba6292ead501bc9b49f327","Type":"ContainerStarted","Data":"6d5b56ac8d5867b35015e9d68581180a0a4fa40297611f5fe968b22c150b744e"} Mar 18 08:51:15.226955 master-0 kubenswrapper[7620]: I0318 08:51:15.226128 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-jshg7" event={"ID":"5982111d-f4c6-4335-9b40-3142758fc2bc","Type":"ContainerStarted","Data":"cf8242ef15f7147a08957517fba554f899e093014ca913ade20bd064e85b52fa"} Mar 18 08:51:15.231792 master-0 kubenswrapper[7620]: I0318 08:51:15.231738 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-zqxv2" event={"ID":"e2ade7e6-cecd-4e98-8f85-ea8219303d75","Type":"ContainerStarted","Data":"60015c429bb18ca17341066401ebe535cc36d76023a5359dca55b47fb2ca6b54"} Mar 18 08:51:15.235520 master-0 kubenswrapper[7620]: I0318 08:51:15.235451 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-9p4bb" event={"ID":"8a6ab2be-d018-4fd5-bfbb-6b88aec28663","Type":"ContainerStarted","Data":"9e19823b1b1ffcf1c703fbb512e19ce7bc9fb5c28f82cf8e17e7cee28a1ec8fb"} Mar 18 08:51:15.241327 master-0 kubenswrapper[7620]: I0318 08:51:15.240882 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-fxn82" event={"ID":"260c8aa5-a288-4ee8-b671-f97e90a2f39c","Type":"ContainerStarted","Data":"a607f8de68cb2e243c897142ba22df209a6f51e6bc7dc8cf07ecbcaaa012ed84"} Mar 18 08:51:15.244938 master-0 kubenswrapper[7620]: I0318 08:51:15.244894 7620 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-config-operator_openshift-config-operator-95bf4f4d-7kfrh_573d3a02-e395-4816-963a-cd614ef53f75/openshift-config-operator/1.log" Mar 18 08:51:15.253021 master-0 kubenswrapper[7620]: I0318 08:51:15.252796 7620 generic.go:334] "Generic 
(PLEG): container finished" podID="573d3a02-e395-4816-963a-cd614ef53f75" containerID="8d498d3ba632abf0251e7798cf27060435ed49cd813b6245a191fca82502b1e9" exitCode=255 Mar 18 08:51:15.253021 master-0 kubenswrapper[7620]: I0318 08:51:15.252966 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-7kfrh" event={"ID":"573d3a02-e395-4816-963a-cd614ef53f75","Type":"ContainerDied","Data":"8d498d3ba632abf0251e7798cf27060435ed49cd813b6245a191fca82502b1e9"} Mar 18 08:51:15.253435 master-0 kubenswrapper[7620]: I0318 08:51:15.253104 7620 scope.go:117] "RemoveContainer" containerID="f62239815e692aa3c0449919f3f1826c911a4a455ec560cd817c662d02c7a9ae" Mar 18 08:51:15.254203 master-0 kubenswrapper[7620]: I0318 08:51:15.254122 7620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-7kfrh" Mar 18 08:51:15.262241 master-0 kubenswrapper[7620]: I0318 08:51:15.262200 7620 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-operator_network-operator-7bd846bfc4-5r5r4_07a4fd92-0fd1-4688-b2db-de615d75971e/network-operator/0.log" Mar 18 08:51:15.262360 master-0 kubenswrapper[7620]: I0318 08:51:15.262279 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-7bd846bfc4-5r5r4" event={"ID":"07a4fd92-0fd1-4688-b2db-de615d75971e","Type":"ContainerStarted","Data":"79f8da363f41784429b0f3e705b89b20feb9b5bafba5fef8674c12958f44ee67"} Mar 18 08:51:16.270937 master-0 kubenswrapper[7620]: I0318 08:51:16.270689 7620 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-config-operator_openshift-config-operator-95bf4f4d-7kfrh_573d3a02-e395-4816-963a-cd614ef53f75/openshift-config-operator/1.log" Mar 18 08:51:16.271736 master-0 kubenswrapper[7620]: I0318 08:51:16.271569 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-config-operator/openshift-config-operator-95bf4f4d-7kfrh" event={"ID":"573d3a02-e395-4816-963a-cd614ef53f75","Type":"ContainerStarted","Data":"8d81a0734e052e6d6b0b5d4c93253a1f34a979d2c5960b81bcae57439a90ae9d"} Mar 18 08:51:17.233286 master-0 kubenswrapper[7620]: I0318 08:51:17.233207 7620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-master-0" Mar 18 08:51:18.639168 master-0 kubenswrapper[7620]: I0318 08:51:18.639084 7620 patch_prober.go:28] interesting pod/openshift-config-operator-95bf4f4d-7kfrh container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.128.0.14:8443/healthz\": dial tcp 10.128.0.14:8443: connect: connection refused" start-of-body= Mar 18 08:51:18.639700 master-0 kubenswrapper[7620]: I0318 08:51:18.639280 7620 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-7kfrh" podUID="573d3a02-e395-4816-963a-cd614ef53f75" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.14:8443/healthz\": dial tcp 10.128.0.14:8443: connect: connection refused" Mar 18 08:51:18.729800 master-0 kubenswrapper[7620]: E0318 08:51:18.729721 7620 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="400ms" Mar 18 08:51:18.912337 master-0 kubenswrapper[7620]: E0318 08:51:18.912107 7620 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-18T08:51:08Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-18T08:51:08Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-18T08:51:08Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-18T08:51:08Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1a25ef962e8f26b0d756aa0987d45d570c0afb2e2d2507cf2fee734792b95657\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:688d991fddd7c0947af40f1c2e803a9a4ccef32b897e1bb3447e76c87ea4b753\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1746519514},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2abc1fd79e7781634ed5ed9e8f2b98b9094ea51f40ac3a773c5e5224607bf3d7\\\"],\\\"sizeBytes\\\":1637455533},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:86833de447f25d1d0fc15ed5460c5068cc48b18b78b8108304c5b5fd1dff04ab\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a41181d28dfacb78bea3690c390c965912300bc666e6e31a54a9382dd0329758\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1251896539},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a55ec7ec64efd0f595d8084377b7e463a1807829b7617e5d4a9092dcd924c36\\\"],\\\"sizeBytes\\\":1238100502},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:898c67bf7fc973e99114f3148976a6c21ae0dbe413051415588fa9b995f5b331\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:a641939d2096609a4cf6eec872a1476b7c671bfd81cffc2edeb6e9f13c9deeba\\\",\\\"registry.redhat.io/redhat/redhat-marketpl
ace-index:v4.18\\\"],\\\"sizeBytes\\\":1231028434},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:15e3bdacc64320529707b0286fcaaf0059f0f5eaaafacf2c4bfee4b90be77eee\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:26b5f4283e14ca039e027e637271bdbf1f92abf0bc56c32b01252e8eb9a95071\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1223649493},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:af0fe0ca926422a6471d5bf22fc0e682c36c24fba05496a3bdfac0b7d3733015\\\"],\\\"sizeBytes\\\":991832673},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b23c544d3894e5b31f66a18c554f03b0d29f92c2000c46b57b1c96da7ec25db9\\\"],\\\"sizeBytes\\\":943841779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b00c42562d477ef44d51f35950253a26d7debc7de86e53270831aafef5795c1\\\"],\\\"sizeBytes\\\":918289953},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0a09f5a3ba4f60cce0145769509bab92553c8075d210af4ac058965d2ae11efa\\\"],\\\"sizeBytes\\\":876160834},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6bba3f73c0066e42b24839e0d29f5dce2f36436f0a11f9f5e1029bccc5ed6578\\\"],\\\"sizeBytes\\\":862657321},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a9e8da5c6114f062b814936d4db7a47a04d248e160d6bb28ad4e4a081496ee4\\\"],\\\"sizeBytes\\\":772943435},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1e1faad2d9167d84e23585c1cea5962301845548043cf09578f943f79ca98016\\\"],\\\"sizeBytes\\\":687949580},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5e12e4dc52214d3ada5ba5106caebe079eac1d9292c2571a5fe83411ce8e900d\\\"],\\\"sizeBytes\\\":683195416},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:aa5e782406f71c048b1ac3a4bf5d1227ff4be81111114083ad4c7a209c6bfb5a\\\"],\\\"sizeBytes\\\":677942383},{\\\"names\\\":[\\
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ec8fd46dfb35ed10e8f98933166f69ce579c2f35b8db03d21e4c34fc544553e4\\\"],\\\"sizeBytes\\\":621648710},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae50e496bd6ae2d27298d997470b7cb0a426eeb8b7e2e9c7187a34cb03993998\\\"],\\\"sizeBytes\\\":589386806},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c6a4383333a1fd6d05c3f60ec793913f7937ee3d77f002d85e6c61e20507bf55\\\"],\\\"sizeBytes\\\":582154903},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c2dd7a03348212e49876f5359f233d893a541ed9b934df390201a05133a06982\\\"],\\\"sizeBytes\\\":558211175},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7af9f5c5af9d529840233ef4b519120cc0e3f14c4fe28cc43b0823f2c11d8f89\\\"],\\\"sizeBytes\\\":548752816},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e29dc9f042f2d0471171a0611070886cb2f7c57338ab7f112613417bcd33b278\\\"],\\\"sizeBytes\\\":529326739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b4c9cf268bb7abef7af187cd775d3f74d0bd33626250095428d53b705ee946\\\"],\\\"sizeBytes\\\":528956487},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d5a971d5889f167cfe61a64c366424b87c17a6dc141ffcc43406cdcbb50cae2a\\\"],\\\"sizeBytes\\\":518384969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-release@sha256:59727c4b3fef19e5149675cf3350735bbfe2c6588a57654b2e4552dd719f58b1\\\"],\\\"sizeBytes\\\":517999161},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c5ce3d1134d6500e2b8528516c1889d7bbc6259aba4981c6983395b0e9eeff65\\\"],\\\"sizeBytes\\\":514984269},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bfe394b58ec6195de8b8420e781b7630d85a412b9112d892fea903f92b783427\\\"],\\\"sizeBytes\\\":513221333},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1f23bac0a2a6cfd638e4af679dc787a8790d99c391f6e2
ade8087dc477ff765e\\\"],\\\"sizeBytes\\\":512274055},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:77fff570657d2fa0bfb709b2c8b6665bae0bf90a2be981d8dbca56c674715098\\\"],\\\"sizeBytes\\\":511227324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adb9f6f2fd701863c7caed747df43f83d3569ba9388cfa33ea7219ac6a606b11\\\"],\\\"sizeBytes\\\":511164375},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c032f87ae61d6f757ff3ce52620a70a43516591987731f25da77aba152f17458\\\"],\\\"sizeBytes\\\":508888171},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:812819a9d712b9e345ef5f1404b242c281e2518ad724baebc393ec0fd3b3d263\\\"],\\\"sizeBytes\\\":508544745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:313d1d8ca85e65236a59f058a3316c49436dde691b3a3930d5bc5e3b4b8c8a71\\\"],\\\"sizeBytes\\\":507972093},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c527b4e8239a1f4f4e0a851113e7dd633b7dcb9d75b0e7b21c23d26304abcb3\\\"],\\\"sizeBytes\\\":506480167},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ef199844317b7b012879ed8d29f9b6bc37fad8a6fdb336103cbd5cabc74c4302\\\"],\\\"sizeBytes\\\":506395599},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7d4a034950346bcd4e36e9e2f1343e0cf7a10cf544963f33d09c7eb2a1bfc634\\\"],\\\"sizeBytes\\\":505345991},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fbbcb390de2563a0177b92fba1b5a65777366e2dc80e2808b61d87c41b47a2d\\\"],\\\"sizeBytes\\\":505246690},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c983016b9ceed0fca1f51bd49c2653243c7e5af91cbf2f478b091db6e028252\\\"],\\\"sizeBytes\\\":504625081},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:712d334b7752d95580571059aae2c50e111d879af4fd8ea7cc3dbaf1a8e7dc69\\\"],\\\"sizeBytes\\\":495994673},{\\\"names\\\":[\\\"quay.io/openshift-releas
e-dev/ocp-v4.0-art-dev@sha256:4b5ea1ef4e09b673a0c68c8848ca162ab11d9ac373a377daa52dea702ffa3023\\\"],\\\"sizeBytes\\\":495065340},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:446bedea4916d3c1ee52be94137e484659e9561bd1de95c8189eee279aae984b\\\"],\\\"sizeBytes\\\":487096305},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a746a87b784ea1caa278fd0e012554f9df520b6fff665ea0bc4c83f487fed113\\\"],\\\"sizeBytes\\\":484450894},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2c4d5a681595e428ff4b5083648c13615eed80be9084a3d3fc68a0295079cb12\\\"],\\\"sizeBytes\\\":484187929},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:908eaaf624959bc7645f6d585d160431d1efb070e9a1f37fefed73a3be42b0d3\\\"],\\\"sizeBytes\\\":470681292},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ea5c8a93f30e0a4932da5697d22c0da7eda9a7035c0555eb006b6755e62bb2fc\\\"],\\\"sizeBytes\\\":468265024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdd28dfe7132e19af9f013f72cf120d970bc31b6b74693af262f8d2e82a096e1\\\"],\\\"sizeBytes\\\":467235741},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d12d0dc7eb86bbedf6b2d7689a28fd51f0d928f720e4a6783744304297c661ed\\\"],\\\"sizeBytes\\\":465090934},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9609c00207cc4db97f0fd6162eb429d7f81654137f020a677e30cba26a887a24\\\"],\\\"sizeBytes\\\":463705930},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:632e80bba5077068ecca05fddb95aedebad4493af6f36152c01c6ae490975b62\\\"],\\\"sizeBytes\\\":458126937},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bcb08821551e9a5b9f82aa794bcea673279cefb93cb47492e19ccac5e2cf18fe\\\"],\\\"sizeBytes\\\":456576198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:759fb1d5353dbbadd443f38631d977ca3aed9787b873be05cc9660532a252739\\\
"],\\\"sizeBytes\\\":448828620}]}}\" for node \"master-0\": Patch \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0/status?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 18 08:51:19.257292 master-0 kubenswrapper[7620]: I0318 08:51:19.257239 7620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 18 08:51:19.525837 master-0 kubenswrapper[7620]: I0318 08:51:19.525603 7620 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 18 08:51:19.531820 master-0 kubenswrapper[7620]: I0318 08:51:19.531103 7620 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 18 08:51:19.947561 master-0 kubenswrapper[7620]: I0318 08:51:19.947367 7620 patch_prober.go:28] interesting pod/openshift-config-operator-95bf4f4d-7kfrh container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.14:8443/healthz\": dial tcp 10.128.0.14:8443: connect: connection refused" start-of-body= Mar 18 08:51:19.947561 master-0 kubenswrapper[7620]: I0318 08:51:19.947477 7620 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-7kfrh" podUID="573d3a02-e395-4816-963a-cd614ef53f75" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.14:8443/healthz\": dial tcp 10.128.0.14:8443: connect: connection refused" Mar 18 08:51:21.650942 master-0 kubenswrapper[7620]: I0318 08:51:21.642173 7620 patch_prober.go:28] interesting pod/openshift-config-operator-95bf4f4d-7kfrh container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.128.0.14:8443/healthz\": dial tcp 10.128.0.14:8443: 
connect: connection refused" start-of-body= Mar 18 08:51:21.650942 master-0 kubenswrapper[7620]: I0318 08:51:21.642241 7620 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-7kfrh" podUID="573d3a02-e395-4816-963a-cd614ef53f75" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.14:8443/healthz\": dial tcp 10.128.0.14:8443: connect: connection refused" Mar 18 08:51:22.233224 master-0 kubenswrapper[7620]: I0318 08:51:22.233103 7620 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-master-0" Mar 18 08:51:22.269273 master-0 kubenswrapper[7620]: I0318 08:51:22.269199 7620 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-master-0" Mar 18 08:51:22.317289 master-0 kubenswrapper[7620]: I0318 08:51:22.317054 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-credential-operator/cloud-credential-operator-744f9dbf77-v8ft8" event={"ID":"e64ea71a-1e89-409a-9607-4d3cea093643","Type":"ContainerStarted","Data":"e5f92790c7654b0a25ab1a56363286dc9ded8118be054c142a750abffa16f187"} Mar 18 08:51:22.337169 master-0 kubenswrapper[7620]: I0318 08:51:22.337109 7620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-master-0" Mar 18 08:51:22.343105 master-0 kubenswrapper[7620]: I0318 08:51:22.342996 7620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cloud-credential-operator/cloud-credential-operator-744f9dbf77-v8ft8" podStartSLOduration=78.426002692 podStartE2EDuration="1m26.342962196s" podCreationTimestamp="2026-03-18 08:49:56 +0000 UTC" firstStartedPulling="2026-03-18 08:51:14.118681004 +0000 UTC m=+138.113462766" lastFinishedPulling="2026-03-18 08:51:22.035640518 +0000 UTC m=+146.030422270" observedRunningTime="2026-03-18 08:51:22.340332112 +0000 UTC m=+146.335113864" 
watchObservedRunningTime="2026-03-18 08:51:22.342962196 +0000 UTC m=+146.337743958" Mar 18 08:51:22.947420 master-0 kubenswrapper[7620]: I0318 08:51:22.947325 7620 patch_prober.go:28] interesting pod/openshift-config-operator-95bf4f4d-7kfrh container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.14:8443/healthz\": dial tcp 10.128.0.14:8443: connect: connection refused" start-of-body= Mar 18 08:51:22.947420 master-0 kubenswrapper[7620]: I0318 08:51:22.947414 7620 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-7kfrh" podUID="573d3a02-e395-4816-963a-cd614ef53f75" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.14:8443/healthz\": dial tcp 10.128.0.14:8443: connect: connection refused" Mar 18 08:51:24.639941 master-0 kubenswrapper[7620]: I0318 08:51:24.639793 7620 patch_prober.go:28] interesting pod/openshift-config-operator-95bf4f4d-7kfrh container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.128.0.14:8443/healthz\": dial tcp 10.128.0.14:8443: connect: connection refused" start-of-body= Mar 18 08:51:24.641290 master-0 kubenswrapper[7620]: I0318 08:51:24.639957 7620 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-7kfrh" podUID="573d3a02-e395-4816-963a-cd614ef53f75" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.14:8443/healthz\": dial tcp 10.128.0.14:8443: connect: connection refused" Mar 18 08:51:24.641290 master-0 kubenswrapper[7620]: I0318 08:51:24.640038 7620 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-7kfrh" Mar 18 08:51:24.641290 master-0 kubenswrapper[7620]: I0318 08:51:24.641141 7620 
patch_prober.go:28] interesting pod/openshift-config-operator-95bf4f4d-7kfrh container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.14:8443/healthz\": dial tcp 10.128.0.14:8443: connect: connection refused" start-of-body= Mar 18 08:51:24.641290 master-0 kubenswrapper[7620]: I0318 08:51:24.641254 7620 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-7kfrh" podUID="573d3a02-e395-4816-963a-cd614ef53f75" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.14:8443/healthz\": dial tcp 10.128.0.14:8443: connect: connection refused" Mar 18 08:51:24.641290 master-0 kubenswrapper[7620]: I0318 08:51:24.641287 7620 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="openshift-config-operator" containerStatusID={"Type":"cri-o","ID":"8d81a0734e052e6d6b0b5d4c93253a1f34a979d2c5960b81bcae57439a90ae9d"} pod="openshift-config-operator/openshift-config-operator-95bf4f4d-7kfrh" containerMessage="Container openshift-config-operator failed liveness probe, will be restarted" Mar 18 08:51:24.641789 master-0 kubenswrapper[7620]: I0318 08:51:24.641336 7620 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-7kfrh" podUID="573d3a02-e395-4816-963a-cd614ef53f75" containerName="openshift-config-operator" containerID="cri-o://8d81a0734e052e6d6b0b5d4c93253a1f34a979d2c5960b81bcae57439a90ae9d" gracePeriod=30 Mar 18 08:51:25.351257 master-0 kubenswrapper[7620]: I0318 08:51:25.351159 7620 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-config-operator_openshift-config-operator-95bf4f4d-7kfrh_573d3a02-e395-4816-963a-cd614ef53f75/openshift-config-operator/2.log" Mar 18 08:51:25.351917 master-0 kubenswrapper[7620]: I0318 08:51:25.351836 7620 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-config-operator_openshift-config-operator-95bf4f4d-7kfrh_573d3a02-e395-4816-963a-cd614ef53f75/openshift-config-operator/1.log" Mar 18 08:51:25.352646 master-0 kubenswrapper[7620]: I0318 08:51:25.352563 7620 generic.go:334] "Generic (PLEG): container finished" podID="573d3a02-e395-4816-963a-cd614ef53f75" containerID="8d81a0734e052e6d6b0b5d4c93253a1f34a979d2c5960b81bcae57439a90ae9d" exitCode=255 Mar 18 08:51:25.352774 master-0 kubenswrapper[7620]: I0318 08:51:25.352652 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-7kfrh" event={"ID":"573d3a02-e395-4816-963a-cd614ef53f75","Type":"ContainerDied","Data":"8d81a0734e052e6d6b0b5d4c93253a1f34a979d2c5960b81bcae57439a90ae9d"} Mar 18 08:51:25.352774 master-0 kubenswrapper[7620]: I0318 08:51:25.352709 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-7kfrh" event={"ID":"573d3a02-e395-4816-963a-cd614ef53f75","Type":"ContainerStarted","Data":"8a367d5d8cce98fe512b83ef657e1c2b1d37fead9ef4db0e545e39ebb8df8515"} Mar 18 08:51:25.352774 master-0 kubenswrapper[7620]: I0318 08:51:25.352742 7620 scope.go:117] "RemoveContainer" containerID="8d498d3ba632abf0251e7798cf27060435ed49cd813b6245a191fca82502b1e9" Mar 18 08:51:25.353939 master-0 kubenswrapper[7620]: I0318 08:51:25.353829 7620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-7kfrh" Mar 18 08:51:26.363312 master-0 kubenswrapper[7620]: I0318 08:51:26.363229 7620 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-config-operator_openshift-config-operator-95bf4f4d-7kfrh_573d3a02-e395-4816-963a-cd614ef53f75/openshift-config-operator/2.log" Mar 18 08:51:28.912636 master-0 kubenswrapper[7620]: E0318 08:51:28.912565 7620 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node 
\"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 18 08:51:28.947239 master-0 kubenswrapper[7620]: I0318 08:51:28.947185 7620 patch_prober.go:28] interesting pod/openshift-config-operator-95bf4f4d-7kfrh container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.14:8443/healthz\": dial tcp 10.128.0.14:8443: connect: connection refused" start-of-body= Mar 18 08:51:28.947626 master-0 kubenswrapper[7620]: I0318 08:51:28.947572 7620 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-7kfrh" podUID="573d3a02-e395-4816-963a-cd614ef53f75" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.14:8443/healthz\": dial tcp 10.128.0.14:8443: connect: connection refused" Mar 18 08:51:29.131081 master-0 kubenswrapper[7620]: E0318 08:51:29.131001 7620 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="800ms" Mar 18 08:51:29.262309 master-0 kubenswrapper[7620]: I0318 08:51:29.262201 7620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 18 08:51:30.639218 master-0 kubenswrapper[7620]: I0318 08:51:30.639121 7620 patch_prober.go:28] interesting pod/openshift-config-operator-95bf4f4d-7kfrh container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.128.0.14:8443/healthz\": dial tcp 10.128.0.14:8443: connect: connection refused" start-of-body= Mar 18 08:51:30.639999 master-0 kubenswrapper[7620]: I0318 
08:51:30.639308 7620 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-7kfrh" podUID="573d3a02-e395-4816-963a-cd614ef53f75" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.14:8443/healthz\": dial tcp 10.128.0.14:8443: connect: connection refused" Mar 18 08:51:31.947942 master-0 kubenswrapper[7620]: I0318 08:51:31.947825 7620 patch_prober.go:28] interesting pod/openshift-config-operator-95bf4f4d-7kfrh container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.14:8443/healthz\": dial tcp 10.128.0.14:8443: connect: connection refused" start-of-body= Mar 18 08:51:31.948645 master-0 kubenswrapper[7620]: I0318 08:51:31.948010 7620 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-7kfrh" podUID="573d3a02-e395-4816-963a-cd614ef53f75" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.14:8443/healthz\": dial tcp 10.128.0.14:8443: connect: connection refused" Mar 18 08:51:33.639027 master-0 kubenswrapper[7620]: I0318 08:51:33.638921 7620 patch_prober.go:28] interesting pod/openshift-config-operator-95bf4f4d-7kfrh container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.128.0.14:8443/healthz\": dial tcp 10.128.0.14:8443: connect: connection refused" start-of-body= Mar 18 08:51:33.640137 master-0 kubenswrapper[7620]: I0318 08:51:33.639054 7620 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-7kfrh" podUID="573d3a02-e395-4816-963a-cd614ef53f75" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.14:8443/healthz\": dial tcp 10.128.0.14:8443: connect: connection refused" Mar 18 08:51:34.947810 master-0 
kubenswrapper[7620]: I0318 08:51:34.947670 7620 patch_prober.go:28] interesting pod/openshift-config-operator-95bf4f4d-7kfrh container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.14:8443/healthz\": dial tcp 10.128.0.14:8443: connect: connection refused" start-of-body= Mar 18 08:51:34.948590 master-0 kubenswrapper[7620]: I0318 08:51:34.947836 7620 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-7kfrh" podUID="573d3a02-e395-4816-963a-cd614ef53f75" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.14:8443/healthz\": dial tcp 10.128.0.14:8443: connect: connection refused" Mar 18 08:51:36.640053 master-0 kubenswrapper[7620]: I0318 08:51:36.639692 7620 patch_prober.go:28] interesting pod/openshift-config-operator-95bf4f4d-7kfrh container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.128.0.14:8443/healthz\": dial tcp 10.128.0.14:8443: connect: connection refused" start-of-body= Mar 18 08:51:36.641509 master-0 kubenswrapper[7620]: I0318 08:51:36.640205 7620 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-7kfrh" podUID="573d3a02-e395-4816-963a-cd614ef53f75" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.14:8443/healthz\": dial tcp 10.128.0.14:8443: connect: connection refused" Mar 18 08:51:36.641509 master-0 kubenswrapper[7620]: I0318 08:51:36.640328 7620 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-7kfrh" Mar 18 08:51:36.641973 master-0 kubenswrapper[7620]: I0318 08:51:36.641885 7620 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="openshift-config-operator" 
containerStatusID={"Type":"cri-o","ID":"8a367d5d8cce98fe512b83ef657e1c2b1d37fead9ef4db0e545e39ebb8df8515"} pod="openshift-config-operator/openshift-config-operator-95bf4f4d-7kfrh" containerMessage="Container openshift-config-operator failed liveness probe, will be restarted" Mar 18 08:51:36.642117 master-0 kubenswrapper[7620]: I0318 08:51:36.641970 7620 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-7kfrh" podUID="573d3a02-e395-4816-963a-cd614ef53f75" containerName="openshift-config-operator" containerID="cri-o://8a367d5d8cce98fe512b83ef657e1c2b1d37fead9ef4db0e545e39ebb8df8515" gracePeriod=30 Mar 18 08:51:36.642563 master-0 kubenswrapper[7620]: I0318 08:51:36.642449 7620 patch_prober.go:28] interesting pod/openshift-config-operator-95bf4f4d-7kfrh container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.14:8443/healthz\": dial tcp 10.128.0.14:8443: connect: connection refused" start-of-body= Mar 18 08:51:36.643290 master-0 kubenswrapper[7620]: I0318 08:51:36.643068 7620 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-7kfrh" podUID="573d3a02-e395-4816-963a-cd614ef53f75" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.14:8443/healthz\": dial tcp 10.128.0.14:8443: connect: connection refused" Mar 18 08:51:37.090440 master-0 kubenswrapper[7620]: E0318 08:51:37.090376 7620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openshift-config-operator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=openshift-config-operator pod=openshift-config-operator-95bf4f4d-7kfrh_openshift-config-operator(573d3a02-e395-4816-963a-cd614ef53f75)\"" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-7kfrh" 
podUID="573d3a02-e395-4816-963a-cd614ef53f75" Mar 18 08:51:37.447585 master-0 kubenswrapper[7620]: I0318 08:51:37.447483 7620 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-config-operator_openshift-config-operator-95bf4f4d-7kfrh_573d3a02-e395-4816-963a-cd614ef53f75/openshift-config-operator/3.log" Mar 18 08:51:37.448632 master-0 kubenswrapper[7620]: I0318 08:51:37.448580 7620 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-config-operator_openshift-config-operator-95bf4f4d-7kfrh_573d3a02-e395-4816-963a-cd614ef53f75/openshift-config-operator/2.log" Mar 18 08:51:37.449437 master-0 kubenswrapper[7620]: I0318 08:51:37.449387 7620 generic.go:334] "Generic (PLEG): container finished" podID="573d3a02-e395-4816-963a-cd614ef53f75" containerID="8a367d5d8cce98fe512b83ef657e1c2b1d37fead9ef4db0e545e39ebb8df8515" exitCode=255 Mar 18 08:51:37.449543 master-0 kubenswrapper[7620]: I0318 08:51:37.449434 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-7kfrh" event={"ID":"573d3a02-e395-4816-963a-cd614ef53f75","Type":"ContainerDied","Data":"8a367d5d8cce98fe512b83ef657e1c2b1d37fead9ef4db0e545e39ebb8df8515"} Mar 18 08:51:37.449543 master-0 kubenswrapper[7620]: I0318 08:51:37.449509 7620 scope.go:117] "RemoveContainer" containerID="8d81a0734e052e6d6b0b5d4c93253a1f34a979d2c5960b81bcae57439a90ae9d" Mar 18 08:51:37.450601 master-0 kubenswrapper[7620]: I0318 08:51:37.450545 7620 scope.go:117] "RemoveContainer" containerID="8a367d5d8cce98fe512b83ef657e1c2b1d37fead9ef4db0e545e39ebb8df8515" Mar 18 08:51:37.450978 master-0 kubenswrapper[7620]: E0318 08:51:37.450926 7620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openshift-config-operator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=openshift-config-operator 
pod=openshift-config-operator-95bf4f4d-7kfrh_openshift-config-operator(573d3a02-e395-4816-963a-cd614ef53f75)\"" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-7kfrh" podUID="573d3a02-e395-4816-963a-cd614ef53f75" Mar 18 08:51:37.957357 master-0 kubenswrapper[7620]: I0318 08:51:37.957255 7620 patch_prober.go:28] interesting pod/authentication-operator-5885bfd7f4-5g8tz container/authentication-operator namespace/openshift-authentication-operator: Liveness probe status=failure output="Get \"https://10.128.0.26:8443/healthz\": dial tcp 10.128.0.26:8443: connect: connection refused" start-of-body= Mar 18 08:51:37.958152 master-0 kubenswrapper[7620]: I0318 08:51:37.957358 7620 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-5g8tz" podUID="c110b293-2c6b-496b-b015-23aada98cb4b" containerName="authentication-operator" probeResult="failure" output="Get \"https://10.128.0.26:8443/healthz\": dial tcp 10.128.0.26:8443: connect: connection refused" Mar 18 08:51:38.459591 master-0 kubenswrapper[7620]: I0318 08:51:38.459500 7620 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-config-operator_openshift-config-operator-95bf4f4d-7kfrh_573d3a02-e395-4816-963a-cd614ef53f75/openshift-config-operator/3.log" Mar 18 08:51:38.913354 master-0 kubenswrapper[7620]: E0318 08:51:38.913257 7620 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 18 08:51:39.932565 master-0 kubenswrapper[7620]: E0318 08:51:39.932453 7620 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded 
while awaiting headers)" interval="1.6s" Mar 18 08:51:41.063141 master-0 kubenswrapper[7620]: E0318 08:51:41.062947 7620 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{dns-default-ck7b5.189de358f1d0a9f6 openshift-dns 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-dns,Name:dns-default-ck7b5,UID:b35ab145-16a7-4ef1-86e8-0afb6ff469fd,APIVersion:v1,ResourceVersion:7403,FieldPath:spec.containers{dns},},Reason:Created,Message:Created container: dns,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 08:49:59.40745471 +0000 UTC m=+63.402236462,LastTimestamp:2026-03-18 08:49:59.40745471 +0000 UTC m=+63.402236462,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 08:51:42.694514 master-0 kubenswrapper[7620]: E0318 08:51:42.694429 7620 projected.go:194] Error preparing data for projected volume kube-api-access-zj9rk for pod openshift-machine-api/cluster-baremetal-operator-6f69995874-cf6qn: failed to fetch token: Timeout: request did not complete within requested timeout - context deadline exceeded Mar 18 08:51:42.695400 master-0 kubenswrapper[7620]: E0318 08:51:42.694620 7620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/97730ec2-e6f1-4f8c-b85c-3c10623d06ce-kube-api-access-zj9rk podName:97730ec2-e6f1-4f8c-b85c-3c10623d06ce nodeName:}" failed. No retries permitted until 2026-03-18 08:51:44.694579072 +0000 UTC m=+168.689360864 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-zj9rk" (UniqueName: "kubernetes.io/projected/97730ec2-e6f1-4f8c-b85c-3c10623d06ce-kube-api-access-zj9rk") pod "cluster-baremetal-operator-6f69995874-cf6qn" (UID: "97730ec2-e6f1-4f8c-b85c-3c10623d06ce") : failed to fetch token: Timeout: request did not complete within requested timeout - context deadline exceeded Mar 18 08:51:42.696782 master-0 kubenswrapper[7620]: E0318 08:51:42.696735 7620 projected.go:194] Error preparing data for projected volume kube-api-access-vtz82 for pod openshift-cluster-samples-operator/cluster-samples-operator-85f7577d78-swcvh: failed to fetch token: Timeout: request did not complete within requested timeout - context deadline exceeded Mar 18 08:51:42.696843 master-0 kubenswrapper[7620]: E0318 08:51:42.696819 7620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/18921497-d8ed-42d8-bf3c-a027566ebe85-kube-api-access-vtz82 podName:18921497-d8ed-42d8-bf3c-a027566ebe85 nodeName:}" failed. No retries permitted until 2026-03-18 08:51:44.696804294 +0000 UTC m=+168.691586056 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-vtz82" (UniqueName: "kubernetes.io/projected/18921497-d8ed-42d8-bf3c-a027566ebe85-kube-api-access-vtz82") pod "cluster-samples-operator-85f7577d78-swcvh" (UID: "18921497-d8ed-42d8-bf3c-a027566ebe85") : failed to fetch token: Timeout: request did not complete within requested timeout - context deadline exceeded Mar 18 08:51:42.697422 master-0 kubenswrapper[7620]: E0318 08:51:42.697350 7620 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-controller-manager/installer-2-master-0: failed to fetch token: Timeout: request did not complete within requested timeout - context deadline exceeded Mar 18 08:51:42.697565 master-0 kubenswrapper[7620]: E0318 08:51:42.697528 7620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/28d2bb97-ff93-4772-96fd-318fa62e3a87-kube-api-access podName:28d2bb97-ff93-4772-96fd-318fa62e3a87 nodeName:}" failed. No retries permitted until 2026-03-18 08:51:44.697470633 +0000 UTC m=+168.692252425 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/28d2bb97-ff93-4772-96fd-318fa62e3a87-kube-api-access") pod "installer-2-master-0" (UID: "28d2bb97-ff93-4772-96fd-318fa62e3a87") : failed to fetch token: Timeout: request did not complete within requested timeout - context deadline exceeded Mar 18 08:51:44.738717 master-0 kubenswrapper[7620]: I0318 08:51:44.738665 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/28d2bb97-ff93-4772-96fd-318fa62e3a87-kube-api-access\") pod \"installer-2-master-0\" (UID: \"28d2bb97-ff93-4772-96fd-318fa62e3a87\") " pod="openshift-kube-controller-manager/installer-2-master-0" Mar 18 08:51:44.739298 master-0 kubenswrapper[7620]: I0318 08:51:44.738728 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zj9rk\" (UniqueName: \"kubernetes.io/projected/97730ec2-e6f1-4f8c-b85c-3c10623d06ce-kube-api-access-zj9rk\") pod \"cluster-baremetal-operator-6f69995874-cf6qn\" (UID: \"97730ec2-e6f1-4f8c-b85c-3c10623d06ce\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-cf6qn" Mar 18 08:51:44.739298 master-0 kubenswrapper[7620]: I0318 08:51:44.738779 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vtz82\" (UniqueName: \"kubernetes.io/projected/18921497-d8ed-42d8-bf3c-a027566ebe85-kube-api-access-vtz82\") pod \"cluster-samples-operator-85f7577d78-swcvh\" (UID: \"18921497-d8ed-42d8-bf3c-a027566ebe85\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-85f7577d78-swcvh" Mar 18 08:51:47.809807 master-0 kubenswrapper[7620]: I0318 08:51:47.809723 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/28d2bb97-ff93-4772-96fd-318fa62e3a87-kube-api-access\") pod \"installer-2-master-0\" (UID: 
\"28d2bb97-ff93-4772-96fd-318fa62e3a87\") " pod="openshift-kube-controller-manager/installer-2-master-0" Mar 18 08:51:47.816588 master-0 kubenswrapper[7620]: I0318 08:51:47.816501 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zj9rk\" (UniqueName: \"kubernetes.io/projected/97730ec2-e6f1-4f8c-b85c-3c10623d06ce-kube-api-access-zj9rk\") pod \"cluster-baremetal-operator-6f69995874-cf6qn\" (UID: \"97730ec2-e6f1-4f8c-b85c-3c10623d06ce\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-cf6qn" Mar 18 08:51:47.818444 master-0 kubenswrapper[7620]: I0318 08:51:47.818249 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vtz82\" (UniqueName: \"kubernetes.io/projected/18921497-d8ed-42d8-bf3c-a027566ebe85-kube-api-access-vtz82\") pod \"cluster-samples-operator-85f7577d78-swcvh\" (UID: \"18921497-d8ed-42d8-bf3c-a027566ebe85\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-85f7577d78-swcvh" Mar 18 08:51:47.838043 master-0 kubenswrapper[7620]: I0318 08:51:47.837181 7620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-2-master-0" Mar 18 08:51:47.936566 master-0 kubenswrapper[7620]: I0318 08:51:47.936511 7620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-cf6qn" Mar 18 08:51:48.040182 master-0 kubenswrapper[7620]: I0318 08:51:48.040096 7620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-dtxm4" Mar 18 08:51:48.050689 master-0 kubenswrapper[7620]: I0318 08:51:48.049231 7620 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-85f7577d78-swcvh" Mar 18 08:51:48.223994 master-0 kubenswrapper[7620]: I0318 08:51:48.223719 7620 scope.go:117] "RemoveContainer" containerID="8a367d5d8cce98fe512b83ef657e1c2b1d37fead9ef4db0e545e39ebb8df8515" Mar 18 08:51:48.223994 master-0 kubenswrapper[7620]: E0318 08:51:48.223932 7620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openshift-config-operator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=openshift-config-operator pod=openshift-config-operator-95bf4f4d-7kfrh_openshift-config-operator(573d3a02-e395-4816-963a-cd614ef53f75)\"" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-7kfrh" podUID="573d3a02-e395-4816-963a-cd614ef53f75" Mar 18 08:51:48.282405 master-0 kubenswrapper[7620]: I0318 08:51:48.282340 7620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-2-master-0"] Mar 18 08:51:48.534904 master-0 kubenswrapper[7620]: I0318 08:51:48.530241 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-2-master-0" event={"ID":"28d2bb97-ff93-4772-96fd-318fa62e3a87","Type":"ContainerStarted","Data":"a0506e567232af6a1d871e8bdc27ad4000f63b8618b9625c8e1c8682da50383b"} Mar 18 08:51:48.574927 master-0 kubenswrapper[7620]: I0318 08:51:48.566938 7620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/cluster-baremetal-operator-6f69995874-cf6qn"] Mar 18 08:51:48.574927 master-0 kubenswrapper[7620]: I0318 08:51:48.570463 7620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-85f7577d78-swcvh"] Mar 18 08:51:48.577564 master-0 kubenswrapper[7620]: W0318 08:51:48.576199 7620 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod97730ec2_e6f1_4f8c_b85c_3c10623d06ce.slice/crio-27e819688a289fa256559a318b6523e53569525673491824d2f15c32bbc44e17 WatchSource:0}: Error finding container 27e819688a289fa256559a318b6523e53569525673491824d2f15c32bbc44e17: Status 404 returned error can't find the container with id 27e819688a289fa256559a318b6523e53569525673491824d2f15c32bbc44e17 Mar 18 08:51:48.956357 master-0 kubenswrapper[7620]: I0318 08:51:48.956133 7620 patch_prober.go:28] interesting pod/authentication-operator-5885bfd7f4-5g8tz container/authentication-operator namespace/openshift-authentication-operator: Liveness probe status=failure output="Get \"https://10.128.0.26:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 18 08:51:48.956357 master-0 kubenswrapper[7620]: I0318 08:51:48.956214 7620 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-5g8tz" podUID="c110b293-2c6b-496b-b015-23aada98cb4b" containerName="authentication-operator" probeResult="failure" output="Get \"https://10.128.0.26:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 18 08:51:49.407054 master-0 kubenswrapper[7620]: I0318 08:51:49.406975 7620 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-6m4q6"] Mar 18 08:51:49.407694 master-0 kubenswrapper[7620]: I0318 08:51:49.407587 7620 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-6m4q6" podUID="833eeb49-a463-432a-a684-a27c66ecae7d" containerName="registry-server" containerID="cri-o://b9b14f7f666700509a5494b067b2a60b7cb42e06b28d07a9c4945f482a1d974b" gracePeriod=2 Mar 18 08:51:49.520810 master-0 kubenswrapper[7620]: E0318 08:51:49.520761 7620 
cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod833eeb49_a463_432a_a684_a27c66ecae7d.slice/crio-b9b14f7f666700509a5494b067b2a60b7cb42e06b28d07a9c4945f482a1d974b.scope\": RecentStats: unable to find data in memory cache]" Mar 18 08:51:49.541516 master-0 kubenswrapper[7620]: I0318 08:51:49.541413 7620 generic.go:334] "Generic (PLEG): container finished" podID="833eeb49-a463-432a-a684-a27c66ecae7d" containerID="b9b14f7f666700509a5494b067b2a60b7cb42e06b28d07a9c4945f482a1d974b" exitCode=0 Mar 18 08:51:49.541516 master-0 kubenswrapper[7620]: I0318 08:51:49.541475 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6m4q6" event={"ID":"833eeb49-a463-432a-a684-a27c66ecae7d","Type":"ContainerDied","Data":"b9b14f7f666700509a5494b067b2a60b7cb42e06b28d07a9c4945f482a1d974b"} Mar 18 08:51:49.543517 master-0 kubenswrapper[7620]: I0318 08:51:49.543419 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-cf6qn" event={"ID":"97730ec2-e6f1-4f8c-b85c-3c10623d06ce","Type":"ContainerStarted","Data":"27e819688a289fa256559a318b6523e53569525673491824d2f15c32bbc44e17"} Mar 18 08:51:49.545362 master-0 kubenswrapper[7620]: I0318 08:51:49.545315 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-85f7577d78-swcvh" event={"ID":"18921497-d8ed-42d8-bf3c-a027566ebe85","Type":"ContainerStarted","Data":"e8459c0c82ddc5a6e864e94a80eda98d197ebe97363ec23c2d9041a3ae2c51bb"} Mar 18 08:51:49.549206 master-0 kubenswrapper[7620]: I0318 08:51:49.547975 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-2-master-0" 
event={"ID":"28d2bb97-ff93-4772-96fd-318fa62e3a87","Type":"ContainerStarted","Data":"cf9e9bddbf3499401835a2ff896142cd9409d0448e901ff2faa3c5fb21f85146"} Mar 18 08:51:49.569655 master-0 kubenswrapper[7620]: I0318 08:51:49.569571 7620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/installer-2-master-0" podStartSLOduration=111.569542689 podStartE2EDuration="1m51.569542689s" podCreationTimestamp="2026-03-18 08:49:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 08:51:49.565896197 +0000 UTC m=+173.560677969" watchObservedRunningTime="2026-03-18 08:51:49.569542689 +0000 UTC m=+173.564324461" Mar 18 08:51:49.895305 master-0 kubenswrapper[7620]: I0318 08:51:49.895267 7620 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6m4q6" Mar 18 08:51:50.002299 master-0 kubenswrapper[7620]: I0318 08:51:50.002226 7620 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-xfq8l"] Mar 18 08:51:50.002798 master-0 kubenswrapper[7620]: I0318 08:51:50.002608 7620 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-xfq8l" podUID="95843eb5-33bc-48e8-afc4-a0bd8c524e24" containerName="registry-server" containerID="cri-o://cf6929903f6267ae579fcfd9810a3ba405d86b38c45e7d904736f156b99ba651" gracePeriod=2 Mar 18 08:51:50.025859 master-0 kubenswrapper[7620]: I0318 08:51:50.025804 7620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gb6ns\" (UniqueName: \"kubernetes.io/projected/833eeb49-a463-432a-a684-a27c66ecae7d-kube-api-access-gb6ns\") pod \"833eeb49-a463-432a-a684-a27c66ecae7d\" (UID: \"833eeb49-a463-432a-a684-a27c66ecae7d\") " Mar 18 08:51:50.026021 master-0 kubenswrapper[7620]: I0318 08:51:50.026000 7620 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/833eeb49-a463-432a-a684-a27c66ecae7d-catalog-content\") pod \"833eeb49-a463-432a-a684-a27c66ecae7d\" (UID: \"833eeb49-a463-432a-a684-a27c66ecae7d\") " Mar 18 08:51:50.026107 master-0 kubenswrapper[7620]: I0318 08:51:50.026089 7620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/833eeb49-a463-432a-a684-a27c66ecae7d-utilities\") pod \"833eeb49-a463-432a-a684-a27c66ecae7d\" (UID: \"833eeb49-a463-432a-a684-a27c66ecae7d\") " Mar 18 08:51:50.027097 master-0 kubenswrapper[7620]: I0318 08:51:50.027041 7620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/833eeb49-a463-432a-a684-a27c66ecae7d-utilities" (OuterVolumeSpecName: "utilities") pod "833eeb49-a463-432a-a684-a27c66ecae7d" (UID: "833eeb49-a463-432a-a684-a27c66ecae7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 18 08:51:50.034908 master-0 kubenswrapper[7620]: I0318 08:51:50.034265 7620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/833eeb49-a463-432a-a684-a27c66ecae7d-kube-api-access-gb6ns" (OuterVolumeSpecName: "kube-api-access-gb6ns") pod "833eeb49-a463-432a-a684-a27c66ecae7d" (UID: "833eeb49-a463-432a-a684-a27c66ecae7d"). InnerVolumeSpecName "kube-api-access-gb6ns". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 08:51:50.057647 master-0 kubenswrapper[7620]: I0318 08:51:50.056034 7620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/833eeb49-a463-432a-a684-a27c66ecae7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "833eeb49-a463-432a-a684-a27c66ecae7d" (UID: "833eeb49-a463-432a-a684-a27c66ecae7d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 18 08:51:50.130843 master-0 kubenswrapper[7620]: I0318 08:51:50.127573 7620 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/833eeb49-a463-432a-a684-a27c66ecae7d-utilities\") on node \"master-0\" DevicePath \"\"" Mar 18 08:51:50.130843 master-0 kubenswrapper[7620]: I0318 08:51:50.127653 7620 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gb6ns\" (UniqueName: \"kubernetes.io/projected/833eeb49-a463-432a-a684-a27c66ecae7d-kube-api-access-gb6ns\") on node \"master-0\" DevicePath \"\"" Mar 18 08:51:50.130843 master-0 kubenswrapper[7620]: I0318 08:51:50.127667 7620 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/833eeb49-a463-432a-a684-a27c66ecae7d-catalog-content\") on node \"master-0\" DevicePath \"\"" Mar 18 08:51:50.554407 master-0 kubenswrapper[7620]: I0318 08:51:50.554356 7620 generic.go:334] "Generic (PLEG): container finished" podID="95843eb5-33bc-48e8-afc4-a0bd8c524e24" containerID="cf6929903f6267ae579fcfd9810a3ba405d86b38c45e7d904736f156b99ba651" exitCode=0 Mar 18 08:51:50.554491 master-0 kubenswrapper[7620]: I0318 08:51:50.554434 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xfq8l" event={"ID":"95843eb5-33bc-48e8-afc4-a0bd8c524e24","Type":"ContainerDied","Data":"cf6929903f6267ae579fcfd9810a3ba405d86b38c45e7d904736f156b99ba651"} Mar 18 08:51:50.556609 master-0 kubenswrapper[7620]: I0318 08:51:50.556573 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6m4q6" event={"ID":"833eeb49-a463-432a-a684-a27c66ecae7d","Type":"ContainerDied","Data":"2faccda4af6f07d470c0a6a5d3b97da84b97a7597f4e71f78d12a05ba633ee32"} Mar 18 08:51:50.556730 master-0 kubenswrapper[7620]: I0318 08:51:50.556634 7620 scope.go:117] "RemoveContainer" 
containerID="b9b14f7f666700509a5494b067b2a60b7cb42e06b28d07a9c4945f482a1d974b" Mar 18 08:51:50.556730 master-0 kubenswrapper[7620]: I0318 08:51:50.556718 7620 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6m4q6" Mar 18 08:51:50.574757 master-0 kubenswrapper[7620]: I0318 08:51:50.574681 7620 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-6m4q6"] Mar 18 08:51:50.591047 master-0 kubenswrapper[7620]: I0318 08:51:50.589295 7620 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-6m4q6"] Mar 18 08:51:51.392933 master-0 kubenswrapper[7620]: I0318 08:51:51.392648 7620 scope.go:117] "RemoveContainer" containerID="85878e2d9501d02753146dd527d49eca7a595cbe551c93b013706469d444a4fe" Mar 18 08:51:51.456122 master-0 kubenswrapper[7620]: I0318 08:51:51.455986 7620 scope.go:117] "RemoveContainer" containerID="f950fd1dfcd2c46d560ce00f1e2b44e70601dab057e70cf84c3cbc718a9920c0" Mar 18 08:51:51.475742 master-0 kubenswrapper[7620]: I0318 08:51:51.475700 7620 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-xfq8l" Mar 18 08:51:51.534800 master-0 kubenswrapper[7620]: E0318 08:51:51.534753 7620 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="3.2s" Mar 18 08:51:51.582239 master-0 kubenswrapper[7620]: I0318 08:51:51.582144 7620 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-66b84d69b-7h94d_94e9e4fc-bfaa-4ba5-ad6d-fe76b91932e9/ingress-operator/0.log" Mar 18 08:51:51.582239 master-0 kubenswrapper[7620]: I0318 08:51:51.582187 7620 generic.go:334] "Generic (PLEG): container finished" podID="94e9e4fc-bfaa-4ba5-ad6d-fe76b91932e9" containerID="e63c5c1d709e6609cc982cf30b568c18af00671995969feb6d602b6e7ea5ee6b" exitCode=1 Mar 18 08:51:51.582239 master-0 kubenswrapper[7620]: I0318 08:51:51.582239 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-66b84d69b-7h94d" event={"ID":"94e9e4fc-bfaa-4ba5-ad6d-fe76b91932e9","Type":"ContainerDied","Data":"e63c5c1d709e6609cc982cf30b568c18af00671995969feb6d602b6e7ea5ee6b"} Mar 18 08:51:51.582685 master-0 kubenswrapper[7620]: I0318 08:51:51.582661 7620 scope.go:117] "RemoveContainer" containerID="e63c5c1d709e6609cc982cf30b568c18af00671995969feb6d602b6e7ea5ee6b" Mar 18 08:51:51.587326 master-0 kubenswrapper[7620]: I0318 08:51:51.584492 7620 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-3-master-0_c6fb9336-3f19-4220-93ee-a5a61e26340b/installer/0.log" Mar 18 08:51:51.587326 master-0 kubenswrapper[7620]: I0318 08:51:51.584577 7620 generic.go:334] "Generic (PLEG): container finished" podID="c6fb9336-3f19-4220-93ee-a5a61e26340b" 
containerID="a0811de98d66913ef78505cbfb268009b3b82b021cf08be06bcac5fba5f9e228" exitCode=1 Mar 18 08:51:51.587326 master-0 kubenswrapper[7620]: I0318 08:51:51.584655 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-3-master-0" event={"ID":"c6fb9336-3f19-4220-93ee-a5a61e26340b","Type":"ContainerDied","Data":"a0811de98d66913ef78505cbfb268009b3b82b021cf08be06bcac5fba5f9e228"} Mar 18 08:51:51.599155 master-0 kubenswrapper[7620]: I0318 08:51:51.599095 7620 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-xfq8l" Mar 18 08:51:51.599912 master-0 kubenswrapper[7620]: I0318 08:51:51.599598 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xfq8l" event={"ID":"95843eb5-33bc-48e8-afc4-a0bd8c524e24","Type":"ContainerDied","Data":"8cf6cb239318c19f00c4b102b3d88701d2d35a1bad35017ce524b3c32233b02f"} Mar 18 08:51:51.599976 master-0 kubenswrapper[7620]: I0318 08:51:51.599961 7620 scope.go:117] "RemoveContainer" containerID="cf6929903f6267ae579fcfd9810a3ba405d86b38c45e7d904736f156b99ba651" Mar 18 08:51:51.618235 master-0 kubenswrapper[7620]: I0318 08:51:51.618197 7620 scope.go:117] "RemoveContainer" containerID="4d5c18186f643b1a4f079e60d0bd9e03dcffe8e2274cd8cd7f1881659ac942b3" Mar 18 08:51:51.650685 master-0 kubenswrapper[7620]: I0318 08:51:51.650640 7620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/95843eb5-33bc-48e8-afc4-a0bd8c524e24-utilities\") pod \"95843eb5-33bc-48e8-afc4-a0bd8c524e24\" (UID: \"95843eb5-33bc-48e8-afc4-a0bd8c524e24\") " Mar 18 08:51:51.650911 master-0 kubenswrapper[7620]: I0318 08:51:51.650799 7620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j55mv\" (UniqueName: \"kubernetes.io/projected/95843eb5-33bc-48e8-afc4-a0bd8c524e24-kube-api-access-j55mv\") pod 
\"95843eb5-33bc-48e8-afc4-a0bd8c524e24\" (UID: \"95843eb5-33bc-48e8-afc4-a0bd8c524e24\") " Mar 18 08:51:51.650911 master-0 kubenswrapper[7620]: I0318 08:51:51.650888 7620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/95843eb5-33bc-48e8-afc4-a0bd8c524e24-catalog-content\") pod \"95843eb5-33bc-48e8-afc4-a0bd8c524e24\" (UID: \"95843eb5-33bc-48e8-afc4-a0bd8c524e24\") " Mar 18 08:51:51.652440 master-0 kubenswrapper[7620]: I0318 08:51:51.652391 7620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/95843eb5-33bc-48e8-afc4-a0bd8c524e24-utilities" (OuterVolumeSpecName: "utilities") pod "95843eb5-33bc-48e8-afc4-a0bd8c524e24" (UID: "95843eb5-33bc-48e8-afc4-a0bd8c524e24"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 18 08:51:51.657653 master-0 kubenswrapper[7620]: I0318 08:51:51.657595 7620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/95843eb5-33bc-48e8-afc4-a0bd8c524e24-kube-api-access-j55mv" (OuterVolumeSpecName: "kube-api-access-j55mv") pod "95843eb5-33bc-48e8-afc4-a0bd8c524e24" (UID: "95843eb5-33bc-48e8-afc4-a0bd8c524e24"). InnerVolumeSpecName "kube-api-access-j55mv". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 08:51:51.661980 master-0 kubenswrapper[7620]: I0318 08:51:51.661932 7620 scope.go:117] "RemoveContainer" containerID="6db6cc72dff8a4c58675032fad1afd316f02d7468d346af6104e95e0c8d8fce4" Mar 18 08:51:51.735387 master-0 kubenswrapper[7620]: I0318 08:51:51.735330 7620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/95843eb5-33bc-48e8-afc4-a0bd8c524e24-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "95843eb5-33bc-48e8-afc4-a0bd8c524e24" (UID: "95843eb5-33bc-48e8-afc4-a0bd8c524e24"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 18 08:51:51.752729 master-0 kubenswrapper[7620]: I0318 08:51:51.752679 7620 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/95843eb5-33bc-48e8-afc4-a0bd8c524e24-catalog-content\") on node \"master-0\" DevicePath \"\"" Mar 18 08:51:51.752729 master-0 kubenswrapper[7620]: I0318 08:51:51.752716 7620 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/95843eb5-33bc-48e8-afc4-a0bd8c524e24-utilities\") on node \"master-0\" DevicePath \"\"" Mar 18 08:51:51.752729 master-0 kubenswrapper[7620]: I0318 08:51:51.752727 7620 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j55mv\" (UniqueName: \"kubernetes.io/projected/95843eb5-33bc-48e8-afc4-a0bd8c524e24-kube-api-access-j55mv\") on node \"master-0\" DevicePath \"\"" Mar 18 08:51:51.931221 master-0 kubenswrapper[7620]: I0318 08:51:51.931175 7620 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-xfq8l"] Mar 18 08:51:51.945607 master-0 kubenswrapper[7620]: I0318 08:51:51.945551 7620 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-xfq8l"] Mar 18 08:51:52.198367 master-0 kubenswrapper[7620]: I0318 08:51:52.198238 7620 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-vgplg"] Mar 18 08:51:52.198579 master-0 kubenswrapper[7620]: I0318 08:51:52.198491 7620 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-vgplg" podUID="d72cacbe-f050-4b00-b20d-6e3c800db5e3" containerName="registry-server" containerID="cri-o://963e77396932fd5dde20fd2229477fc2520d4deed14e4daee66a481b11a60005" gracePeriod=2 Mar 18 08:51:52.229694 master-0 kubenswrapper[7620]: I0318 08:51:52.229642 7620 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="833eeb49-a463-432a-a684-a27c66ecae7d" path="/var/lib/kubelet/pods/833eeb49-a463-432a-a684-a27c66ecae7d/volumes" Mar 18 08:51:52.230291 master-0 kubenswrapper[7620]: I0318 08:51:52.230265 7620 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="95843eb5-33bc-48e8-afc4-a0bd8c524e24" path="/var/lib/kubelet/pods/95843eb5-33bc-48e8-afc4-a0bd8c524e24/volumes" Mar 18 08:51:52.618524 master-0 kubenswrapper[7620]: I0318 08:51:52.617320 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vgplg" event={"ID":"d72cacbe-f050-4b00-b20d-6e3c800db5e3","Type":"ContainerDied","Data":"963e77396932fd5dde20fd2229477fc2520d4deed14e4daee66a481b11a60005"} Mar 18 08:51:52.618524 master-0 kubenswrapper[7620]: I0318 08:51:52.617325 7620 generic.go:334] "Generic (PLEG): container finished" podID="d72cacbe-f050-4b00-b20d-6e3c800db5e3" containerID="963e77396932fd5dde20fd2229477fc2520d4deed14e4daee66a481b11a60005" exitCode=0 Mar 18 08:51:52.618524 master-0 kubenswrapper[7620]: I0318 08:51:52.617541 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vgplg" event={"ID":"d72cacbe-f050-4b00-b20d-6e3c800db5e3","Type":"ContainerDied","Data":"9da72a97eb2b299f530fe3886d783b1eae63e297264297b40194bd3eb47a397a"} Mar 18 08:51:52.618524 master-0 kubenswrapper[7620]: I0318 08:51:52.617578 7620 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9da72a97eb2b299f530fe3886d783b1eae63e297264297b40194bd3eb47a397a" Mar 18 08:51:52.626952 master-0 kubenswrapper[7620]: I0318 08:51:52.623415 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-cf6qn" event={"ID":"97730ec2-e6f1-4f8c-b85c-3c10623d06ce","Type":"ContainerStarted","Data":"d08bcd5ab41d5210bbeb4b9290769fd0f8272d396522ca687dfe02a080e632f3"} Mar 18 08:51:52.626952 master-0 kubenswrapper[7620]: I0318 08:51:52.623449 7620 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-cf6qn" event={"ID":"97730ec2-e6f1-4f8c-b85c-3c10623d06ce","Type":"ContainerStarted","Data":"a6965c370aee0562c7dab05dd0bba9899ece7a915ae59774856223463957b6b4"} Mar 18 08:51:52.626952 master-0 kubenswrapper[7620]: I0318 08:51:52.626786 7620 generic.go:334] "Generic (PLEG): container finished" podID="34f2ec5e-cb68-415b-a9f2-5b7f10fa9bbe" containerID="a4d8be3eaea0cde18cce25fc2e7762bfa7a4e08c4813605594a3dbbfbfb560f1" exitCode=0 Mar 18 08:51:52.626952 master-0 kubenswrapper[7620]: I0318 08:51:52.626913 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-89ccd998f-bcwsv" event={"ID":"34f2ec5e-cb68-415b-a9f2-5b7f10fa9bbe","Type":"ContainerDied","Data":"a4d8be3eaea0cde18cce25fc2e7762bfa7a4e08c4813605594a3dbbfbfb560f1"} Mar 18 08:51:52.629946 master-0 kubenswrapper[7620]: I0318 08:51:52.627528 7620 scope.go:117] "RemoveContainer" containerID="a4d8be3eaea0cde18cce25fc2e7762bfa7a4e08c4813605594a3dbbfbfb560f1" Mar 18 08:51:52.633695 master-0 kubenswrapper[7620]: I0318 08:51:52.633659 7620 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-66b84d69b-7h94d_94e9e4fc-bfaa-4ba5-ad6d-fe76b91932e9/ingress-operator/0.log" Mar 18 08:51:52.633787 master-0 kubenswrapper[7620]: I0318 08:51:52.633737 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-66b84d69b-7h94d" event={"ID":"94e9e4fc-bfaa-4ba5-ad6d-fe76b91932e9","Type":"ContainerStarted","Data":"458f7a943f236b1eac07ca69624114d084866d6f79f7c12e67735ee4e517390d"} Mar 18 08:51:52.638236 master-0 kubenswrapper[7620]: I0318 08:51:52.636223 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-85f7577d78-swcvh" 
event={"ID":"18921497-d8ed-42d8-bf3c-a027566ebe85","Type":"ContainerStarted","Data":"0b8534833193002196f997614beed09d32634424f14ca0328c753d9b37719df1"} Mar 18 08:51:52.638236 master-0 kubenswrapper[7620]: I0318 08:51:52.636262 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-85f7577d78-swcvh" event={"ID":"18921497-d8ed-42d8-bf3c-a027566ebe85","Type":"ContainerStarted","Data":"639cbc537b85215a62a989f260b86d6b406adf9f0ec8c7079c5316ff0d4e59e0"} Mar 18 08:51:52.662767 master-0 kubenswrapper[7620]: I0318 08:51:52.662721 7620 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vgplg" Mar 18 08:51:52.679077 master-0 kubenswrapper[7620]: I0318 08:51:52.674766 7620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-cf6qn" podStartSLOduration=111.798649856 podStartE2EDuration="1m54.67473994s" podCreationTimestamp="2026-03-18 08:49:58 +0000 UTC" firstStartedPulling="2026-03-18 08:51:48.582088433 +0000 UTC m=+172.576870185" lastFinishedPulling="2026-03-18 08:51:51.458178517 +0000 UTC m=+175.452960269" observedRunningTime="2026-03-18 08:51:52.646413818 +0000 UTC m=+176.641195570" watchObservedRunningTime="2026-03-18 08:51:52.67473994 +0000 UTC m=+176.669521712" Mar 18 08:51:52.717357 master-0 kubenswrapper[7620]: I0318 08:51:52.716150 7620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-85f7577d78-swcvh" podStartSLOduration=111.916631455 podStartE2EDuration="1m54.716123657s" podCreationTimestamp="2026-03-18 08:49:58 +0000 UTC" firstStartedPulling="2026-03-18 08:51:48.659101337 +0000 UTC m=+172.653883089" lastFinishedPulling="2026-03-18 08:51:51.458593539 +0000 UTC m=+175.453375291" observedRunningTime="2026-03-18 08:51:52.715937452 +0000 UTC m=+176.710719214" 
watchObservedRunningTime="2026-03-18 08:51:52.716123657 +0000 UTC m=+176.710905429" Mar 18 08:51:52.767943 master-0 kubenswrapper[7620]: I0318 08:51:52.767910 7620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d72cacbe-f050-4b00-b20d-6e3c800db5e3-catalog-content\") pod \"d72cacbe-f050-4b00-b20d-6e3c800db5e3\" (UID: \"d72cacbe-f050-4b00-b20d-6e3c800db5e3\") " Mar 18 08:51:52.768203 master-0 kubenswrapper[7620]: I0318 08:51:52.768185 7620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pl6qt\" (UniqueName: \"kubernetes.io/projected/d72cacbe-f050-4b00-b20d-6e3c800db5e3-kube-api-access-pl6qt\") pod \"d72cacbe-f050-4b00-b20d-6e3c800db5e3\" (UID: \"d72cacbe-f050-4b00-b20d-6e3c800db5e3\") " Mar 18 08:51:52.768310 master-0 kubenswrapper[7620]: I0318 08:51:52.768293 7620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d72cacbe-f050-4b00-b20d-6e3c800db5e3-utilities\") pod \"d72cacbe-f050-4b00-b20d-6e3c800db5e3\" (UID: \"d72cacbe-f050-4b00-b20d-6e3c800db5e3\") " Mar 18 08:51:52.770167 master-0 kubenswrapper[7620]: I0318 08:51:52.770121 7620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d72cacbe-f050-4b00-b20d-6e3c800db5e3-utilities" (OuterVolumeSpecName: "utilities") pod "d72cacbe-f050-4b00-b20d-6e3c800db5e3" (UID: "d72cacbe-f050-4b00-b20d-6e3c800db5e3"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 18 08:51:52.771751 master-0 kubenswrapper[7620]: I0318 08:51:52.771691 7620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d72cacbe-f050-4b00-b20d-6e3c800db5e3-kube-api-access-pl6qt" (OuterVolumeSpecName: "kube-api-access-pl6qt") pod "d72cacbe-f050-4b00-b20d-6e3c800db5e3" (UID: "d72cacbe-f050-4b00-b20d-6e3c800db5e3"). InnerVolumeSpecName "kube-api-access-pl6qt". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 08:51:52.833440 master-0 kubenswrapper[7620]: I0318 08:51:52.833382 7620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d72cacbe-f050-4b00-b20d-6e3c800db5e3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d72cacbe-f050-4b00-b20d-6e3c800db5e3" (UID: "d72cacbe-f050-4b00-b20d-6e3c800db5e3"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 18 08:51:52.869829 master-0 kubenswrapper[7620]: I0318 08:51:52.869711 7620 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d72cacbe-f050-4b00-b20d-6e3c800db5e3-catalog-content\") on node \"master-0\" DevicePath \"\"" Mar 18 08:51:52.869829 master-0 kubenswrapper[7620]: I0318 08:51:52.869767 7620 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pl6qt\" (UniqueName: \"kubernetes.io/projected/d72cacbe-f050-4b00-b20d-6e3c800db5e3-kube-api-access-pl6qt\") on node \"master-0\" DevicePath \"\"" Mar 18 08:51:52.869829 master-0 kubenswrapper[7620]: I0318 08:51:52.869785 7620 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d72cacbe-f050-4b00-b20d-6e3c800db5e3-utilities\") on node \"master-0\" DevicePath \"\"" Mar 18 08:51:52.944080 master-0 kubenswrapper[7620]: I0318 08:51:52.944032 7620 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-scheduler_installer-3-master-0_c6fb9336-3f19-4220-93ee-a5a61e26340b/installer/0.log" Mar 18 08:51:52.944239 master-0 kubenswrapper[7620]: I0318 08:51:52.944112 7620 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-3-master-0" Mar 18 08:51:53.072655 master-0 kubenswrapper[7620]: I0318 08:51:53.072230 7620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c6fb9336-3f19-4220-93ee-a5a61e26340b-kube-api-access\") pod \"c6fb9336-3f19-4220-93ee-a5a61e26340b\" (UID: \"c6fb9336-3f19-4220-93ee-a5a61e26340b\") " Mar 18 08:51:53.072655 master-0 kubenswrapper[7620]: I0318 08:51:53.072351 7620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c6fb9336-3f19-4220-93ee-a5a61e26340b-kubelet-dir\") pod \"c6fb9336-3f19-4220-93ee-a5a61e26340b\" (UID: \"c6fb9336-3f19-4220-93ee-a5a61e26340b\") " Mar 18 08:51:53.072655 master-0 kubenswrapper[7620]: I0318 08:51:53.072389 7620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/c6fb9336-3f19-4220-93ee-a5a61e26340b-var-lock\") pod \"c6fb9336-3f19-4220-93ee-a5a61e26340b\" (UID: \"c6fb9336-3f19-4220-93ee-a5a61e26340b\") " Mar 18 08:51:53.072655 master-0 kubenswrapper[7620]: I0318 08:51:53.072612 7620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c6fb9336-3f19-4220-93ee-a5a61e26340b-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "c6fb9336-3f19-4220-93ee-a5a61e26340b" (UID: "c6fb9336-3f19-4220-93ee-a5a61e26340b"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 08:51:53.072974 master-0 kubenswrapper[7620]: I0318 08:51:53.072725 7620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c6fb9336-3f19-4220-93ee-a5a61e26340b-var-lock" (OuterVolumeSpecName: "var-lock") pod "c6fb9336-3f19-4220-93ee-a5a61e26340b" (UID: "c6fb9336-3f19-4220-93ee-a5a61e26340b"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 08:51:53.075189 master-0 kubenswrapper[7620]: I0318 08:51:53.075139 7620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c6fb9336-3f19-4220-93ee-a5a61e26340b-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "c6fb9336-3f19-4220-93ee-a5a61e26340b" (UID: "c6fb9336-3f19-4220-93ee-a5a61e26340b"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 08:51:53.174284 master-0 kubenswrapper[7620]: I0318 08:51:53.174144 7620 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c6fb9336-3f19-4220-93ee-a5a61e26340b-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Mar 18 08:51:53.174284 master-0 kubenswrapper[7620]: I0318 08:51:53.174201 7620 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/c6fb9336-3f19-4220-93ee-a5a61e26340b-var-lock\") on node \"master-0\" DevicePath \"\"" Mar 18 08:51:53.174284 master-0 kubenswrapper[7620]: I0318 08:51:53.174220 7620 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c6fb9336-3f19-4220-93ee-a5a61e26340b-kube-api-access\") on node \"master-0\" DevicePath \"\"" Mar 18 08:51:53.609542 master-0 kubenswrapper[7620]: I0318 08:51:53.606117 7620 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-ffks8"] Mar 18 08:51:53.609542 
master-0 kubenswrapper[7620]: I0318 08:51:53.606660 7620 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-ffks8" podUID="d5c7ffb1-a1ab-4ca1-bdae-bcb09a759591" containerName="registry-server" containerID="cri-o://fb4674c30f19a2be761d144438bbee86e4760b5b15fc1581dfb44fe7af15ded2" gracePeriod=2 Mar 18 08:51:53.681880 master-0 kubenswrapper[7620]: I0318 08:51:53.681296 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-89ccd998f-bcwsv" event={"ID":"34f2ec5e-cb68-415b-a9f2-5b7f10fa9bbe","Type":"ContainerStarted","Data":"75d1410d48296cb4f2446dcf35dcfdb58ad3083bc984cecb00db26ae1fc3d758"} Mar 18 08:51:53.682459 master-0 kubenswrapper[7620]: I0318 08:51:53.682297 7620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-89ccd998f-bcwsv" Mar 18 08:51:53.693894 master-0 kubenswrapper[7620]: I0318 08:51:53.685491 7620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-89ccd998f-bcwsv" Mar 18 08:51:53.693894 master-0 kubenswrapper[7620]: I0318 08:51:53.688972 7620 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-3-master-0_c6fb9336-3f19-4220-93ee-a5a61e26340b/installer/0.log" Mar 18 08:51:53.693894 master-0 kubenswrapper[7620]: I0318 08:51:53.689472 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-3-master-0" event={"ID":"c6fb9336-3f19-4220-93ee-a5a61e26340b","Type":"ContainerDied","Data":"1b597f433a55dbc7ccb00fbe5afce037857951640d297dcf4696ad9ed735151f"} Mar 18 08:51:53.693894 master-0 kubenswrapper[7620]: I0318 08:51:53.689552 7620 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1b597f433a55dbc7ccb00fbe5afce037857951640d297dcf4696ad9ed735151f" Mar 18 08:51:53.693894 master-0 kubenswrapper[7620]: I0318 
08:51:53.689709 7620 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-3-master-0" Mar 18 08:51:53.693894 master-0 kubenswrapper[7620]: I0318 08:51:53.690103 7620 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vgplg" Mar 18 08:51:53.784393 master-0 kubenswrapper[7620]: I0318 08:51:53.784333 7620 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-vgplg"] Mar 18 08:51:53.791729 master-0 kubenswrapper[7620]: I0318 08:51:53.791677 7620 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-vgplg"] Mar 18 08:51:54.078965 master-0 kubenswrapper[7620]: I0318 08:51:54.078900 7620 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ffks8" Mar 18 08:51:54.087266 master-0 kubenswrapper[7620]: I0318 08:51:54.087219 7620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8qpdl\" (UniqueName: \"kubernetes.io/projected/d5c7ffb1-a1ab-4ca1-bdae-bcb09a759591-kube-api-access-8qpdl\") pod \"d5c7ffb1-a1ab-4ca1-bdae-bcb09a759591\" (UID: \"d5c7ffb1-a1ab-4ca1-bdae-bcb09a759591\") " Mar 18 08:51:54.087266 master-0 kubenswrapper[7620]: I0318 08:51:54.087269 7620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d5c7ffb1-a1ab-4ca1-bdae-bcb09a759591-utilities\") pod \"d5c7ffb1-a1ab-4ca1-bdae-bcb09a759591\" (UID: \"d5c7ffb1-a1ab-4ca1-bdae-bcb09a759591\") " Mar 18 08:51:54.087485 master-0 kubenswrapper[7620]: I0318 08:51:54.087385 7620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d5c7ffb1-a1ab-4ca1-bdae-bcb09a759591-catalog-content\") pod \"d5c7ffb1-a1ab-4ca1-bdae-bcb09a759591\" (UID: 
\"d5c7ffb1-a1ab-4ca1-bdae-bcb09a759591\") " Mar 18 08:51:54.088234 master-0 kubenswrapper[7620]: I0318 08:51:54.088193 7620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d5c7ffb1-a1ab-4ca1-bdae-bcb09a759591-utilities" (OuterVolumeSpecName: "utilities") pod "d5c7ffb1-a1ab-4ca1-bdae-bcb09a759591" (UID: "d5c7ffb1-a1ab-4ca1-bdae-bcb09a759591"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 18 08:51:54.093476 master-0 kubenswrapper[7620]: I0318 08:51:54.093439 7620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d5c7ffb1-a1ab-4ca1-bdae-bcb09a759591-kube-api-access-8qpdl" (OuterVolumeSpecName: "kube-api-access-8qpdl") pod "d5c7ffb1-a1ab-4ca1-bdae-bcb09a759591" (UID: "d5c7ffb1-a1ab-4ca1-bdae-bcb09a759591"). InnerVolumeSpecName "kube-api-access-8qpdl". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 08:51:54.188612 master-0 kubenswrapper[7620]: I0318 08:51:54.188518 7620 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8qpdl\" (UniqueName: \"kubernetes.io/projected/d5c7ffb1-a1ab-4ca1-bdae-bcb09a759591-kube-api-access-8qpdl\") on node \"master-0\" DevicePath \"\"" Mar 18 08:51:54.188612 master-0 kubenswrapper[7620]: I0318 08:51:54.188555 7620 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d5c7ffb1-a1ab-4ca1-bdae-bcb09a759591-utilities\") on node \"master-0\" DevicePath \"\"" Mar 18 08:51:54.235899 master-0 kubenswrapper[7620]: I0318 08:51:54.234452 7620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d5c7ffb1-a1ab-4ca1-bdae-bcb09a759591-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d5c7ffb1-a1ab-4ca1-bdae-bcb09a759591" (UID: "d5c7ffb1-a1ab-4ca1-bdae-bcb09a759591"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 18 08:51:54.235899 master-0 kubenswrapper[7620]: I0318 08:51:54.235353 7620 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d72cacbe-f050-4b00-b20d-6e3c800db5e3" path="/var/lib/kubelet/pods/d72cacbe-f050-4b00-b20d-6e3c800db5e3/volumes" Mar 18 08:51:54.290102 master-0 kubenswrapper[7620]: I0318 08:51:54.290029 7620 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d5c7ffb1-a1ab-4ca1-bdae-bcb09a759591-catalog-content\") on node \"master-0\" DevicePath \"\"" Mar 18 08:51:54.699804 master-0 kubenswrapper[7620]: I0318 08:51:54.699245 7620 generic.go:334] "Generic (PLEG): container finished" podID="d5c7ffb1-a1ab-4ca1-bdae-bcb09a759591" containerID="fb4674c30f19a2be761d144438bbee86e4760b5b15fc1581dfb44fe7af15ded2" exitCode=0 Mar 18 08:51:54.699804 master-0 kubenswrapper[7620]: I0318 08:51:54.699307 7620 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-ffks8" Mar 18 08:51:54.699804 master-0 kubenswrapper[7620]: I0318 08:51:54.699339 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ffks8" event={"ID":"d5c7ffb1-a1ab-4ca1-bdae-bcb09a759591","Type":"ContainerDied","Data":"fb4674c30f19a2be761d144438bbee86e4760b5b15fc1581dfb44fe7af15ded2"} Mar 18 08:51:54.699804 master-0 kubenswrapper[7620]: I0318 08:51:54.699412 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ffks8" event={"ID":"d5c7ffb1-a1ab-4ca1-bdae-bcb09a759591","Type":"ContainerDied","Data":"5b0a9cb3c6ea40ca8b169ff889d974944c80451d88a25b4f11d65fd85e8f1627"} Mar 18 08:51:54.699804 master-0 kubenswrapper[7620]: I0318 08:51:54.699441 7620 scope.go:117] "RemoveContainer" containerID="fb4674c30f19a2be761d144438bbee86e4760b5b15fc1581dfb44fe7af15ded2" Mar 18 08:51:54.719133 master-0 kubenswrapper[7620]: I0318 08:51:54.719063 7620 scope.go:117] "RemoveContainer" containerID="b56ca147a17d861c1c85d3e9046aa71c9eed9454c377ed80b925401f3d7b2240" Mar 18 08:51:54.730347 master-0 kubenswrapper[7620]: I0318 08:51:54.730235 7620 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-ffks8"] Mar 18 08:51:54.733570 master-0 kubenswrapper[7620]: I0318 08:51:54.733519 7620 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-ffks8"] Mar 18 08:51:54.742134 master-0 kubenswrapper[7620]: I0318 08:51:54.742094 7620 scope.go:117] "RemoveContainer" containerID="8f73c66e3d2e30883b323572b51c60a5caa86244687bef040fad895b7640bad7" Mar 18 08:51:54.764989 master-0 kubenswrapper[7620]: I0318 08:51:54.764725 7620 scope.go:117] "RemoveContainer" containerID="fb4674c30f19a2be761d144438bbee86e4760b5b15fc1581dfb44fe7af15ded2" Mar 18 08:51:54.765465 master-0 kubenswrapper[7620]: E0318 08:51:54.765432 7620 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"fb4674c30f19a2be761d144438bbee86e4760b5b15fc1581dfb44fe7af15ded2\": container with ID starting with fb4674c30f19a2be761d144438bbee86e4760b5b15fc1581dfb44fe7af15ded2 not found: ID does not exist" containerID="fb4674c30f19a2be761d144438bbee86e4760b5b15fc1581dfb44fe7af15ded2" Mar 18 08:51:54.765528 master-0 kubenswrapper[7620]: I0318 08:51:54.765475 7620 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fb4674c30f19a2be761d144438bbee86e4760b5b15fc1581dfb44fe7af15ded2"} err="failed to get container status \"fb4674c30f19a2be761d144438bbee86e4760b5b15fc1581dfb44fe7af15ded2\": rpc error: code = NotFound desc = could not find container \"fb4674c30f19a2be761d144438bbee86e4760b5b15fc1581dfb44fe7af15ded2\": container with ID starting with fb4674c30f19a2be761d144438bbee86e4760b5b15fc1581dfb44fe7af15ded2 not found: ID does not exist" Mar 18 08:51:54.765528 master-0 kubenswrapper[7620]: I0318 08:51:54.765503 7620 scope.go:117] "RemoveContainer" containerID="b56ca147a17d861c1c85d3e9046aa71c9eed9454c377ed80b925401f3d7b2240" Mar 18 08:51:54.765924 master-0 kubenswrapper[7620]: E0318 08:51:54.765898 7620 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b56ca147a17d861c1c85d3e9046aa71c9eed9454c377ed80b925401f3d7b2240\": container with ID starting with b56ca147a17d861c1c85d3e9046aa71c9eed9454c377ed80b925401f3d7b2240 not found: ID does not exist" containerID="b56ca147a17d861c1c85d3e9046aa71c9eed9454c377ed80b925401f3d7b2240" Mar 18 08:51:54.766044 master-0 kubenswrapper[7620]: I0318 08:51:54.766018 7620 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b56ca147a17d861c1c85d3e9046aa71c9eed9454c377ed80b925401f3d7b2240"} err="failed to get container status \"b56ca147a17d861c1c85d3e9046aa71c9eed9454c377ed80b925401f3d7b2240\": rpc error: code = NotFound 
desc = could not find container \"b56ca147a17d861c1c85d3e9046aa71c9eed9454c377ed80b925401f3d7b2240\": container with ID starting with b56ca147a17d861c1c85d3e9046aa71c9eed9454c377ed80b925401f3d7b2240 not found: ID does not exist" Mar 18 08:51:54.766125 master-0 kubenswrapper[7620]: I0318 08:51:54.766112 7620 scope.go:117] "RemoveContainer" containerID="8f73c66e3d2e30883b323572b51c60a5caa86244687bef040fad895b7640bad7" Mar 18 08:51:54.766738 master-0 kubenswrapper[7620]: E0318 08:51:54.766698 7620 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8f73c66e3d2e30883b323572b51c60a5caa86244687bef040fad895b7640bad7\": container with ID starting with 8f73c66e3d2e30883b323572b51c60a5caa86244687bef040fad895b7640bad7 not found: ID does not exist" containerID="8f73c66e3d2e30883b323572b51c60a5caa86244687bef040fad895b7640bad7" Mar 18 08:51:54.766815 master-0 kubenswrapper[7620]: I0318 08:51:54.766754 7620 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8f73c66e3d2e30883b323572b51c60a5caa86244687bef040fad895b7640bad7"} err="failed to get container status \"8f73c66e3d2e30883b323572b51c60a5caa86244687bef040fad895b7640bad7\": rpc error: code = NotFound desc = could not find container \"8f73c66e3d2e30883b323572b51c60a5caa86244687bef040fad895b7640bad7\": container with ID starting with 8f73c66e3d2e30883b323572b51c60a5caa86244687bef040fad895b7640bad7 not found: ID does not exist" Mar 18 08:51:55.706913 master-0 kubenswrapper[7620]: I0318 08:51:55.706846 7620 generic.go:334] "Generic (PLEG): container finished" podID="edc7f629-4288-443b-aa8e-78bc6a09c848" containerID="2816dd0a3b2639d48151bf75dfb86759dbb1c466295c4e9c83f4f4ac853eb6f8" exitCode=0 Mar 18 08:51:55.707661 master-0 kubenswrapper[7620]: I0318 08:51:55.706918 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-bwqt7" 
event={"ID":"edc7f629-4288-443b-aa8e-78bc6a09c848","Type":"ContainerDied","Data":"2816dd0a3b2639d48151bf75dfb86759dbb1c466295c4e9c83f4f4ac853eb6f8"} Mar 18 08:51:55.708379 master-0 kubenswrapper[7620]: I0318 08:51:55.708356 7620 scope.go:117] "RemoveContainer" containerID="2816dd0a3b2639d48151bf75dfb86759dbb1c466295c4e9c83f4f4ac853eb6f8" Mar 18 08:51:56.233155 master-0 kubenswrapper[7620]: I0318 08:51:56.233115 7620 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d5c7ffb1-a1ab-4ca1-bdae-bcb09a759591" path="/var/lib/kubelet/pods/d5c7ffb1-a1ab-4ca1-bdae-bcb09a759591/volumes" Mar 18 08:51:56.726064 master-0 kubenswrapper[7620]: I0318 08:51:56.725969 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-bwqt7" event={"ID":"edc7f629-4288-443b-aa8e-78bc6a09c848","Type":"ContainerStarted","Data":"4baf438f84441de9a2ddd79dfbe1c9dc6b19f232a4b6153cb8db1151df46918a"} Mar 18 08:51:59.224254 master-0 kubenswrapper[7620]: I0318 08:51:59.224174 7620 scope.go:117] "RemoveContainer" containerID="8a367d5d8cce98fe512b83ef657e1c2b1d37fead9ef4db0e545e39ebb8df8515" Mar 18 08:51:59.749767 master-0 kubenswrapper[7620]: I0318 08:51:59.749713 7620 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-config-operator_openshift-config-operator-95bf4f4d-7kfrh_573d3a02-e395-4816-963a-cd614ef53f75/openshift-config-operator/3.log" Mar 18 08:51:59.751082 master-0 kubenswrapper[7620]: I0318 08:51:59.751012 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-7kfrh" event={"ID":"573d3a02-e395-4816-963a-cd614ef53f75","Type":"ContainerStarted","Data":"fe131960deab61fc0118ae818774bbfd7d16124ffa51515fcf0339e5714857bc"} Mar 18 08:51:59.751330 master-0 kubenswrapper[7620]: I0318 08:51:59.751283 7620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-7kfrh" Mar 18 
08:52:03.777307 master-0 kubenswrapper[7620]: I0318 08:52:03.777239 7620 generic.go:334] "Generic (PLEG): container finished" podID="4cc14de2-59e8-49e1-9eeb-f87c5e9d8a75" containerID="f95c3ae9a15c386971b5456139d5edf2668059a7f470b16505d0edd6a91106f8" exitCode=0 Mar 18 08:52:03.777307 master-0 kubenswrapper[7620]: I0318 08:52:03.777293 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6448dc88d8-cnd9q" event={"ID":"4cc14de2-59e8-49e1-9eeb-f87c5e9d8a75","Type":"ContainerDied","Data":"f95c3ae9a15c386971b5456139d5edf2668059a7f470b16505d0edd6a91106f8"} Mar 18 08:52:03.777974 master-0 kubenswrapper[7620]: I0318 08:52:03.777825 7620 scope.go:117] "RemoveContainer" containerID="f95c3ae9a15c386971b5456139d5edf2668059a7f470b16505d0edd6a91106f8" Mar 18 08:52:04.791457 master-0 kubenswrapper[7620]: I0318 08:52:04.791367 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6448dc88d8-cnd9q" event={"ID":"4cc14de2-59e8-49e1-9eeb-f87c5e9d8a75","Type":"ContainerStarted","Data":"c1000328fdb806ec77d49cec50c1824461d4c39b599af7554159ee64748ea882"} Mar 18 08:52:04.792255 master-0 kubenswrapper[7620]: I0318 08:52:04.792187 7620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-6448dc88d8-cnd9q" Mar 18 08:52:04.799999 master-0 kubenswrapper[7620]: I0318 08:52:04.799944 7620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-6448dc88d8-cnd9q" Mar 18 08:52:04.953385 master-0 kubenswrapper[7620]: I0318 08:52:04.953303 7620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-7kfrh" Mar 18 08:52:06.809150 master-0 kubenswrapper[7620]: I0318 08:52:06.809074 7620 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-operator-controller_operator-controller-controller-manager-57777556ff-chjqr_33a5c021-23c3-4a97-b5f3-77fd6dcba1ab/manager/0.log" Mar 18 08:52:06.809150 master-0 kubenswrapper[7620]: I0318 08:52:06.809118 7620 generic.go:334] "Generic (PLEG): container finished" podID="33a5c021-23c3-4a97-b5f3-77fd6dcba1ab" containerID="90143bd188df252a12ebaece10ff43bd805ca65e0b3a851506a5ecef442477c4" exitCode=1 Mar 18 08:52:06.810094 master-0 kubenswrapper[7620]: I0318 08:52:06.809675 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-chjqr" event={"ID":"33a5c021-23c3-4a97-b5f3-77fd6dcba1ab","Type":"ContainerDied","Data":"90143bd188df252a12ebaece10ff43bd805ca65e0b3a851506a5ecef442477c4"} Mar 18 08:52:06.810094 master-0 kubenswrapper[7620]: I0318 08:52:06.809948 7620 scope.go:117] "RemoveContainer" containerID="90143bd188df252a12ebaece10ff43bd805ca65e0b3a851506a5ecef442477c4" Mar 18 08:52:07.818801 master-0 kubenswrapper[7620]: I0318 08:52:07.818760 7620 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-controller_operator-controller-controller-manager-57777556ff-chjqr_33a5c021-23c3-4a97-b5f3-77fd6dcba1ab/manager/0.log" Mar 18 08:52:07.819377 master-0 kubenswrapper[7620]: I0318 08:52:07.818917 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-chjqr" event={"ID":"33a5c021-23c3-4a97-b5f3-77fd6dcba1ab","Type":"ContainerStarted","Data":"93249f7db2dc0c3a5b0fe1351b49e56d1937b973c4c8c817cae063e4b26914a3"} Mar 18 08:52:07.819377 master-0 kubenswrapper[7620]: I0318 08:52:07.819309 7620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-chjqr" Mar 18 08:52:08.828320 master-0 kubenswrapper[7620]: I0318 08:52:08.828211 7620 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-catalogd_catalogd-controller-manager-6864dc98f7-phjp8_43fbd379-dd1e-4287-bd76-fd3ec51cde43/manager/0.log"
Mar 18 08:52:08.829161 master-0 kubenswrapper[7620]: I0318 08:52:08.828667 7620 generic.go:334] "Generic (PLEG): container finished" podID="43fbd379-dd1e-4287-bd76-fd3ec51cde43" containerID="c87e465727f96804a91f8100c6f9f30efed35b12da82808b53f4872a9351ab90" exitCode=1
Mar 18 08:52:08.829161 master-0 kubenswrapper[7620]: I0318 08:52:08.828796 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-phjp8" event={"ID":"43fbd379-dd1e-4287-bd76-fd3ec51cde43","Type":"ContainerDied","Data":"c87e465727f96804a91f8100c6f9f30efed35b12da82808b53f4872a9351ab90"}
Mar 18 08:52:08.829738 master-0 kubenswrapper[7620]: I0318 08:52:08.829666 7620 scope.go:117] "RemoveContainer" containerID="c87e465727f96804a91f8100c6f9f30efed35b12da82808b53f4872a9351ab90"
Mar 18 08:52:09.840274 master-0 kubenswrapper[7620]: I0318 08:52:09.840139 7620 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-catalogd_catalogd-controller-manager-6864dc98f7-phjp8_43fbd379-dd1e-4287-bd76-fd3ec51cde43/manager/0.log"
Mar 18 08:52:09.841236 master-0 kubenswrapper[7620]: I0318 08:52:09.841173 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-phjp8" event={"ID":"43fbd379-dd1e-4287-bd76-fd3ec51cde43","Type":"ContainerStarted","Data":"55bd80bc1088dec062336fd1b1d85e5a9546eaf4e05088f85819a8147a8e19b3"}
Mar 18 08:52:09.842310 master-0 kubenswrapper[7620]: I0318 08:52:09.842018 7620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-phjp8"
Mar 18 08:52:14.157072 master-0 kubenswrapper[7620]: I0318 08:52:14.156976 7620 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-78szh"]
Mar 18 08:52:14.158071 master-0 kubenswrapper[7620]: E0318 08:52:14.157415 7620 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="833eeb49-a463-432a-a684-a27c66ecae7d" containerName="extract-content"
Mar 18 08:52:14.158071 master-0 kubenswrapper[7620]: I0318 08:52:14.157446 7620 state_mem.go:107] "Deleted CPUSet assignment" podUID="833eeb49-a463-432a-a684-a27c66ecae7d" containerName="extract-content"
Mar 18 08:52:14.158071 master-0 kubenswrapper[7620]: E0318 08:52:14.157474 7620 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5c7ffb1-a1ab-4ca1-bdae-bcb09a759591" containerName="extract-content"
Mar 18 08:52:14.158071 master-0 kubenswrapper[7620]: I0318 08:52:14.157492 7620 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5c7ffb1-a1ab-4ca1-bdae-bcb09a759591" containerName="extract-content"
Mar 18 08:52:14.158071 master-0 kubenswrapper[7620]: E0318 08:52:14.157510 7620 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95843eb5-33bc-48e8-afc4-a0bd8c524e24" containerName="extract-utilities"
Mar 18 08:52:14.158071 master-0 kubenswrapper[7620]: I0318 08:52:14.157528 7620 state_mem.go:107] "Deleted CPUSet assignment" podUID="95843eb5-33bc-48e8-afc4-a0bd8c524e24" containerName="extract-utilities"
Mar 18 08:52:14.158071 master-0 kubenswrapper[7620]: E0318 08:52:14.157555 7620 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5c7ffb1-a1ab-4ca1-bdae-bcb09a759591" containerName="extract-utilities"
Mar 18 08:52:14.158071 master-0 kubenswrapper[7620]: I0318 08:52:14.157571 7620 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5c7ffb1-a1ab-4ca1-bdae-bcb09a759591" containerName="extract-utilities"
Mar 18 08:52:14.158071 master-0 kubenswrapper[7620]: E0318 08:52:14.157599 7620 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ecff6b2-dbd4-4366-873b-2170d0b76c0f" containerName="installer"
Mar 18 08:52:14.158071 master-0 kubenswrapper[7620]: I0318 08:52:14.157615 7620 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ecff6b2-dbd4-4366-873b-2170d0b76c0f" containerName="installer"
Mar 18 08:52:14.158071 master-0 kubenswrapper[7620]: E0318 08:52:14.157652 7620 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1edfa49b-d0e7-4324-aace-b115b41ddae0" containerName="installer"
Mar 18 08:52:14.158071 master-0 kubenswrapper[7620]: I0318 08:52:14.157670 7620 state_mem.go:107] "Deleted CPUSet assignment" podUID="1edfa49b-d0e7-4324-aace-b115b41ddae0" containerName="installer"
Mar 18 08:52:14.158071 master-0 kubenswrapper[7620]: E0318 08:52:14.157703 7620 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95843eb5-33bc-48e8-afc4-a0bd8c524e24" containerName="extract-content"
Mar 18 08:52:14.158071 master-0 kubenswrapper[7620]: I0318 08:52:14.157719 7620 state_mem.go:107] "Deleted CPUSet assignment" podUID="95843eb5-33bc-48e8-afc4-a0bd8c524e24" containerName="extract-content"
Mar 18 08:52:14.158071 master-0 kubenswrapper[7620]: E0318 08:52:14.157738 7620 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d72cacbe-f050-4b00-b20d-6e3c800db5e3" containerName="registry-server"
Mar 18 08:52:14.158071 master-0 kubenswrapper[7620]: I0318 08:52:14.157753 7620 state_mem.go:107] "Deleted CPUSet assignment" podUID="d72cacbe-f050-4b00-b20d-6e3c800db5e3" containerName="registry-server"
Mar 18 08:52:14.158071 master-0 kubenswrapper[7620]: E0318 08:52:14.157780 7620 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="833eeb49-a463-432a-a684-a27c66ecae7d" containerName="extract-utilities"
Mar 18 08:52:14.158071 master-0 kubenswrapper[7620]: I0318 08:52:14.157796 7620 state_mem.go:107] "Deleted CPUSet assignment" podUID="833eeb49-a463-432a-a684-a27c66ecae7d" containerName="extract-utilities"
Mar 18 08:52:14.158071 master-0 kubenswrapper[7620]: E0318 08:52:14.157816 7620 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="833eeb49-a463-432a-a684-a27c66ecae7d" containerName="registry-server"
Mar 18 08:52:14.158071 master-0 kubenswrapper[7620]: I0318 08:52:14.157832 7620 state_mem.go:107] "Deleted CPUSet assignment" podUID="833eeb49-a463-432a-a684-a27c66ecae7d" containerName="registry-server"
Mar 18 08:52:14.158071 master-0 kubenswrapper[7620]: E0318 08:52:14.157934 7620 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6fb9336-3f19-4220-93ee-a5a61e26340b" containerName="installer"
Mar 18 08:52:14.158071 master-0 kubenswrapper[7620]: I0318 08:52:14.157955 7620 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6fb9336-3f19-4220-93ee-a5a61e26340b" containerName="installer"
Mar 18 08:52:14.158071 master-0 kubenswrapper[7620]: E0318 08:52:14.157974 7620 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d72cacbe-f050-4b00-b20d-6e3c800db5e3" containerName="extract-utilities"
Mar 18 08:52:14.158071 master-0 kubenswrapper[7620]: I0318 08:52:14.157989 7620 state_mem.go:107] "Deleted CPUSet assignment" podUID="d72cacbe-f050-4b00-b20d-6e3c800db5e3" containerName="extract-utilities"
Mar 18 08:52:14.158071 master-0 kubenswrapper[7620]: E0318 08:52:14.158016 7620 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95843eb5-33bc-48e8-afc4-a0bd8c524e24" containerName="registry-server"
Mar 18 08:52:14.158071 master-0 kubenswrapper[7620]: I0318 08:52:14.158031 7620 state_mem.go:107] "Deleted CPUSet assignment" podUID="95843eb5-33bc-48e8-afc4-a0bd8c524e24" containerName="registry-server"
Mar 18 08:52:14.158071 master-0 kubenswrapper[7620]: E0318 08:52:14.158054 7620 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ace4267e-c38d-46dd-9de6-c23339729a8b" containerName="installer"
Mar 18 08:52:14.158071 master-0 kubenswrapper[7620]: I0318 08:52:14.158073 7620 state_mem.go:107] "Deleted CPUSet assignment" podUID="ace4267e-c38d-46dd-9de6-c23339729a8b" containerName="installer"
Mar 18 08:52:14.158071 master-0 kubenswrapper[7620]: E0318 08:52:14.158095 7620 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d72cacbe-f050-4b00-b20d-6e3c800db5e3" containerName="extract-content"
Mar 18 08:52:14.158071 master-0 kubenswrapper[7620]: I0318 08:52:14.158111 7620 state_mem.go:107] "Deleted CPUSet assignment" podUID="d72cacbe-f050-4b00-b20d-6e3c800db5e3" containerName="extract-content"
Mar 18 08:52:14.160634 master-0 kubenswrapper[7620]: E0318 08:52:14.158129 7620 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5c7ffb1-a1ab-4ca1-bdae-bcb09a759591" containerName="registry-server"
Mar 18 08:52:14.160634 master-0 kubenswrapper[7620]: I0318 08:52:14.158145 7620 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5c7ffb1-a1ab-4ca1-bdae-bcb09a759591" containerName="registry-server"
Mar 18 08:52:14.160634 master-0 kubenswrapper[7620]: I0318 08:52:14.158357 7620 memory_manager.go:354] "RemoveStaleState removing state" podUID="1edfa49b-d0e7-4324-aace-b115b41ddae0" containerName="installer"
Mar 18 08:52:14.160634 master-0 kubenswrapper[7620]: I0318 08:52:14.158395 7620 memory_manager.go:354] "RemoveStaleState removing state" podUID="d5c7ffb1-a1ab-4ca1-bdae-bcb09a759591" containerName="registry-server"
Mar 18 08:52:14.160634 master-0 kubenswrapper[7620]: I0318 08:52:14.158423 7620 memory_manager.go:354] "RemoveStaleState removing state" podUID="833eeb49-a463-432a-a684-a27c66ecae7d" containerName="registry-server"
Mar 18 08:52:14.160634 master-0 kubenswrapper[7620]: I0318 08:52:14.158441 7620 memory_manager.go:354] "RemoveStaleState removing state" podUID="c6fb9336-3f19-4220-93ee-a5a61e26340b" containerName="installer"
Mar 18 08:52:14.160634 master-0 kubenswrapper[7620]: I0318 08:52:14.158464 7620 memory_manager.go:354] "RemoveStaleState removing state" podUID="1ecff6b2-dbd4-4366-873b-2170d0b76c0f" containerName="installer"
Mar 18 08:52:14.160634 master-0 kubenswrapper[7620]: I0318 08:52:14.158484 7620 memory_manager.go:354] "RemoveStaleState removing state" podUID="d72cacbe-f050-4b00-b20d-6e3c800db5e3" containerName="registry-server"
Mar 18 08:52:14.160634 master-0 kubenswrapper[7620]: I0318 08:52:14.158508 7620 memory_manager.go:354] "RemoveStaleState removing state" podUID="95843eb5-33bc-48e8-afc4-a0bd8c524e24" containerName="registry-server"
Mar 18 08:52:14.160634 master-0 kubenswrapper[7620]: I0318 08:52:14.158526 7620 memory_manager.go:354] "RemoveStaleState removing state" podUID="ace4267e-c38d-46dd-9de6-c23339729a8b" containerName="installer"
Mar 18 08:52:14.160634 master-0 kubenswrapper[7620]: I0318 08:52:14.160243 7620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-78szh"
Mar 18 08:52:14.161514 master-0 kubenswrapper[7620]: I0318 08:52:14.160846 7620 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-pk9z9"]
Mar 18 08:52:14.165698 master-0 kubenswrapper[7620]: I0318 08:52:14.165565 7620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-pk9z9"
Mar 18 08:52:14.167227 master-0 kubenswrapper[7620]: I0318 08:52:14.167178 7620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-z6dpv"
Mar 18 08:52:14.169638 master-0 kubenswrapper[7620]: I0318 08:52:14.169588 7620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-6tztw"
Mar 18 08:52:14.172166 master-0 kubenswrapper[7620]: I0318 08:52:14.172105 7620 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-vng9w"]
Mar 18 08:52:14.174180 master-0 kubenswrapper[7620]: I0318 08:52:14.174139 7620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vng9w"
Mar 18 08:52:14.183080 master-0 kubenswrapper[7620]: I0318 08:52:14.183018 7620 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-jg58c"]
Mar 18 08:52:14.185009 master-0 kubenswrapper[7620]: I0318 08:52:14.184952 7620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jg58c"
Mar 18 08:52:14.189637 master-0 kubenswrapper[7620]: I0318 08:52:14.189560 7620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-pk9z9"]
Mar 18 08:52:14.198324 master-0 kubenswrapper[7620]: I0318 08:52:14.198246 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/52e32e2d-33ab-4351-ae8a-80acd6077d70-utilities\") pod \"redhat-operators-pk9z9\" (UID: \"52e32e2d-33ab-4351-ae8a-80acd6077d70\") " pod="openshift-marketplace/redhat-operators-pk9z9"
Mar 18 08:52:14.198529 master-0 kubenswrapper[7620]: I0318 08:52:14.198336 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a268d595-18c2-43a2-8ed5-eb64c76c490f-utilities\") pod \"certified-operators-vng9w\" (UID: \"a268d595-18c2-43a2-8ed5-eb64c76c490f\") " pod="openshift-marketplace/certified-operators-vng9w"
Mar 18 08:52:14.198529 master-0 kubenswrapper[7620]: I0318 08:52:14.198381 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hfzdp\" (UniqueName: \"kubernetes.io/projected/a268d595-18c2-43a2-8ed5-eb64c76c490f-kube-api-access-hfzdp\") pod \"certified-operators-vng9w\" (UID: \"a268d595-18c2-43a2-8ed5-eb64c76c490f\") " pod="openshift-marketplace/certified-operators-vng9w"
Mar 18 08:52:14.198529 master-0 kubenswrapper[7620]: I0318 08:52:14.198477 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dm6nf\" (UniqueName: \"kubernetes.io/projected/52e32e2d-33ab-4351-ae8a-80acd6077d70-kube-api-access-dm6nf\") pod \"redhat-operators-pk9z9\" (UID: \"52e32e2d-33ab-4351-ae8a-80acd6077d70\") " pod="openshift-marketplace/redhat-operators-pk9z9"
Mar 18 08:52:14.198738 master-0 kubenswrapper[7620]: I0318 08:52:14.198546 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/92542f7c-182b-45a8-bbf3-00e99ba7acee-catalog-content\") pod \"community-operators-78szh\" (UID: \"92542f7c-182b-45a8-bbf3-00e99ba7acee\") " pod="openshift-marketplace/community-operators-78szh"
Mar 18 08:52:14.198738 master-0 kubenswrapper[7620]: I0318 08:52:14.198579 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/92542f7c-182b-45a8-bbf3-00e99ba7acee-utilities\") pod \"community-operators-78szh\" (UID: \"92542f7c-182b-45a8-bbf3-00e99ba7acee\") " pod="openshift-marketplace/community-operators-78szh"
Mar 18 08:52:14.198738 master-0 kubenswrapper[7620]: I0318 08:52:14.198659 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/52e32e2d-33ab-4351-ae8a-80acd6077d70-catalog-content\") pod \"redhat-operators-pk9z9\" (UID: \"52e32e2d-33ab-4351-ae8a-80acd6077d70\") " pod="openshift-marketplace/redhat-operators-pk9z9"
Mar 18 08:52:14.198738 master-0 kubenswrapper[7620]: I0318 08:52:14.198686 7620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-74fh5"
Mar 18 08:52:14.199007 master-0 kubenswrapper[7620]: I0318 08:52:14.198741 7620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-bpz6r"
Mar 18 08:52:14.199007 master-0 kubenswrapper[7620]: I0318 08:52:14.198750 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-djq7n\" (UniqueName: \"kubernetes.io/projected/f65344cd-8571-4a78-927f-eec46ec1af51-kube-api-access-djq7n\") pod \"redhat-marketplace-jg58c\" (UID: \"f65344cd-8571-4a78-927f-eec46ec1af51\") " pod="openshift-marketplace/redhat-marketplace-jg58c"
Mar 18 08:52:14.199007 master-0 kubenswrapper[7620]: I0318 08:52:14.198802 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f65344cd-8571-4a78-927f-eec46ec1af51-catalog-content\") pod \"redhat-marketplace-jg58c\" (UID: \"f65344cd-8571-4a78-927f-eec46ec1af51\") " pod="openshift-marketplace/redhat-marketplace-jg58c"
Mar 18 08:52:14.199007 master-0 kubenswrapper[7620]: I0318 08:52:14.198914 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4lv7n\" (UniqueName: \"kubernetes.io/projected/92542f7c-182b-45a8-bbf3-00e99ba7acee-kube-api-access-4lv7n\") pod \"community-operators-78szh\" (UID: \"92542f7c-182b-45a8-bbf3-00e99ba7acee\") " pod="openshift-marketplace/community-operators-78szh"
Mar 18 08:52:14.199007 master-0 kubenswrapper[7620]: I0318 08:52:14.198965 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a268d595-18c2-43a2-8ed5-eb64c76c490f-catalog-content\") pod \"certified-operators-vng9w\" (UID: \"a268d595-18c2-43a2-8ed5-eb64c76c490f\") " pod="openshift-marketplace/certified-operators-vng9w"
Mar 18 08:52:14.199315 master-0 kubenswrapper[7620]: I0318 08:52:14.199040 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f65344cd-8571-4a78-927f-eec46ec1af51-utilities\") pod \"redhat-marketplace-jg58c\" (UID: \"f65344cd-8571-4a78-927f-eec46ec1af51\") " pod="openshift-marketplace/redhat-marketplace-jg58c"
Mar 18 08:52:14.204383 master-0 kubenswrapper[7620]: I0318 08:52:14.204309 7620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-78szh"]
Mar 18 08:52:14.227928 master-0 kubenswrapper[7620]: I0318 08:52:14.223327 7620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vng9w"]
Mar 18 08:52:14.234371 master-0 kubenswrapper[7620]: I0318 08:52:14.234307 7620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-jg58c"]
Mar 18 08:52:14.300234 master-0 kubenswrapper[7620]: I0318 08:52:14.300191 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/92542f7c-182b-45a8-bbf3-00e99ba7acee-catalog-content\") pod \"community-operators-78szh\" (UID: \"92542f7c-182b-45a8-bbf3-00e99ba7acee\") " pod="openshift-marketplace/community-operators-78szh"
Mar 18 08:52:14.300721 master-0 kubenswrapper[7620]: I0318 08:52:14.300690 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/92542f7c-182b-45a8-bbf3-00e99ba7acee-utilities\") pod \"community-operators-78szh\" (UID: \"92542f7c-182b-45a8-bbf3-00e99ba7acee\") " pod="openshift-marketplace/community-operators-78szh"
Mar 18 08:52:14.300919 master-0 kubenswrapper[7620]: I0318 08:52:14.300897 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/52e32e2d-33ab-4351-ae8a-80acd6077d70-catalog-content\") pod \"redhat-operators-pk9z9\" (UID: \"52e32e2d-33ab-4351-ae8a-80acd6077d70\") " pod="openshift-marketplace/redhat-operators-pk9z9"
Mar 18 08:52:14.301060 master-0 kubenswrapper[7620]: I0318 08:52:14.301042 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-djq7n\" (UniqueName: \"kubernetes.io/projected/f65344cd-8571-4a78-927f-eec46ec1af51-kube-api-access-djq7n\") pod \"redhat-marketplace-jg58c\" (UID: \"f65344cd-8571-4a78-927f-eec46ec1af51\") " pod="openshift-marketplace/redhat-marketplace-jg58c"
Mar 18 08:52:14.301173 master-0 kubenswrapper[7620]: I0318 08:52:14.301157 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f65344cd-8571-4a78-927f-eec46ec1af51-catalog-content\") pod \"redhat-marketplace-jg58c\" (UID: \"f65344cd-8571-4a78-927f-eec46ec1af51\") " pod="openshift-marketplace/redhat-marketplace-jg58c"
Mar 18 08:52:14.301300 master-0 kubenswrapper[7620]: I0318 08:52:14.301277 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4lv7n\" (UniqueName: \"kubernetes.io/projected/92542f7c-182b-45a8-bbf3-00e99ba7acee-kube-api-access-4lv7n\") pod \"community-operators-78szh\" (UID: \"92542f7c-182b-45a8-bbf3-00e99ba7acee\") " pod="openshift-marketplace/community-operators-78szh"
Mar 18 08:52:14.301438 master-0 kubenswrapper[7620]: I0318 08:52:14.301393 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/92542f7c-182b-45a8-bbf3-00e99ba7acee-utilities\") pod \"community-operators-78szh\" (UID: \"92542f7c-182b-45a8-bbf3-00e99ba7acee\") " pod="openshift-marketplace/community-operators-78szh"
Mar 18 08:52:14.301438 master-0 kubenswrapper[7620]: I0318 08:52:14.301404 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a268d595-18c2-43a2-8ed5-eb64c76c490f-catalog-content\") pod \"certified-operators-vng9w\" (UID: \"a268d595-18c2-43a2-8ed5-eb64c76c490f\") " pod="openshift-marketplace/certified-operators-vng9w"
Mar 18 08:52:14.301542 master-0 kubenswrapper[7620]: I0318 08:52:14.301354 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/92542f7c-182b-45a8-bbf3-00e99ba7acee-catalog-content\") pod \"community-operators-78szh\" (UID: \"92542f7c-182b-45a8-bbf3-00e99ba7acee\") " pod="openshift-marketplace/community-operators-78szh"
Mar 18 08:52:14.301625 master-0 kubenswrapper[7620]: I0318 08:52:14.301579 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/52e32e2d-33ab-4351-ae8a-80acd6077d70-catalog-content\") pod \"redhat-operators-pk9z9\" (UID: \"52e32e2d-33ab-4351-ae8a-80acd6077d70\") " pod="openshift-marketplace/redhat-operators-pk9z9"
Mar 18 08:52:14.301818 master-0 kubenswrapper[7620]: I0318 08:52:14.301761 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f65344cd-8571-4a78-927f-eec46ec1af51-utilities\") pod \"redhat-marketplace-jg58c\" (UID: \"f65344cd-8571-4a78-927f-eec46ec1af51\") " pod="openshift-marketplace/redhat-marketplace-jg58c"
Mar 18 08:52:14.301938 master-0 kubenswrapper[7620]: I0318 08:52:14.301879 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/52e32e2d-33ab-4351-ae8a-80acd6077d70-utilities\") pod \"redhat-operators-pk9z9\" (UID: \"52e32e2d-33ab-4351-ae8a-80acd6077d70\") " pod="openshift-marketplace/redhat-operators-pk9z9"
Mar 18 08:52:14.301938 master-0 kubenswrapper[7620]: I0318 08:52:14.301930 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a268d595-18c2-43a2-8ed5-eb64c76c490f-utilities\") pod \"certified-operators-vng9w\" (UID: \"a268d595-18c2-43a2-8ed5-eb64c76c490f\") " pod="openshift-marketplace/certified-operators-vng9w"
Mar 18 08:52:14.302034 master-0 kubenswrapper[7620]: I0318 08:52:14.301976 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hfzdp\" (UniqueName: \"kubernetes.io/projected/a268d595-18c2-43a2-8ed5-eb64c76c490f-kube-api-access-hfzdp\") pod \"certified-operators-vng9w\" (UID: \"a268d595-18c2-43a2-8ed5-eb64c76c490f\") " pod="openshift-marketplace/certified-operators-vng9w"
Mar 18 08:52:14.302079 master-0 kubenswrapper[7620]: I0318 08:52:14.302038 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dm6nf\" (UniqueName: \"kubernetes.io/projected/52e32e2d-33ab-4351-ae8a-80acd6077d70-kube-api-access-dm6nf\") pod \"redhat-operators-pk9z9\" (UID: \"52e32e2d-33ab-4351-ae8a-80acd6077d70\") " pod="openshift-marketplace/redhat-operators-pk9z9"
Mar 18 08:52:14.302210 master-0 kubenswrapper[7620]: I0318 08:52:14.302153 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a268d595-18c2-43a2-8ed5-eb64c76c490f-catalog-content\") pod \"certified-operators-vng9w\" (UID: \"a268d595-18c2-43a2-8ed5-eb64c76c490f\") " pod="openshift-marketplace/certified-operators-vng9w"
Mar 18 08:52:14.302681 master-0 kubenswrapper[7620]: I0318 08:52:14.302651 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/52e32e2d-33ab-4351-ae8a-80acd6077d70-utilities\") pod \"redhat-operators-pk9z9\" (UID: \"52e32e2d-33ab-4351-ae8a-80acd6077d70\") " pod="openshift-marketplace/redhat-operators-pk9z9"
Mar 18 08:52:14.304275 master-0 kubenswrapper[7620]: I0318 08:52:14.304219 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a268d595-18c2-43a2-8ed5-eb64c76c490f-utilities\") pod \"certified-operators-vng9w\" (UID: \"a268d595-18c2-43a2-8ed5-eb64c76c490f\") " pod="openshift-marketplace/certified-operators-vng9w"
Mar 18 08:52:14.305216 master-0 kubenswrapper[7620]: I0318 08:52:14.305132 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f65344cd-8571-4a78-927f-eec46ec1af51-catalog-content\") pod \"redhat-marketplace-jg58c\" (UID: \"f65344cd-8571-4a78-927f-eec46ec1af51\") " pod="openshift-marketplace/redhat-marketplace-jg58c"
Mar 18 08:52:14.311103 master-0 kubenswrapper[7620]: I0318 08:52:14.306780 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f65344cd-8571-4a78-927f-eec46ec1af51-utilities\") pod \"redhat-marketplace-jg58c\" (UID: \"f65344cd-8571-4a78-927f-eec46ec1af51\") " pod="openshift-marketplace/redhat-marketplace-jg58c"
Mar 18 08:52:14.331500 master-0 kubenswrapper[7620]: I0318 08:52:14.331452 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4lv7n\" (UniqueName: \"kubernetes.io/projected/92542f7c-182b-45a8-bbf3-00e99ba7acee-kube-api-access-4lv7n\") pod \"community-operators-78szh\" (UID: \"92542f7c-182b-45a8-bbf3-00e99ba7acee\") " pod="openshift-marketplace/community-operators-78szh"
Mar 18 08:52:14.338527 master-0 kubenswrapper[7620]: I0318 08:52:14.335935 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-djq7n\" (UniqueName: \"kubernetes.io/projected/f65344cd-8571-4a78-927f-eec46ec1af51-kube-api-access-djq7n\") pod \"redhat-marketplace-jg58c\" (UID: \"f65344cd-8571-4a78-927f-eec46ec1af51\") " pod="openshift-marketplace/redhat-marketplace-jg58c"
Mar 18 08:52:14.338527 master-0 kubenswrapper[7620]: I0318 08:52:14.337319 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hfzdp\" (UniqueName: \"kubernetes.io/projected/a268d595-18c2-43a2-8ed5-eb64c76c490f-kube-api-access-hfzdp\") pod \"certified-operators-vng9w\" (UID: \"a268d595-18c2-43a2-8ed5-eb64c76c490f\") " pod="openshift-marketplace/certified-operators-vng9w"
Mar 18 08:52:14.338899 master-0 kubenswrapper[7620]: I0318 08:52:14.338865 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dm6nf\" (UniqueName: \"kubernetes.io/projected/52e32e2d-33ab-4351-ae8a-80acd6077d70-kube-api-access-dm6nf\") pod \"redhat-operators-pk9z9\" (UID: \"52e32e2d-33ab-4351-ae8a-80acd6077d70\") " pod="openshift-marketplace/redhat-operators-pk9z9"
Mar 18 08:52:14.500319 master-0 kubenswrapper[7620]: I0318 08:52:14.500252 7620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-78szh"
Mar 18 08:52:14.529166 master-0 kubenswrapper[7620]: I0318 08:52:14.529054 7620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-pk9z9"
Mar 18 08:52:14.569929 master-0 kubenswrapper[7620]: I0318 08:52:14.562807 7620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vng9w"
Mar 18 08:52:14.591203 master-0 kubenswrapper[7620]: I0318 08:52:14.591151 7620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jg58c"
Mar 18 08:52:15.081576 master-0 kubenswrapper[7620]: I0318 08:52:15.080350 7620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-pk9z9"]
Mar 18 08:52:15.162956 master-0 kubenswrapper[7620]: W0318 08:52:15.160662 7620 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod52e32e2d_33ab_4351_ae8a_80acd6077d70.slice/crio-6634f9815dab75e36ab077ad26870775c6b66428323ea93fb4028cdabc9be608 WatchSource:0}: Error finding container 6634f9815dab75e36ab077ad26870775c6b66428323ea93fb4028cdabc9be608: Status 404 returned error can't find the container with id 6634f9815dab75e36ab077ad26870775c6b66428323ea93fb4028cdabc9be608
Mar 18 08:52:15.191142 master-0 kubenswrapper[7620]: I0318 08:52:15.190239 7620 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-cluster-machine-approver/machine-approver-6cb57bb5db-sxx69"]
Mar 18 08:52:15.191142 master-0 kubenswrapper[7620]: I0318 08:52:15.190556 7620 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-cluster-machine-approver/machine-approver-6cb57bb5db-sxx69" podUID="5956076c-a98f-4846-9a68-81c18211a5c8" containerName="kube-rbac-proxy" containerID="cri-o://bde273c22060dd904a8cc445a9f945202e9a96fdccceff3c588852bf6a168d35" gracePeriod=30
Mar 18 08:52:15.191142 master-0 kubenswrapper[7620]: I0318 08:52:15.190957 7620 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-cluster-machine-approver/machine-approver-6cb57bb5db-sxx69" podUID="5956076c-a98f-4846-9a68-81c18211a5c8" containerName="machine-approver-controller" containerID="cri-o://79b18f7a261663b919aad8c218f560610ac74c448adf1ac2c7e8d949870a5978" gracePeriod=30
Mar 18 08:52:15.196068 master-0 kubenswrapper[7620]: I0318 08:52:15.195933 7620 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-nbdls"]
Mar 18 08:52:15.197094 master-0 kubenswrapper[7620]: I0318 08:52:15.197073 7620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-nbdls"
Mar 18 08:52:15.197195 master-0 kubenswrapper[7620]: I0318 08:52:15.197167 7620 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-storage-operator/cluster-storage-operator-7d87854d6-srhr6"]
Mar 18 08:52:15.204143 master-0 kubenswrapper[7620]: I0318 08:52:15.204111 7620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/cluster-storage-operator-7d87854d6-srhr6"
Mar 18 08:52:15.213503 master-0 kubenswrapper[7620]: I0318 08:52:15.211402 7620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"kube-root-ca.crt"
Mar 18 08:52:15.213503 master-0 kubenswrapper[7620]: I0318 08:52:15.211651 7620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-images"
Mar 18 08:52:15.213503 master-0 kubenswrapper[7620]: I0318 08:52:15.211760 7620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"kube-rbac-proxy"
Mar 18 08:52:15.213503 master-0 kubenswrapper[7620]: I0318 08:52:15.211922 7620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-controller-manager-operator"/"cluster-cloud-controller-manager-dockercfg-68m6c"
Mar 18 08:52:15.213503 master-0 kubenswrapper[7620]: I0318 08:52:15.212096 7620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"openshift-service-ca.crt"
Mar 18 08:52:15.213503 master-0 kubenswrapper[7620]: I0318 08:52:15.212188 7620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-storage-operator"/"cluster-storage-operator-serving-cert"
Mar 18 08:52:15.213503 master-0 kubenswrapper[7620]: I0318 08:52:15.212261 7620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-storage-operator"/"cluster-storage-operator-dockercfg-jr5t6"
Mar 18 08:52:15.213503 master-0 kubenswrapper[7620]: I0318 08:52:15.212622 7620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-operator-tls"
Mar 18 08:52:15.239595 master-0 kubenswrapper[7620]: I0318 08:52:15.239554 7620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-storage-operator/cluster-storage-operator-7d87854d6-srhr6"]
Mar 18 08:52:15.315142 master-0 kubenswrapper[7620]: I0318 08:52:15.315107 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/e1b48980-eb4b-4f86-a84e-8f5ebacbc2d4-cloud-controller-manager-operator-tls\") pod \"cluster-cloud-controller-manager-operator-7559f7c68c-nbdls\" (UID: \"e1b48980-eb4b-4f86-a84e-8f5ebacbc2d4\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-nbdls"
Mar 18 08:52:15.315278 master-0 kubenswrapper[7620]: I0318 08:52:15.315263 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e1b48980-eb4b-4f86-a84e-8f5ebacbc2d4-auth-proxy-config\") pod \"cluster-cloud-controller-manager-operator-7559f7c68c-nbdls\" (UID: \"e1b48980-eb4b-4f86-a84e-8f5ebacbc2d4\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-nbdls"
Mar 18 08:52:15.315378 master-0 kubenswrapper[7620]: I0318 08:52:15.315365 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/e1b48980-eb4b-4f86-a84e-8f5ebacbc2d4-images\") pod \"cluster-cloud-controller-manager-operator-7559f7c68c-nbdls\" (UID: \"e1b48980-eb4b-4f86-a84e-8f5ebacbc2d4\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-nbdls"
Mar 18 08:52:15.315470 master-0 kubenswrapper[7620]: I0318 08:52:15.315458 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-storage-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/fc5a9875-d97e-4371-a15d-a1f43b85abce-cluster-storage-operator-serving-cert\") pod \"cluster-storage-operator-7d87854d6-srhr6\" (UID: \"fc5a9875-d97e-4371-a15d-a1f43b85abce\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-7d87854d6-srhr6"
Mar 18 08:52:15.315543 master-0 kubenswrapper[7620]: I0318 08:52:15.315532 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x8qnj\" (UniqueName: \"kubernetes.io/projected/e1b48980-eb4b-4f86-a84e-8f5ebacbc2d4-kube-api-access-x8qnj\") pod \"cluster-cloud-controller-manager-operator-7559f7c68c-nbdls\" (UID: \"e1b48980-eb4b-4f86-a84e-8f5ebacbc2d4\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-nbdls"
Mar 18 08:52:15.315612 master-0 kubenswrapper[7620]: I0318 08:52:15.315598 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/e1b48980-eb4b-4f86-a84e-8f5ebacbc2d4-host-etc-kube\") pod \"cluster-cloud-controller-manager-operator-7559f7c68c-nbdls\" (UID: \"e1b48980-eb4b-4f86-a84e-8f5ebacbc2d4\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-nbdls"
Mar 18 08:52:15.315687 master-0 kubenswrapper[7620]: I0318 08:52:15.315676 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mvlvd\" (UniqueName: \"kubernetes.io/projected/fc5a9875-d97e-4371-a15d-a1f43b85abce-kube-api-access-mvlvd\") pod \"cluster-storage-operator-7d87854d6-srhr6\" (UID: \"fc5a9875-d97e-4371-a15d-a1f43b85abce\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-7d87854d6-srhr6"
Mar 18 08:52:15.320113 master-0 kubenswrapper[7620]: I0318 08:52:15.288434 7620 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-insights/insights-operator-68bf6ff9d6-kv7n5"]
Mar 18 08:52:15.321423 master-0 kubenswrapper[7620]: I0318 08:52:15.321387 7620 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-84d549f6d5-4hj54"]
Mar 18 08:52:15.322553 master-0 kubenswrapper[7620]: I0318 08:52:15.321602 7620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-insights/insights-operator-68bf6ff9d6-kv7n5"
Mar 18 08:52:15.329926 master-0 kubenswrapper[7620]: I0318 08:52:15.326963 7620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-insights"/"operator-dockercfg-zr4v5"
Mar 18 08:52:15.329926 master-0 kubenswrapper[7620]: I0318 08:52:15.327166 7620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"service-ca-bundle"
Mar 18 08:52:15.329926 master-0 kubenswrapper[7620]: I0318 08:52:15.327297 7620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-insights"/"openshift-insights-serving-cert"
Mar 18 08:52:15.329926 master-0 kubenswrapper[7620]: I0318 08:52:15.327397 7620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"openshift-service-ca.crt"
Mar 18 08:52:15.329926 master-0 kubenswrapper[7620]: I0318 08:52:15.328915 7620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"kube-root-ca.crt"
Mar 18
08:52:15.331438 master-0 kubenswrapper[7620]: I0318 08:52:15.330336 7620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-4hj54" Mar 18 08:52:15.332682 master-0 kubenswrapper[7620]: I0318 08:52:15.332522 7620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"trusted-ca-bundle" Mar 18 08:52:15.335881 master-0 kubenswrapper[7620]: I0318 08:52:15.332824 7620 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-6fbb6cf6f9-z6nw9"] Mar 18 08:52:15.335881 master-0 kubenswrapper[7620]: I0318 08:52:15.335292 7620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Mar 18 08:52:15.335881 master-0 kubenswrapper[7620]: I0318 08:52:15.335471 7620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Mar 18 08:52:15.337843 master-0 kubenswrapper[7620]: I0318 08:52:15.337032 7620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Mar 18 08:52:15.337843 master-0 kubenswrapper[7620]: I0318 08:52:15.337191 7620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Mar 18 08:52:15.337843 master-0 kubenswrapper[7620]: I0318 08:52:15.337311 7620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-jdt5h" Mar 18 08:52:15.337843 master-0 kubenswrapper[7620]: I0318 08:52:15.337594 7620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Mar 18 08:52:15.339965 master-0 kubenswrapper[7620]: I0318 08:52:15.338604 7620 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-machine-api/cluster-autoscaler-operator-866dc4744-lxj7x"] Mar 18 08:52:15.339965 master-0 kubenswrapper[7620]: I0318 08:52:15.339152 7620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-insights/insights-operator-68bf6ff9d6-kv7n5"] Mar 18 08:52:15.339965 master-0 kubenswrapper[7620]: I0318 08:52:15.339231 7620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/cluster-autoscaler-operator-866dc4744-lxj7x" Mar 18 08:52:15.339965 master-0 kubenswrapper[7620]: I0318 08:52:15.339533 7620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-z6nw9" Mar 18 08:52:15.340418 master-0 kubenswrapper[7620]: I0318 08:52:15.340333 7620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy-cluster-autoscaler-operator" Mar 18 08:52:15.340524 master-0 kubenswrapper[7620]: I0318 08:52:15.340497 7620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Mar 18 08:52:15.342068 master-0 kubenswrapper[7620]: I0318 08:52:15.341971 7620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-84d549f6d5-4hj54"] Mar 18 08:52:15.348941 master-0 kubenswrapper[7620]: I0318 08:52:15.342212 7620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-autoscaler-operator-dockercfg-2zcks" Mar 18 08:52:15.348941 master-0 kubenswrapper[7620]: I0318 08:52:15.342325 7620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-autoscaler-operator-cert" Mar 18 08:52:15.348941 master-0 kubenswrapper[7620]: I0318 08:52:15.342335 7620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Mar 18 08:52:15.348941 master-0 kubenswrapper[7620]: I0318 
08:52:15.342371 7620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Mar 18 08:52:15.348941 master-0 kubenswrapper[7620]: I0318 08:52:15.342466 7620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-59m7s" Mar 18 08:52:15.348941 master-0 kubenswrapper[7620]: I0318 08:52:15.344171 7620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-6fbb6cf6f9-z6nw9"] Mar 18 08:52:15.349482 master-0 kubenswrapper[7620]: I0318 08:52:15.349254 7620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/cluster-autoscaler-operator-866dc4744-lxj7x"] Mar 18 08:52:15.419206 master-0 kubenswrapper[7620]: I0318 08:52:15.416655 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/e1b48980-eb4b-4f86-a84e-8f5ebacbc2d4-cloud-controller-manager-operator-tls\") pod \"cluster-cloud-controller-manager-operator-7559f7c68c-nbdls\" (UID: \"e1b48980-eb4b-4f86-a84e-8f5ebacbc2d4\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-nbdls" Mar 18 08:52:15.419206 master-0 kubenswrapper[7620]: I0318 08:52:15.416723 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ffc5379c-651f-490c-90f4-1285b9093596-auth-proxy-config\") pod \"cluster-autoscaler-operator-866dc4744-lxj7x\" (UID: \"ffc5379c-651f-490c-90f4-1285b9093596\") " pod="openshift-machine-api/cluster-autoscaler-operator-866dc4744-lxj7x" Mar 18 08:52:15.419206 master-0 kubenswrapper[7620]: I0318 08:52:15.416761 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/e1b48980-eb4b-4f86-a84e-8f5ebacbc2d4-auth-proxy-config\") pod \"cluster-cloud-controller-manager-operator-7559f7c68c-nbdls\" (UID: \"e1b48980-eb4b-4f86-a84e-8f5ebacbc2d4\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-nbdls" Mar 18 08:52:15.419206 master-0 kubenswrapper[7620]: I0318 08:52:15.416788 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/40f3b7a4-107c-4f1d-a3ab-b5d2309c373b-auth-proxy-config\") pod \"machine-config-operator-84d549f6d5-4hj54\" (UID: \"40f3b7a4-107c-4f1d-a3ab-b5d2309c373b\") " pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-4hj54" Mar 18 08:52:15.419206 master-0 kubenswrapper[7620]: I0318 08:52:15.416817 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b9768e50-c883-47b0-b319-851fa53ac19a-config\") pod \"machine-api-operator-6fbb6cf6f9-z6nw9\" (UID: \"b9768e50-c883-47b0-b319-851fa53ac19a\") " pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-z6nw9" Mar 18 08:52:15.419206 master-0 kubenswrapper[7620]: I0318 08:52:15.416840 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/40f3b7a4-107c-4f1d-a3ab-b5d2309c373b-proxy-tls\") pod \"machine-config-operator-84d549f6d5-4hj54\" (UID: \"40f3b7a4-107c-4f1d-a3ab-b5d2309c373b\") " pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-4hj54" Mar 18 08:52:15.419206 master-0 kubenswrapper[7620]: I0318 08:52:15.416920 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"snapshots\" (UniqueName: \"kubernetes.io/empty-dir/31a92270-efed-44fe-871e-90333235e85f-snapshots\") pod \"insights-operator-68bf6ff9d6-kv7n5\" (UID: 
\"31a92270-efed-44fe-871e-90333235e85f\") " pod="openshift-insights/insights-operator-68bf6ff9d6-kv7n5" Mar 18 08:52:15.419206 master-0 kubenswrapper[7620]: I0318 08:52:15.416947 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4vfrs\" (UniqueName: \"kubernetes.io/projected/ffc5379c-651f-490c-90f4-1285b9093596-kube-api-access-4vfrs\") pod \"cluster-autoscaler-operator-866dc4744-lxj7x\" (UID: \"ffc5379c-651f-490c-90f4-1285b9093596\") " pod="openshift-machine-api/cluster-autoscaler-operator-866dc4744-lxj7x" Mar 18 08:52:15.419206 master-0 kubenswrapper[7620]: I0318 08:52:15.416975 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/40f3b7a4-107c-4f1d-a3ab-b5d2309c373b-images\") pod \"machine-config-operator-84d549f6d5-4hj54\" (UID: \"40f3b7a4-107c-4f1d-a3ab-b5d2309c373b\") " pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-4hj54" Mar 18 08:52:15.419206 master-0 kubenswrapper[7620]: I0318 08:52:15.417005 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/e1b48980-eb4b-4f86-a84e-8f5ebacbc2d4-images\") pod \"cluster-cloud-controller-manager-operator-7559f7c68c-nbdls\" (UID: \"e1b48980-eb4b-4f86-a84e-8f5ebacbc2d4\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-nbdls" Mar 18 08:52:15.419206 master-0 kubenswrapper[7620]: I0318 08:52:15.417034 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/31a92270-efed-44fe-871e-90333235e85f-service-ca-bundle\") pod \"insights-operator-68bf6ff9d6-kv7n5\" (UID: \"31a92270-efed-44fe-871e-90333235e85f\") " pod="openshift-insights/insights-operator-68bf6ff9d6-kv7n5" Mar 18 08:52:15.419206 master-0 
kubenswrapper[7620]: I0318 08:52:15.417062 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/b9768e50-c883-47b0-b319-851fa53ac19a-images\") pod \"machine-api-operator-6fbb6cf6f9-z6nw9\" (UID: \"b9768e50-c883-47b0-b319-851fa53ac19a\") " pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-z6nw9" Mar 18 08:52:15.419206 master-0 kubenswrapper[7620]: I0318 08:52:15.417089 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ffc5379c-651f-490c-90f4-1285b9093596-cert\") pod \"cluster-autoscaler-operator-866dc4744-lxj7x\" (UID: \"ffc5379c-651f-490c-90f4-1285b9093596\") " pod="openshift-machine-api/cluster-autoscaler-operator-866dc4744-lxj7x" Mar 18 08:52:15.419206 master-0 kubenswrapper[7620]: I0318 08:52:15.417120 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jnspk\" (UniqueName: \"kubernetes.io/projected/40f3b7a4-107c-4f1d-a3ab-b5d2309c373b-kube-api-access-jnspk\") pod \"machine-config-operator-84d549f6d5-4hj54\" (UID: \"40f3b7a4-107c-4f1d-a3ab-b5d2309c373b\") " pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-4hj54" Mar 18 08:52:15.419206 master-0 kubenswrapper[7620]: I0318 08:52:15.417416 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-storage-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/fc5a9875-d97e-4371-a15d-a1f43b85abce-cluster-storage-operator-serving-cert\") pod \"cluster-storage-operator-7d87854d6-srhr6\" (UID: \"fc5a9875-d97e-4371-a15d-a1f43b85abce\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-7d87854d6-srhr6" Mar 18 08:52:15.419206 master-0 kubenswrapper[7620]: I0318 08:52:15.417446 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x8qnj\" 
(UniqueName: \"kubernetes.io/projected/e1b48980-eb4b-4f86-a84e-8f5ebacbc2d4-kube-api-access-x8qnj\") pod \"cluster-cloud-controller-manager-operator-7559f7c68c-nbdls\" (UID: \"e1b48980-eb4b-4f86-a84e-8f5ebacbc2d4\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-nbdls" Mar 18 08:52:15.419206 master-0 kubenswrapper[7620]: I0318 08:52:15.417487 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/e1b48980-eb4b-4f86-a84e-8f5ebacbc2d4-host-etc-kube\") pod \"cluster-cloud-controller-manager-operator-7559f7c68c-nbdls\" (UID: \"e1b48980-eb4b-4f86-a84e-8f5ebacbc2d4\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-nbdls" Mar 18 08:52:15.419206 master-0 kubenswrapper[7620]: I0318 08:52:15.417517 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bw5tw\" (UniqueName: \"kubernetes.io/projected/b9768e50-c883-47b0-b319-851fa53ac19a-kube-api-access-bw5tw\") pod \"machine-api-operator-6fbb6cf6f9-z6nw9\" (UID: \"b9768e50-c883-47b0-b319-851fa53ac19a\") " pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-z6nw9" Mar 18 08:52:15.419206 master-0 kubenswrapper[7620]: I0318 08:52:15.417541 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8zhfh\" (UniqueName: \"kubernetes.io/projected/31a92270-efed-44fe-871e-90333235e85f-kube-api-access-8zhfh\") pod \"insights-operator-68bf6ff9d6-kv7n5\" (UID: \"31a92270-efed-44fe-871e-90333235e85f\") " pod="openshift-insights/insights-operator-68bf6ff9d6-kv7n5" Mar 18 08:52:15.419206 master-0 kubenswrapper[7620]: I0318 08:52:15.417600 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mvlvd\" (UniqueName: 
\"kubernetes.io/projected/fc5a9875-d97e-4371-a15d-a1f43b85abce-kube-api-access-mvlvd\") pod \"cluster-storage-operator-7d87854d6-srhr6\" (UID: \"fc5a9875-d97e-4371-a15d-a1f43b85abce\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-7d87854d6-srhr6" Mar 18 08:52:15.419206 master-0 kubenswrapper[7620]: I0318 08:52:15.417640 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/b9768e50-c883-47b0-b319-851fa53ac19a-machine-api-operator-tls\") pod \"machine-api-operator-6fbb6cf6f9-z6nw9\" (UID: \"b9768e50-c883-47b0-b319-851fa53ac19a\") " pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-z6nw9" Mar 18 08:52:15.419206 master-0 kubenswrapper[7620]: I0318 08:52:15.417666 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/31a92270-efed-44fe-871e-90333235e85f-trusted-ca-bundle\") pod \"insights-operator-68bf6ff9d6-kv7n5\" (UID: \"31a92270-efed-44fe-871e-90333235e85f\") " pod="openshift-insights/insights-operator-68bf6ff9d6-kv7n5" Mar 18 08:52:15.419206 master-0 kubenswrapper[7620]: I0318 08:52:15.417694 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/31a92270-efed-44fe-871e-90333235e85f-serving-cert\") pod \"insights-operator-68bf6ff9d6-kv7n5\" (UID: \"31a92270-efed-44fe-871e-90333235e85f\") " pod="openshift-insights/insights-operator-68bf6ff9d6-kv7n5" Mar 18 08:52:15.419206 master-0 kubenswrapper[7620]: I0318 08:52:15.418370 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/e1b48980-eb4b-4f86-a84e-8f5ebacbc2d4-host-etc-kube\") pod \"cluster-cloud-controller-manager-operator-7559f7c68c-nbdls\" (UID: \"e1b48980-eb4b-4f86-a84e-8f5ebacbc2d4\") " 
pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-nbdls" Mar 18 08:52:15.420003 master-0 kubenswrapper[7620]: I0318 08:52:15.419329 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e1b48980-eb4b-4f86-a84e-8f5ebacbc2d4-auth-proxy-config\") pod \"cluster-cloud-controller-manager-operator-7559f7c68c-nbdls\" (UID: \"e1b48980-eb4b-4f86-a84e-8f5ebacbc2d4\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-nbdls" Mar 18 08:52:15.420003 master-0 kubenswrapper[7620]: I0318 08:52:15.419878 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/e1b48980-eb4b-4f86-a84e-8f5ebacbc2d4-cloud-controller-manager-operator-tls\") pod \"cluster-cloud-controller-manager-operator-7559f7c68c-nbdls\" (UID: \"e1b48980-eb4b-4f86-a84e-8f5ebacbc2d4\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-nbdls" Mar 18 08:52:15.421787 master-0 kubenswrapper[7620]: I0318 08:52:15.421749 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-storage-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/fc5a9875-d97e-4371-a15d-a1f43b85abce-cluster-storage-operator-serving-cert\") pod \"cluster-storage-operator-7d87854d6-srhr6\" (UID: \"fc5a9875-d97e-4371-a15d-a1f43b85abce\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-7d87854d6-srhr6" Mar 18 08:52:15.428913 master-0 kubenswrapper[7620]: I0318 08:52:15.427010 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/e1b48980-eb4b-4f86-a84e-8f5ebacbc2d4-images\") pod \"cluster-cloud-controller-manager-operator-7559f7c68c-nbdls\" (UID: \"e1b48980-eb4b-4f86-a84e-8f5ebacbc2d4\") " 
pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-nbdls" Mar 18 08:52:15.435605 master-0 kubenswrapper[7620]: I0318 08:52:15.435112 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x8qnj\" (UniqueName: \"kubernetes.io/projected/e1b48980-eb4b-4f86-a84e-8f5ebacbc2d4-kube-api-access-x8qnj\") pod \"cluster-cloud-controller-manager-operator-7559f7c68c-nbdls\" (UID: \"e1b48980-eb4b-4f86-a84e-8f5ebacbc2d4\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-nbdls" Mar 18 08:52:15.439741 master-0 kubenswrapper[7620]: I0318 08:52:15.438740 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mvlvd\" (UniqueName: \"kubernetes.io/projected/fc5a9875-d97e-4371-a15d-a1f43b85abce-kube-api-access-mvlvd\") pod \"cluster-storage-operator-7d87854d6-srhr6\" (UID: \"fc5a9875-d97e-4371-a15d-a1f43b85abce\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-7d87854d6-srhr6" Mar 18 08:52:15.468292 master-0 kubenswrapper[7620]: I0318 08:52:15.468204 7620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-78szh"] Mar 18 08:52:15.477988 master-0 kubenswrapper[7620]: W0318 08:52:15.477848 7620 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod92542f7c_182b_45a8_bbf3_00e99ba7acee.slice/crio-7375d00faec570babb78f641885c44d45133bd27ded2430ca3ed60792534d150 WatchSource:0}: Error finding container 7375d00faec570babb78f641885c44d45133bd27ded2430ca3ed60792534d150: Status 404 returned error can't find the container with id 7375d00faec570babb78f641885c44d45133bd27ded2430ca3ed60792534d150 Mar 18 08:52:15.501801 master-0 kubenswrapper[7620]: I0318 08:52:15.501742 7620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-chjqr" Mar 18 08:52:15.510326 master-0 kubenswrapper[7620]: I0318 08:52:15.510273 7620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-nbdls" Mar 18 08:52:15.518602 master-0 kubenswrapper[7620]: I0318 08:52:15.518522 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/40f3b7a4-107c-4f1d-a3ab-b5d2309c373b-auth-proxy-config\") pod \"machine-config-operator-84d549f6d5-4hj54\" (UID: \"40f3b7a4-107c-4f1d-a3ab-b5d2309c373b\") " pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-4hj54" Mar 18 08:52:15.518602 master-0 kubenswrapper[7620]: I0318 08:52:15.518580 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/40f3b7a4-107c-4f1d-a3ab-b5d2309c373b-proxy-tls\") pod \"machine-config-operator-84d549f6d5-4hj54\" (UID: \"40f3b7a4-107c-4f1d-a3ab-b5d2309c373b\") " pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-4hj54" Mar 18 08:52:15.518882 master-0 kubenswrapper[7620]: I0318 08:52:15.518609 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b9768e50-c883-47b0-b319-851fa53ac19a-config\") pod \"machine-api-operator-6fbb6cf6f9-z6nw9\" (UID: \"b9768e50-c883-47b0-b319-851fa53ac19a\") " pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-z6nw9" Mar 18 08:52:15.518882 master-0 kubenswrapper[7620]: I0318 08:52:15.518648 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"snapshots\" (UniqueName: \"kubernetes.io/empty-dir/31a92270-efed-44fe-871e-90333235e85f-snapshots\") pod \"insights-operator-68bf6ff9d6-kv7n5\" (UID: 
\"31a92270-efed-44fe-871e-90333235e85f\") " pod="openshift-insights/insights-operator-68bf6ff9d6-kv7n5" Mar 18 08:52:15.518882 master-0 kubenswrapper[7620]: I0318 08:52:15.518681 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4vfrs\" (UniqueName: \"kubernetes.io/projected/ffc5379c-651f-490c-90f4-1285b9093596-kube-api-access-4vfrs\") pod \"cluster-autoscaler-operator-866dc4744-lxj7x\" (UID: \"ffc5379c-651f-490c-90f4-1285b9093596\") " pod="openshift-machine-api/cluster-autoscaler-operator-866dc4744-lxj7x" Mar 18 08:52:15.518882 master-0 kubenswrapper[7620]: I0318 08:52:15.518710 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/40f3b7a4-107c-4f1d-a3ab-b5d2309c373b-images\") pod \"machine-config-operator-84d549f6d5-4hj54\" (UID: \"40f3b7a4-107c-4f1d-a3ab-b5d2309c373b\") " pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-4hj54" Mar 18 08:52:15.518882 master-0 kubenswrapper[7620]: I0318 08:52:15.518746 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/31a92270-efed-44fe-871e-90333235e85f-service-ca-bundle\") pod \"insights-operator-68bf6ff9d6-kv7n5\" (UID: \"31a92270-efed-44fe-871e-90333235e85f\") " pod="openshift-insights/insights-operator-68bf6ff9d6-kv7n5" Mar 18 08:52:15.518882 master-0 kubenswrapper[7620]: I0318 08:52:15.518776 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/b9768e50-c883-47b0-b319-851fa53ac19a-images\") pod \"machine-api-operator-6fbb6cf6f9-z6nw9\" (UID: \"b9768e50-c883-47b0-b319-851fa53ac19a\") " pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-z6nw9" Mar 18 08:52:15.518882 master-0 kubenswrapper[7620]: I0318 08:52:15.518802 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"cert\" (UniqueName: \"kubernetes.io/secret/ffc5379c-651f-490c-90f4-1285b9093596-cert\") pod \"cluster-autoscaler-operator-866dc4744-lxj7x\" (UID: \"ffc5379c-651f-490c-90f4-1285b9093596\") " pod="openshift-machine-api/cluster-autoscaler-operator-866dc4744-lxj7x" Mar 18 08:52:15.518882 master-0 kubenswrapper[7620]: I0318 08:52:15.518836 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jnspk\" (UniqueName: \"kubernetes.io/projected/40f3b7a4-107c-4f1d-a3ab-b5d2309c373b-kube-api-access-jnspk\") pod \"machine-config-operator-84d549f6d5-4hj54\" (UID: \"40f3b7a4-107c-4f1d-a3ab-b5d2309c373b\") " pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-4hj54" Mar 18 08:52:15.518882 master-0 kubenswrapper[7620]: I0318 08:52:15.518885 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bw5tw\" (UniqueName: \"kubernetes.io/projected/b9768e50-c883-47b0-b319-851fa53ac19a-kube-api-access-bw5tw\") pod \"machine-api-operator-6fbb6cf6f9-z6nw9\" (UID: \"b9768e50-c883-47b0-b319-851fa53ac19a\") " pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-z6nw9" Mar 18 08:52:15.519287 master-0 kubenswrapper[7620]: I0318 08:52:15.518910 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8zhfh\" (UniqueName: \"kubernetes.io/projected/31a92270-efed-44fe-871e-90333235e85f-kube-api-access-8zhfh\") pod \"insights-operator-68bf6ff9d6-kv7n5\" (UID: \"31a92270-efed-44fe-871e-90333235e85f\") " pod="openshift-insights/insights-operator-68bf6ff9d6-kv7n5" Mar 18 08:52:15.519287 master-0 kubenswrapper[7620]: I0318 08:52:15.518945 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/31a92270-efed-44fe-871e-90333235e85f-serving-cert\") pod \"insights-operator-68bf6ff9d6-kv7n5\" (UID: \"31a92270-efed-44fe-871e-90333235e85f\") " 
pod="openshift-insights/insights-operator-68bf6ff9d6-kv7n5" Mar 18 08:52:15.519287 master-0 kubenswrapper[7620]: I0318 08:52:15.518969 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/b9768e50-c883-47b0-b319-851fa53ac19a-machine-api-operator-tls\") pod \"machine-api-operator-6fbb6cf6f9-z6nw9\" (UID: \"b9768e50-c883-47b0-b319-851fa53ac19a\") " pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-z6nw9" Mar 18 08:52:15.519287 master-0 kubenswrapper[7620]: I0318 08:52:15.518992 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/31a92270-efed-44fe-871e-90333235e85f-trusted-ca-bundle\") pod \"insights-operator-68bf6ff9d6-kv7n5\" (UID: \"31a92270-efed-44fe-871e-90333235e85f\") " pod="openshift-insights/insights-operator-68bf6ff9d6-kv7n5" Mar 18 08:52:15.519287 master-0 kubenswrapper[7620]: I0318 08:52:15.519026 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ffc5379c-651f-490c-90f4-1285b9093596-auth-proxy-config\") pod \"cluster-autoscaler-operator-866dc4744-lxj7x\" (UID: \"ffc5379c-651f-490c-90f4-1285b9093596\") " pod="openshift-machine-api/cluster-autoscaler-operator-866dc4744-lxj7x" Mar 18 08:52:15.520176 master-0 kubenswrapper[7620]: I0318 08:52:15.520136 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ffc5379c-651f-490c-90f4-1285b9093596-auth-proxy-config\") pod \"cluster-autoscaler-operator-866dc4744-lxj7x\" (UID: \"ffc5379c-651f-490c-90f4-1285b9093596\") " pod="openshift-machine-api/cluster-autoscaler-operator-866dc4744-lxj7x" Mar 18 08:52:15.521424 master-0 kubenswrapper[7620]: I0318 08:52:15.521387 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" 
(UniqueName: \"kubernetes.io/configmap/40f3b7a4-107c-4f1d-a3ab-b5d2309c373b-auth-proxy-config\") pod \"machine-config-operator-84d549f6d5-4hj54\" (UID: \"40f3b7a4-107c-4f1d-a3ab-b5d2309c373b\") " pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-4hj54" Mar 18 08:52:15.524101 master-0 kubenswrapper[7620]: I0318 08:52:15.523771 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b9768e50-c883-47b0-b319-851fa53ac19a-config\") pod \"machine-api-operator-6fbb6cf6f9-z6nw9\" (UID: \"b9768e50-c883-47b0-b319-851fa53ac19a\") " pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-z6nw9" Mar 18 08:52:15.524101 master-0 kubenswrapper[7620]: I0318 08:52:15.523999 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"snapshots\" (UniqueName: \"kubernetes.io/empty-dir/31a92270-efed-44fe-871e-90333235e85f-snapshots\") pod \"insights-operator-68bf6ff9d6-kv7n5\" (UID: \"31a92270-efed-44fe-871e-90333235e85f\") " pod="openshift-insights/insights-operator-68bf6ff9d6-kv7n5" Mar 18 08:52:15.524101 master-0 kubenswrapper[7620]: I0318 08:52:15.524064 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/31a92270-efed-44fe-871e-90333235e85f-service-ca-bundle\") pod \"insights-operator-68bf6ff9d6-kv7n5\" (UID: \"31a92270-efed-44fe-871e-90333235e85f\") " pod="openshift-insights/insights-operator-68bf6ff9d6-kv7n5" Mar 18 08:52:15.524320 master-0 kubenswrapper[7620]: I0318 08:52:15.524176 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/b9768e50-c883-47b0-b319-851fa53ac19a-images\") pod \"machine-api-operator-6fbb6cf6f9-z6nw9\" (UID: \"b9768e50-c883-47b0-b319-851fa53ac19a\") " pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-z6nw9" Mar 18 08:52:15.524441 master-0 kubenswrapper[7620]: I0318 08:52:15.524409 7620 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/40f3b7a4-107c-4f1d-a3ab-b5d2309c373b-images\") pod \"machine-config-operator-84d549f6d5-4hj54\" (UID: \"40f3b7a4-107c-4f1d-a3ab-b5d2309c373b\") " pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-4hj54" Mar 18 08:52:15.526212 master-0 kubenswrapper[7620]: I0318 08:52:15.526185 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/40f3b7a4-107c-4f1d-a3ab-b5d2309c373b-proxy-tls\") pod \"machine-config-operator-84d549f6d5-4hj54\" (UID: \"40f3b7a4-107c-4f1d-a3ab-b5d2309c373b\") " pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-4hj54" Mar 18 08:52:15.531143 master-0 kubenswrapper[7620]: I0318 08:52:15.531073 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/31a92270-efed-44fe-871e-90333235e85f-serving-cert\") pod \"insights-operator-68bf6ff9d6-kv7n5\" (UID: \"31a92270-efed-44fe-871e-90333235e85f\") " pod="openshift-insights/insights-operator-68bf6ff9d6-kv7n5" Mar 18 08:52:15.532558 master-0 kubenswrapper[7620]: I0318 08:52:15.532494 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/31a92270-efed-44fe-871e-90333235e85f-trusted-ca-bundle\") pod \"insights-operator-68bf6ff9d6-kv7n5\" (UID: \"31a92270-efed-44fe-871e-90333235e85f\") " pod="openshift-insights/insights-operator-68bf6ff9d6-kv7n5" Mar 18 08:52:15.533675 master-0 kubenswrapper[7620]: I0318 08:52:15.533483 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/b9768e50-c883-47b0-b319-851fa53ac19a-machine-api-operator-tls\") pod \"machine-api-operator-6fbb6cf6f9-z6nw9\" (UID: \"b9768e50-c883-47b0-b319-851fa53ac19a\") " 
pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-z6nw9" Mar 18 08:52:15.539082 master-0 kubenswrapper[7620]: I0318 08:52:15.538424 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ffc5379c-651f-490c-90f4-1285b9093596-cert\") pod \"cluster-autoscaler-operator-866dc4744-lxj7x\" (UID: \"ffc5379c-651f-490c-90f4-1285b9093596\") " pod="openshift-machine-api/cluster-autoscaler-operator-866dc4744-lxj7x" Mar 18 08:52:15.546683 master-0 kubenswrapper[7620]: I0318 08:52:15.544397 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bw5tw\" (UniqueName: \"kubernetes.io/projected/b9768e50-c883-47b0-b319-851fa53ac19a-kube-api-access-bw5tw\") pod \"machine-api-operator-6fbb6cf6f9-z6nw9\" (UID: \"b9768e50-c883-47b0-b319-851fa53ac19a\") " pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-z6nw9" Mar 18 08:52:15.546683 master-0 kubenswrapper[7620]: I0318 08:52:15.546957 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jnspk\" (UniqueName: \"kubernetes.io/projected/40f3b7a4-107c-4f1d-a3ab-b5d2309c373b-kube-api-access-jnspk\") pod \"machine-config-operator-84d549f6d5-4hj54\" (UID: \"40f3b7a4-107c-4f1d-a3ab-b5d2309c373b\") " pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-4hj54" Mar 18 08:52:15.549210 master-0 kubenswrapper[7620]: I0318 08:52:15.548836 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4vfrs\" (UniqueName: \"kubernetes.io/projected/ffc5379c-651f-490c-90f4-1285b9093596-kube-api-access-4vfrs\") pod \"cluster-autoscaler-operator-866dc4744-lxj7x\" (UID: \"ffc5379c-651f-490c-90f4-1285b9093596\") " pod="openshift-machine-api/cluster-autoscaler-operator-866dc4744-lxj7x" Mar 18 08:52:15.566482 master-0 kubenswrapper[7620]: I0318 08:52:15.566370 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8zhfh\" 
(UniqueName: \"kubernetes.io/projected/31a92270-efed-44fe-871e-90333235e85f-kube-api-access-8zhfh\") pod \"insights-operator-68bf6ff9d6-kv7n5\" (UID: \"31a92270-efed-44fe-871e-90333235e85f\") " pod="openshift-insights/insights-operator-68bf6ff9d6-kv7n5" Mar 18 08:52:15.572018 master-0 kubenswrapper[7620]: I0318 08:52:15.571983 7620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/cluster-storage-operator-7d87854d6-srhr6" Mar 18 08:52:15.586813 master-0 kubenswrapper[7620]: I0318 08:52:15.586780 7620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vng9w"] Mar 18 08:52:15.596027 master-0 kubenswrapper[7620]: W0318 08:52:15.592683 7620 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda268d595_18c2_43a2_8ed5_eb64c76c490f.slice/crio-d4eadecdf9a3a2b8f4413e3b5de43801a78ed52767f124bb85a08953e8d985e4 WatchSource:0}: Error finding container d4eadecdf9a3a2b8f4413e3b5de43801a78ed52767f124bb85a08953e8d985e4: Status 404 returned error can't find the container with id d4eadecdf9a3a2b8f4413e3b5de43801a78ed52767f124bb85a08953e8d985e4 Mar 18 08:52:15.620702 master-0 kubenswrapper[7620]: W0318 08:52:15.620306 7620 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf65344cd_8571_4a78_927f_eec46ec1af51.slice/crio-1bf9cb47892d0288027c6bb37223daf6c06c5b704eeeaa16637e3e622b28899a WatchSource:0}: Error finding container 1bf9cb47892d0288027c6bb37223daf6c06c5b704eeeaa16637e3e622b28899a: Status 404 returned error can't find the container with id 1bf9cb47892d0288027c6bb37223daf6c06c5b704eeeaa16637e3e622b28899a Mar 18 08:52:15.649138 master-0 kubenswrapper[7620]: I0318 08:52:15.649106 7620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-jg58c"] Mar 18 08:52:15.675118 master-0 
kubenswrapper[7620]: I0318 08:52:15.675029 7620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-insights/insights-operator-68bf6ff9d6-kv7n5" Mar 18 08:52:15.688659 master-0 kubenswrapper[7620]: I0318 08:52:15.688584 7620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-4hj54" Mar 18 08:52:15.778611 master-0 kubenswrapper[7620]: I0318 08:52:15.778239 7620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/cluster-autoscaler-operator-866dc4744-lxj7x" Mar 18 08:52:15.827760 master-0 kubenswrapper[7620]: I0318 08:52:15.827671 7620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-z6nw9" Mar 18 08:52:15.849869 master-0 kubenswrapper[7620]: I0318 08:52:15.848496 7620 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-6cb57bb5db-sxx69" Mar 18 08:52:16.005450 master-0 kubenswrapper[7620]: I0318 08:52:16.005377 7620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-storage-operator/cluster-storage-operator-7d87854d6-srhr6"] Mar 18 08:52:16.028459 master-0 kubenswrapper[7620]: W0318 08:52:16.028161 7620 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfc5a9875_d97e_4371_a15d_a1f43b85abce.slice/crio-4cc1a3bde7a78af95462a4b4f6ce986942ed4140ae91386507e1857084f8fcea WatchSource:0}: Error finding container 4cc1a3bde7a78af95462a4b4f6ce986942ed4140ae91386507e1857084f8fcea: Status 404 returned error can't find the container with id 4cc1a3bde7a78af95462a4b4f6ce986942ed4140ae91386507e1857084f8fcea Mar 18 08:52:16.036227 master-0 kubenswrapper[7620]: I0318 08:52:16.036195 7620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/5956076c-a98f-4846-9a68-81c18211a5c8-machine-approver-tls\") pod \"5956076c-a98f-4846-9a68-81c18211a5c8\" (UID: \"5956076c-a98f-4846-9a68-81c18211a5c8\") " Mar 18 08:52:16.036316 master-0 kubenswrapper[7620]: I0318 08:52:16.036266 7620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5956076c-a98f-4846-9a68-81c18211a5c8-config\") pod \"5956076c-a98f-4846-9a68-81c18211a5c8\" (UID: \"5956076c-a98f-4846-9a68-81c18211a5c8\") " Mar 18 08:52:16.036366 master-0 kubenswrapper[7620]: I0318 08:52:16.036318 7620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jf9qq\" (UniqueName: \"kubernetes.io/projected/5956076c-a98f-4846-9a68-81c18211a5c8-kube-api-access-jf9qq\") pod \"5956076c-a98f-4846-9a68-81c18211a5c8\" (UID: \"5956076c-a98f-4846-9a68-81c18211a5c8\") " Mar 18 08:52:16.036366 master-0 kubenswrapper[7620]: I0318 08:52:16.036356 7620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/5956076c-a98f-4846-9a68-81c18211a5c8-auth-proxy-config\") pod \"5956076c-a98f-4846-9a68-81c18211a5c8\" (UID: \"5956076c-a98f-4846-9a68-81c18211a5c8\") " Mar 18 08:52:16.036976 master-0 kubenswrapper[7620]: I0318 08:52:16.036779 7620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5956076c-a98f-4846-9a68-81c18211a5c8-config" (OuterVolumeSpecName: "config") pod "5956076c-a98f-4846-9a68-81c18211a5c8" (UID: "5956076c-a98f-4846-9a68-81c18211a5c8"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 08:52:16.036976 master-0 kubenswrapper[7620]: I0318 08:52:16.036908 7620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5956076c-a98f-4846-9a68-81c18211a5c8-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "5956076c-a98f-4846-9a68-81c18211a5c8" (UID: "5956076c-a98f-4846-9a68-81c18211a5c8"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 08:52:16.041231 master-0 kubenswrapper[7620]: I0318 08:52:16.041166 7620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5956076c-a98f-4846-9a68-81c18211a5c8-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "5956076c-a98f-4846-9a68-81c18211a5c8" (UID: "5956076c-a98f-4846-9a68-81c18211a5c8"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 08:52:16.046267 master-0 kubenswrapper[7620]: I0318 08:52:16.045957 7620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5956076c-a98f-4846-9a68-81c18211a5c8-kube-api-access-jf9qq" (OuterVolumeSpecName: "kube-api-access-jf9qq") pod "5956076c-a98f-4846-9a68-81c18211a5c8" (UID: "5956076c-a98f-4846-9a68-81c18211a5c8"). InnerVolumeSpecName "kube-api-access-jf9qq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 08:52:16.086546 master-0 kubenswrapper[7620]: I0318 08:52:16.086387 7620 generic.go:334] "Generic (PLEG): container finished" podID="52e32e2d-33ab-4351-ae8a-80acd6077d70" containerID="f1681da17a74338c034d7dc91920cd7fa391334049c5dee2d2d6586f7e2d97b5" exitCode=0 Mar 18 08:52:16.086546 master-0 kubenswrapper[7620]: I0318 08:52:16.086465 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pk9z9" event={"ID":"52e32e2d-33ab-4351-ae8a-80acd6077d70","Type":"ContainerDied","Data":"f1681da17a74338c034d7dc91920cd7fa391334049c5dee2d2d6586f7e2d97b5"} Mar 18 08:52:16.086546 master-0 kubenswrapper[7620]: I0318 08:52:16.086550 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pk9z9" event={"ID":"52e32e2d-33ab-4351-ae8a-80acd6077d70","Type":"ContainerStarted","Data":"6634f9815dab75e36ab077ad26870775c6b66428323ea93fb4028cdabc9be608"} Mar 18 08:52:16.096445 master-0 kubenswrapper[7620]: I0318 08:52:16.096275 7620 generic.go:334] "Generic (PLEG): container finished" podID="5956076c-a98f-4846-9a68-81c18211a5c8" containerID="79b18f7a261663b919aad8c218f560610ac74c448adf1ac2c7e8d949870a5978" exitCode=0 Mar 18 08:52:16.096445 master-0 kubenswrapper[7620]: I0318 08:52:16.096301 7620 generic.go:334] "Generic (PLEG): container finished" podID="5956076c-a98f-4846-9a68-81c18211a5c8" containerID="bde273c22060dd904a8cc445a9f945202e9a96fdccceff3c588852bf6a168d35" exitCode=0 Mar 18 08:52:16.096445 master-0 kubenswrapper[7620]: I0318 08:52:16.096368 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-6cb57bb5db-sxx69" event={"ID":"5956076c-a98f-4846-9a68-81c18211a5c8","Type":"ContainerDied","Data":"79b18f7a261663b919aad8c218f560610ac74c448adf1ac2c7e8d949870a5978"} Mar 18 08:52:16.097917 master-0 kubenswrapper[7620]: I0318 08:52:16.096416 7620 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-cluster-machine-approver/machine-approver-6cb57bb5db-sxx69" event={"ID":"5956076c-a98f-4846-9a68-81c18211a5c8","Type":"ContainerDied","Data":"bde273c22060dd904a8cc445a9f945202e9a96fdccceff3c588852bf6a168d35"} Mar 18 08:52:16.097917 master-0 kubenswrapper[7620]: I0318 08:52:16.097908 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-6cb57bb5db-sxx69" event={"ID":"5956076c-a98f-4846-9a68-81c18211a5c8","Type":"ContainerDied","Data":"2c446c191e6a35b6bb10e2916b38e6cd1d112507feaa55170c5bfc4a8449236e"} Mar 18 08:52:16.098030 master-0 kubenswrapper[7620]: I0318 08:52:16.097930 7620 scope.go:117] "RemoveContainer" containerID="79b18f7a261663b919aad8c218f560610ac74c448adf1ac2c7e8d949870a5978" Mar 18 08:52:16.098113 master-0 kubenswrapper[7620]: I0318 08:52:16.098078 7620 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-6cb57bb5db-sxx69" Mar 18 08:52:16.100616 master-0 kubenswrapper[7620]: I0318 08:52:16.100578 7620 generic.go:334] "Generic (PLEG): container finished" podID="f65344cd-8571-4a78-927f-eec46ec1af51" containerID="74d7e74934812b2b075e232eef44fa1c57bdc06f53f3181da801a35e02650482" exitCode=0 Mar 18 08:52:16.100675 master-0 kubenswrapper[7620]: I0318 08:52:16.100635 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jg58c" event={"ID":"f65344cd-8571-4a78-927f-eec46ec1af51","Type":"ContainerDied","Data":"74d7e74934812b2b075e232eef44fa1c57bdc06f53f3181da801a35e02650482"} Mar 18 08:52:16.100675 master-0 kubenswrapper[7620]: I0318 08:52:16.100655 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jg58c" event={"ID":"f65344cd-8571-4a78-927f-eec46ec1af51","Type":"ContainerStarted","Data":"1bf9cb47892d0288027c6bb37223daf6c06c5b704eeeaa16637e3e622b28899a"} Mar 18 08:52:16.101464 master-0 kubenswrapper[7620]: 
I0318 08:52:16.101433 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/cluster-storage-operator-7d87854d6-srhr6" event={"ID":"fc5a9875-d97e-4371-a15d-a1f43b85abce","Type":"ContainerStarted","Data":"4cc1a3bde7a78af95462a4b4f6ce986942ed4140ae91386507e1857084f8fcea"} Mar 18 08:52:16.109267 master-0 kubenswrapper[7620]: I0318 08:52:16.102842 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-nbdls" event={"ID":"e1b48980-eb4b-4f86-a84e-8f5ebacbc2d4","Type":"ContainerStarted","Data":"ddac4a396028feae59dbc61cc740a3f14012ee9a158265e6a666c8a8e0d16068"} Mar 18 08:52:16.109267 master-0 kubenswrapper[7620]: I0318 08:52:16.106148 7620 generic.go:334] "Generic (PLEG): container finished" podID="92542f7c-182b-45a8-bbf3-00e99ba7acee" containerID="83cd147764ec185f1c61933eb40e43bfd7feace1c1937bc4d75f521b8846c76e" exitCode=0 Mar 18 08:52:16.109267 master-0 kubenswrapper[7620]: I0318 08:52:16.106202 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-78szh" event={"ID":"92542f7c-182b-45a8-bbf3-00e99ba7acee","Type":"ContainerDied","Data":"83cd147764ec185f1c61933eb40e43bfd7feace1c1937bc4d75f521b8846c76e"} Mar 18 08:52:16.109267 master-0 kubenswrapper[7620]: I0318 08:52:16.106337 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-78szh" event={"ID":"92542f7c-182b-45a8-bbf3-00e99ba7acee","Type":"ContainerStarted","Data":"7375d00faec570babb78f641885c44d45133bd27ded2430ca3ed60792534d150"} Mar 18 08:52:16.109960 master-0 kubenswrapper[7620]: I0318 08:52:16.109916 7620 generic.go:334] "Generic (PLEG): container finished" podID="a268d595-18c2-43a2-8ed5-eb64c76c490f" containerID="4e6504c0fa849fb56cf305c3b2b7aa1db21a051c51fa14d99a8ddcac1a32ab11" exitCode=0 Mar 18 08:52:16.109960 master-0 kubenswrapper[7620]: I0318 08:52:16.109955 7620 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vng9w" event={"ID":"a268d595-18c2-43a2-8ed5-eb64c76c490f","Type":"ContainerDied","Data":"4e6504c0fa849fb56cf305c3b2b7aa1db21a051c51fa14d99a8ddcac1a32ab11"} Mar 18 08:52:16.110072 master-0 kubenswrapper[7620]: I0318 08:52:16.109977 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vng9w" event={"ID":"a268d595-18c2-43a2-8ed5-eb64c76c490f","Type":"ContainerStarted","Data":"d4eadecdf9a3a2b8f4413e3b5de43801a78ed52767f124bb85a08953e8d985e4"} Mar 18 08:52:16.115675 master-0 kubenswrapper[7620]: I0318 08:52:16.112954 7620 scope.go:117] "RemoveContainer" containerID="bde273c22060dd904a8cc445a9f945202e9a96fdccceff3c588852bf6a168d35" Mar 18 08:52:16.129305 master-0 kubenswrapper[7620]: I0318 08:52:16.129269 7620 scope.go:117] "RemoveContainer" containerID="79b18f7a261663b919aad8c218f560610ac74c448adf1ac2c7e8d949870a5978" Mar 18 08:52:16.132360 master-0 kubenswrapper[7620]: E0318 08:52:16.129842 7620 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"79b18f7a261663b919aad8c218f560610ac74c448adf1ac2c7e8d949870a5978\": container with ID starting with 79b18f7a261663b919aad8c218f560610ac74c448adf1ac2c7e8d949870a5978 not found: ID does not exist" containerID="79b18f7a261663b919aad8c218f560610ac74c448adf1ac2c7e8d949870a5978" Mar 18 08:52:16.132360 master-0 kubenswrapper[7620]: I0318 08:52:16.130075 7620 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"79b18f7a261663b919aad8c218f560610ac74c448adf1ac2c7e8d949870a5978"} err="failed to get container status \"79b18f7a261663b919aad8c218f560610ac74c448adf1ac2c7e8d949870a5978\": rpc error: code = NotFound desc = could not find container \"79b18f7a261663b919aad8c218f560610ac74c448adf1ac2c7e8d949870a5978\": container with ID starting with 
79b18f7a261663b919aad8c218f560610ac74c448adf1ac2c7e8d949870a5978 not found: ID does not exist" Mar 18 08:52:16.132360 master-0 kubenswrapper[7620]: I0318 08:52:16.130118 7620 scope.go:117] "RemoveContainer" containerID="bde273c22060dd904a8cc445a9f945202e9a96fdccceff3c588852bf6a168d35" Mar 18 08:52:16.132360 master-0 kubenswrapper[7620]: E0318 08:52:16.130426 7620 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bde273c22060dd904a8cc445a9f945202e9a96fdccceff3c588852bf6a168d35\": container with ID starting with bde273c22060dd904a8cc445a9f945202e9a96fdccceff3c588852bf6a168d35 not found: ID does not exist" containerID="bde273c22060dd904a8cc445a9f945202e9a96fdccceff3c588852bf6a168d35" Mar 18 08:52:16.132360 master-0 kubenswrapper[7620]: I0318 08:52:16.130475 7620 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bde273c22060dd904a8cc445a9f945202e9a96fdccceff3c588852bf6a168d35"} err="failed to get container status \"bde273c22060dd904a8cc445a9f945202e9a96fdccceff3c588852bf6a168d35\": rpc error: code = NotFound desc = could not find container \"bde273c22060dd904a8cc445a9f945202e9a96fdccceff3c588852bf6a168d35\": container with ID starting with bde273c22060dd904a8cc445a9f945202e9a96fdccceff3c588852bf6a168d35 not found: ID does not exist" Mar 18 08:52:16.132360 master-0 kubenswrapper[7620]: I0318 08:52:16.130513 7620 scope.go:117] "RemoveContainer" containerID="79b18f7a261663b919aad8c218f560610ac74c448adf1ac2c7e8d949870a5978" Mar 18 08:52:16.139834 master-0 kubenswrapper[7620]: I0318 08:52:16.139782 7620 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"79b18f7a261663b919aad8c218f560610ac74c448adf1ac2c7e8d949870a5978"} err="failed to get container status \"79b18f7a261663b919aad8c218f560610ac74c448adf1ac2c7e8d949870a5978\": rpc error: code = NotFound desc = could not find container 
\"79b18f7a261663b919aad8c218f560610ac74c448adf1ac2c7e8d949870a5978\": container with ID starting with 79b18f7a261663b919aad8c218f560610ac74c448adf1ac2c7e8d949870a5978 not found: ID does not exist" Mar 18 08:52:16.139834 master-0 kubenswrapper[7620]: I0318 08:52:16.139823 7620 scope.go:117] "RemoveContainer" containerID="bde273c22060dd904a8cc445a9f945202e9a96fdccceff3c588852bf6a168d35" Mar 18 08:52:16.140319 master-0 kubenswrapper[7620]: I0318 08:52:16.140289 7620 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bde273c22060dd904a8cc445a9f945202e9a96fdccceff3c588852bf6a168d35"} err="failed to get container status \"bde273c22060dd904a8cc445a9f945202e9a96fdccceff3c588852bf6a168d35\": rpc error: code = NotFound desc = could not find container \"bde273c22060dd904a8cc445a9f945202e9a96fdccceff3c588852bf6a168d35\": container with ID starting with bde273c22060dd904a8cc445a9f945202e9a96fdccceff3c588852bf6a168d35 not found: ID does not exist" Mar 18 08:52:16.141628 master-0 kubenswrapper[7620]: I0318 08:52:16.141536 7620 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jf9qq\" (UniqueName: \"kubernetes.io/projected/5956076c-a98f-4846-9a68-81c18211a5c8-kube-api-access-jf9qq\") on node \"master-0\" DevicePath \"\"" Mar 18 08:52:16.141628 master-0 kubenswrapper[7620]: I0318 08:52:16.141555 7620 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/5956076c-a98f-4846-9a68-81c18211a5c8-auth-proxy-config\") on node \"master-0\" DevicePath \"\"" Mar 18 08:52:16.141628 master-0 kubenswrapper[7620]: I0318 08:52:16.141564 7620 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/5956076c-a98f-4846-9a68-81c18211a5c8-machine-approver-tls\") on node \"master-0\" DevicePath \"\"" Mar 18 08:52:16.141628 master-0 kubenswrapper[7620]: I0318 08:52:16.141576 7620 reconciler_common.go:293] "Volume detached 
for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5956076c-a98f-4846-9a68-81c18211a5c8-config\") on node \"master-0\" DevicePath \"\"" Mar 18 08:52:16.182888 master-0 kubenswrapper[7620]: I0318 08:52:16.182811 7620 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-cluster-machine-approver/machine-approver-6cb57bb5db-sxx69"] Mar 18 08:52:16.186778 master-0 kubenswrapper[7620]: I0318 08:52:16.186717 7620 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-cluster-machine-approver/machine-approver-6cb57bb5db-sxx69"] Mar 18 08:52:16.193654 master-0 kubenswrapper[7620]: I0318 08:52:16.193361 7620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-84d549f6d5-4hj54"] Mar 18 08:52:16.200825 master-0 kubenswrapper[7620]: W0318 08:52:16.200748 7620 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod40f3b7a4_107c_4f1d_a3ab_b5d2309c373b.slice/crio-91da701859683e09bbd69c5ea46a27c0da629a0940ac397355b74f2e9d28cde0 WatchSource:0}: Error finding container 91da701859683e09bbd69c5ea46a27c0da629a0940ac397355b74f2e9d28cde0: Status 404 returned error can't find the container with id 91da701859683e09bbd69c5ea46a27c0da629a0940ac397355b74f2e9d28cde0 Mar 18 08:52:16.225680 master-0 kubenswrapper[7620]: I0318 08:52:16.225080 7620 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-5c6485487f-87vpl"] Mar 18 08:52:16.225680 master-0 kubenswrapper[7620]: E0318 08:52:16.225413 7620 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5956076c-a98f-4846-9a68-81c18211a5c8" containerName="kube-rbac-proxy" Mar 18 08:52:16.225680 master-0 kubenswrapper[7620]: I0318 08:52:16.225436 7620 state_mem.go:107] "Deleted CPUSet assignment" podUID="5956076c-a98f-4846-9a68-81c18211a5c8" containerName="kube-rbac-proxy" Mar 18 08:52:16.225680 master-0 kubenswrapper[7620]: 
E0318 08:52:16.225466 7620 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5956076c-a98f-4846-9a68-81c18211a5c8" containerName="machine-approver-controller" Mar 18 08:52:16.225680 master-0 kubenswrapper[7620]: I0318 08:52:16.225479 7620 state_mem.go:107] "Deleted CPUSet assignment" podUID="5956076c-a98f-4846-9a68-81c18211a5c8" containerName="machine-approver-controller" Mar 18 08:52:16.225680 master-0 kubenswrapper[7620]: I0318 08:52:16.225662 7620 memory_manager.go:354] "RemoveStaleState removing state" podUID="5956076c-a98f-4846-9a68-81c18211a5c8" containerName="machine-approver-controller" Mar 18 08:52:16.225680 master-0 kubenswrapper[7620]: I0318 08:52:16.225693 7620 memory_manager.go:354] "RemoveStaleState removing state" podUID="5956076c-a98f-4846-9a68-81c18211a5c8" containerName="kube-rbac-proxy" Mar 18 08:52:16.232274 master-0 kubenswrapper[7620]: I0318 08:52:16.232244 7620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-5c6485487f-87vpl" Mar 18 08:52:16.234921 master-0 kubenswrapper[7620]: I0318 08:52:16.234891 7620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Mar 18 08:52:16.235276 master-0 kubenswrapper[7620]: I0318 08:52:16.235026 7620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Mar 18 08:52:16.235276 master-0 kubenswrapper[7620]: I0318 08:52:16.235172 7620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Mar 18 08:52:16.235601 master-0 kubenswrapper[7620]: I0318 08:52:16.235583 7620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Mar 18 08:52:16.235810 master-0 kubenswrapper[7620]: I0318 08:52:16.235791 7620 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Mar 18 08:52:16.236418 master-0 kubenswrapper[7620]: I0318 08:52:16.235971 7620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-s7cph" Mar 18 08:52:16.244886 master-0 kubenswrapper[7620]: I0318 08:52:16.244846 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5qrqx\" (UniqueName: \"kubernetes.io/projected/495e0cff-fca8-4dad-9247-2fc0e7ce86fc-kube-api-access-5qrqx\") pod \"machine-approver-5c6485487f-87vpl\" (UID: \"495e0cff-fca8-4dad-9247-2fc0e7ce86fc\") " pod="openshift-cluster-machine-approver/machine-approver-5c6485487f-87vpl" Mar 18 08:52:16.244986 master-0 kubenswrapper[7620]: I0318 08:52:16.244953 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/495e0cff-fca8-4dad-9247-2fc0e7ce86fc-config\") pod \"machine-approver-5c6485487f-87vpl\" (UID: \"495e0cff-fca8-4dad-9247-2fc0e7ce86fc\") " pod="openshift-cluster-machine-approver/machine-approver-5c6485487f-87vpl" Mar 18 08:52:16.245071 master-0 kubenswrapper[7620]: I0318 08:52:16.245032 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/495e0cff-fca8-4dad-9247-2fc0e7ce86fc-machine-approver-tls\") pod \"machine-approver-5c6485487f-87vpl\" (UID: \"495e0cff-fca8-4dad-9247-2fc0e7ce86fc\") " pod="openshift-cluster-machine-approver/machine-approver-5c6485487f-87vpl" Mar 18 08:52:16.245211 master-0 kubenswrapper[7620]: I0318 08:52:16.245182 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/495e0cff-fca8-4dad-9247-2fc0e7ce86fc-auth-proxy-config\") pod 
\"machine-approver-5c6485487f-87vpl\" (UID: \"495e0cff-fca8-4dad-9247-2fc0e7ce86fc\") " pod="openshift-cluster-machine-approver/machine-approver-5c6485487f-87vpl" Mar 18 08:52:16.250839 master-0 kubenswrapper[7620]: I0318 08:52:16.249012 7620 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5956076c-a98f-4846-9a68-81c18211a5c8" path="/var/lib/kubelet/pods/5956076c-a98f-4846-9a68-81c18211a5c8/volumes" Mar 18 08:52:16.250839 master-0 kubenswrapper[7620]: I0318 08:52:16.250307 7620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-insights/insights-operator-68bf6ff9d6-kv7n5"] Mar 18 08:52:16.259433 master-0 kubenswrapper[7620]: W0318 08:52:16.259333 7620 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod31a92270_efed_44fe_871e_90333235e85f.slice/crio-4fb480fe238d2202b063fb165afa539e61290f53ee162d859e36d1d4cd81bfd5 WatchSource:0}: Error finding container 4fb480fe238d2202b063fb165afa539e61290f53ee162d859e36d1d4cd81bfd5: Status 404 returned error can't find the container with id 4fb480fe238d2202b063fb165afa539e61290f53ee162d859e36d1d4cd81bfd5 Mar 18 08:52:16.303989 master-0 kubenswrapper[7620]: I0318 08:52:16.303892 7620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/cluster-autoscaler-operator-866dc4744-lxj7x"] Mar 18 08:52:16.304805 master-0 kubenswrapper[7620]: W0318 08:52:16.304692 7620 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podffc5379c_651f_490c_90f4_1285b9093596.slice/crio-c62bfe26cbaa5afe7741b2ad05574cf96716a998721d303299c76986059ad0d0 WatchSource:0}: Error finding container c62bfe26cbaa5afe7741b2ad05574cf96716a998721d303299c76986059ad0d0: Status 404 returned error can't find the container with id c62bfe26cbaa5afe7741b2ad05574cf96716a998721d303299c76986059ad0d0 Mar 18 08:52:16.350073 master-0 kubenswrapper[7620]: I0318 08:52:16.348019 
7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/495e0cff-fca8-4dad-9247-2fc0e7ce86fc-config\") pod \"machine-approver-5c6485487f-87vpl\" (UID: \"495e0cff-fca8-4dad-9247-2fc0e7ce86fc\") " pod="openshift-cluster-machine-approver/machine-approver-5c6485487f-87vpl" Mar 18 08:52:16.350073 master-0 kubenswrapper[7620]: I0318 08:52:16.348064 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/495e0cff-fca8-4dad-9247-2fc0e7ce86fc-machine-approver-tls\") pod \"machine-approver-5c6485487f-87vpl\" (UID: \"495e0cff-fca8-4dad-9247-2fc0e7ce86fc\") " pod="openshift-cluster-machine-approver/machine-approver-5c6485487f-87vpl" Mar 18 08:52:16.350073 master-0 kubenswrapper[7620]: I0318 08:52:16.348150 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/495e0cff-fca8-4dad-9247-2fc0e7ce86fc-auth-proxy-config\") pod \"machine-approver-5c6485487f-87vpl\" (UID: \"495e0cff-fca8-4dad-9247-2fc0e7ce86fc\") " pod="openshift-cluster-machine-approver/machine-approver-5c6485487f-87vpl" Mar 18 08:52:16.350073 master-0 kubenswrapper[7620]: I0318 08:52:16.348197 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5qrqx\" (UniqueName: \"kubernetes.io/projected/495e0cff-fca8-4dad-9247-2fc0e7ce86fc-kube-api-access-5qrqx\") pod \"machine-approver-5c6485487f-87vpl\" (UID: \"495e0cff-fca8-4dad-9247-2fc0e7ce86fc\") " pod="openshift-cluster-machine-approver/machine-approver-5c6485487f-87vpl" Mar 18 08:52:16.350073 master-0 kubenswrapper[7620]: I0318 08:52:16.350070 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/495e0cff-fca8-4dad-9247-2fc0e7ce86fc-config\") pod \"machine-approver-5c6485487f-87vpl\" (UID: 
\"495e0cff-fca8-4dad-9247-2fc0e7ce86fc\") " pod="openshift-cluster-machine-approver/machine-approver-5c6485487f-87vpl" Mar 18 08:52:16.355421 master-0 kubenswrapper[7620]: I0318 08:52:16.350169 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/495e0cff-fca8-4dad-9247-2fc0e7ce86fc-auth-proxy-config\") pod \"machine-approver-5c6485487f-87vpl\" (UID: \"495e0cff-fca8-4dad-9247-2fc0e7ce86fc\") " pod="openshift-cluster-machine-approver/machine-approver-5c6485487f-87vpl" Mar 18 08:52:16.367705 master-0 kubenswrapper[7620]: I0318 08:52:16.365751 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/495e0cff-fca8-4dad-9247-2fc0e7ce86fc-machine-approver-tls\") pod \"machine-approver-5c6485487f-87vpl\" (UID: \"495e0cff-fca8-4dad-9247-2fc0e7ce86fc\") " pod="openshift-cluster-machine-approver/machine-approver-5c6485487f-87vpl" Mar 18 08:52:16.368902 master-0 kubenswrapper[7620]: I0318 08:52:16.368846 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5qrqx\" (UniqueName: \"kubernetes.io/projected/495e0cff-fca8-4dad-9247-2fc0e7ce86fc-kube-api-access-5qrqx\") pod \"machine-approver-5c6485487f-87vpl\" (UID: \"495e0cff-fca8-4dad-9247-2fc0e7ce86fc\") " pod="openshift-cluster-machine-approver/machine-approver-5c6485487f-87vpl" Mar 18 08:52:16.379245 master-0 kubenswrapper[7620]: I0318 08:52:16.379102 7620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-6fbb6cf6f9-z6nw9"] Mar 18 08:52:16.567126 master-0 kubenswrapper[7620]: I0318 08:52:16.567065 7620 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-5c6485487f-87vpl" Mar 18 08:52:16.878257 master-0 kubenswrapper[7620]: I0318 08:52:16.878188 7620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-phjp8" Mar 18 08:52:17.156421 master-0 kubenswrapper[7620]: I0318 08:52:17.155245 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-z6nw9" event={"ID":"b9768e50-c883-47b0-b319-851fa53ac19a","Type":"ContainerStarted","Data":"5176e410694270060178f89b7e09cd0207b570d728e7e317b10246721df9c24c"} Mar 18 08:52:17.156421 master-0 kubenswrapper[7620]: I0318 08:52:17.155306 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-z6nw9" event={"ID":"b9768e50-c883-47b0-b319-851fa53ac19a","Type":"ContainerStarted","Data":"2c337c8902968583bee083c15c603882d48753850a36d0d861e8e0df75e9ad06"} Mar 18 08:52:17.190134 master-0 kubenswrapper[7620]: I0318 08:52:17.189726 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-5c6485487f-87vpl" event={"ID":"495e0cff-fca8-4dad-9247-2fc0e7ce86fc","Type":"ContainerStarted","Data":"cffb298b03478aae3739c1233e2989190dd7bfd0ce3ccadd49da1ff614afed86"} Mar 18 08:52:17.190134 master-0 kubenswrapper[7620]: I0318 08:52:17.189797 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-5c6485487f-87vpl" event={"ID":"495e0cff-fca8-4dad-9247-2fc0e7ce86fc","Type":"ContainerStarted","Data":"7d9881841018d229060672bdf33946e413258966dde9be04451521b3c0265667"} Mar 18 08:52:17.196629 master-0 kubenswrapper[7620]: I0318 08:52:17.196410 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-autoscaler-operator-866dc4744-lxj7x" 
event={"ID":"ffc5379c-651f-490c-90f4-1285b9093596","Type":"ContainerStarted","Data":"5c29da503c00e15977a4ecd2b0042c311f84a9b1a06355d868f810741a8b216e"} Mar 18 08:52:17.196629 master-0 kubenswrapper[7620]: I0318 08:52:17.196619 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-autoscaler-operator-866dc4744-lxj7x" event={"ID":"ffc5379c-651f-490c-90f4-1285b9093596","Type":"ContainerStarted","Data":"c62bfe26cbaa5afe7741b2ad05574cf96716a998721d303299c76986059ad0d0"} Mar 18 08:52:17.201941 master-0 kubenswrapper[7620]: I0318 08:52:17.201179 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-4hj54" event={"ID":"40f3b7a4-107c-4f1d-a3ab-b5d2309c373b","Type":"ContainerStarted","Data":"c09a6c744705b8277d122bcf1a5dd9dfdf5e10728a678aea965c75bb194d4820"} Mar 18 08:52:17.201941 master-0 kubenswrapper[7620]: I0318 08:52:17.201218 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-4hj54" event={"ID":"40f3b7a4-107c-4f1d-a3ab-b5d2309c373b","Type":"ContainerStarted","Data":"91e4bdfdf4ca5ac9dc8f538728ef2c893233008dab03fba9e542ec3dba798b14"} Mar 18 08:52:17.201941 master-0 kubenswrapper[7620]: I0318 08:52:17.201228 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-4hj54" event={"ID":"40f3b7a4-107c-4f1d-a3ab-b5d2309c373b","Type":"ContainerStarted","Data":"91da701859683e09bbd69c5ea46a27c0da629a0940ac397355b74f2e9d28cde0"} Mar 18 08:52:17.208773 master-0 kubenswrapper[7620]: I0318 08:52:17.208710 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vng9w" event={"ID":"a268d595-18c2-43a2-8ed5-eb64c76c490f","Type":"ContainerStarted","Data":"42f23b18ac970e3da9687bbb84eb7ea3c73aad4f1a6ef5df47db5bc94e10804e"} Mar 18 08:52:17.214005 master-0 kubenswrapper[7620]: I0318 
08:52:17.213900 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pk9z9" event={"ID":"52e32e2d-33ab-4351-ae8a-80acd6077d70","Type":"ContainerStarted","Data":"570abd7afd841c39fdf3ec02f6786671fcd82b78141a177d4622bd38088a5759"} Mar 18 08:52:17.216704 master-0 kubenswrapper[7620]: I0318 08:52:17.216584 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jg58c" event={"ID":"f65344cd-8571-4a78-927f-eec46ec1af51","Type":"ContainerStarted","Data":"763ae2339eb63a918ff19ddcb00ca5fa223a5d7c07aecf5c680ab374869c6485"} Mar 18 08:52:17.220396 master-0 kubenswrapper[7620]: I0318 08:52:17.220358 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-insights/insights-operator-68bf6ff9d6-kv7n5" event={"ID":"31a92270-efed-44fe-871e-90333235e85f","Type":"ContainerStarted","Data":"4fb480fe238d2202b063fb165afa539e61290f53ee162d859e36d1d4cd81bfd5"} Mar 18 08:52:17.232497 master-0 kubenswrapper[7620]: I0318 08:52:17.226529 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-78szh" event={"ID":"92542f7c-182b-45a8-bbf3-00e99ba7acee","Type":"ContainerStarted","Data":"59ae026604cd04ce353fa378aa4e158633279c635c9ea30620458e2ad2301dcf"} Mar 18 08:52:17.237918 master-0 kubenswrapper[7620]: I0318 08:52:17.236193 7620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-4hj54" podStartSLOduration=2.236170565 podStartE2EDuration="2.236170565s" podCreationTimestamp="2026-03-18 08:52:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 08:52:17.223344207 +0000 UTC m=+201.218125969" watchObservedRunningTime="2026-03-18 08:52:17.236170565 +0000 UTC m=+201.230952317" Mar 18 08:52:18.249378 master-0 kubenswrapper[7620]: I0318 08:52:18.249000 7620 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pk9z9" event={"ID":"52e32e2d-33ab-4351-ae8a-80acd6077d70","Type":"ContainerDied","Data":"570abd7afd841c39fdf3ec02f6786671fcd82b78141a177d4622bd38088a5759"} Mar 18 08:52:18.249378 master-0 kubenswrapper[7620]: I0318 08:52:18.248965 7620 generic.go:334] "Generic (PLEG): container finished" podID="52e32e2d-33ab-4351-ae8a-80acd6077d70" containerID="570abd7afd841c39fdf3ec02f6786671fcd82b78141a177d4622bd38088a5759" exitCode=0 Mar 18 08:52:18.253811 master-0 kubenswrapper[7620]: I0318 08:52:18.253764 7620 generic.go:334] "Generic (PLEG): container finished" podID="f65344cd-8571-4a78-927f-eec46ec1af51" containerID="763ae2339eb63a918ff19ddcb00ca5fa223a5d7c07aecf5c680ab374869c6485" exitCode=0 Mar 18 08:52:18.253914 master-0 kubenswrapper[7620]: I0318 08:52:18.253864 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jg58c" event={"ID":"f65344cd-8571-4a78-927f-eec46ec1af51","Type":"ContainerDied","Data":"763ae2339eb63a918ff19ddcb00ca5fa223a5d7c07aecf5c680ab374869c6485"} Mar 18 08:52:18.258371 master-0 kubenswrapper[7620]: I0318 08:52:18.258316 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-5c6485487f-87vpl" event={"ID":"495e0cff-fca8-4dad-9247-2fc0e7ce86fc","Type":"ContainerStarted","Data":"482a2a455c91ae8f75a1b491f54c3f841099d7f9c064cccb7d26f482c03b17d7"} Mar 18 08:52:18.262775 master-0 kubenswrapper[7620]: I0318 08:52:18.262618 7620 generic.go:334] "Generic (PLEG): container finished" podID="92542f7c-182b-45a8-bbf3-00e99ba7acee" containerID="59ae026604cd04ce353fa378aa4e158633279c635c9ea30620458e2ad2301dcf" exitCode=0 Mar 18 08:52:18.262775 master-0 kubenswrapper[7620]: I0318 08:52:18.262697 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-78szh" 
event={"ID":"92542f7c-182b-45a8-bbf3-00e99ba7acee","Type":"ContainerDied","Data":"59ae026604cd04ce353fa378aa4e158633279c635c9ea30620458e2ad2301dcf"} Mar 18 08:52:18.262775 master-0 kubenswrapper[7620]: I0318 08:52:18.262727 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-78szh" event={"ID":"92542f7c-182b-45a8-bbf3-00e99ba7acee","Type":"ContainerStarted","Data":"63d7ddefdbf50f17d899da83671e986d1b36683b1995a157c431189727728e55"} Mar 18 08:52:18.266163 master-0 kubenswrapper[7620]: I0318 08:52:18.266092 7620 generic.go:334] "Generic (PLEG): container finished" podID="a268d595-18c2-43a2-8ed5-eb64c76c490f" containerID="42f23b18ac970e3da9687bbb84eb7ea3c73aad4f1a6ef5df47db5bc94e10804e" exitCode=0 Mar 18 08:52:18.266254 master-0 kubenswrapper[7620]: I0318 08:52:18.266175 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vng9w" event={"ID":"a268d595-18c2-43a2-8ed5-eb64c76c490f","Type":"ContainerDied","Data":"42f23b18ac970e3da9687bbb84eb7ea3c73aad4f1a6ef5df47db5bc94e10804e"} Mar 18 08:52:18.323643 master-0 kubenswrapper[7620]: I0318 08:52:18.323453 7620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-78szh" podStartSLOduration=26.380168547 podStartE2EDuration="28.323433793s" podCreationTimestamp="2026-03-18 08:51:50 +0000 UTC" firstStartedPulling="2026-03-18 08:52:16.113004045 +0000 UTC m=+200.107785807" lastFinishedPulling="2026-03-18 08:52:18.056269301 +0000 UTC m=+202.051051053" observedRunningTime="2026-03-18 08:52:18.318429113 +0000 UTC m=+202.313210885" watchObservedRunningTime="2026-03-18 08:52:18.323433793 +0000 UTC m=+202.318215555" Mar 18 08:52:18.336602 master-0 kubenswrapper[7620]: I0318 08:52:18.336536 7620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-5c6485487f-87vpl" podStartSLOduration=2.336515748 
podStartE2EDuration="2.336515748s" podCreationTimestamp="2026-03-18 08:52:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 08:52:18.333485764 +0000 UTC m=+202.328267536" watchObservedRunningTime="2026-03-18 08:52:18.336515748 +0000 UTC m=+202.331297500" Mar 18 08:52:19.308511 master-0 kubenswrapper[7620]: I0318 08:52:19.308457 7620 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-qsj46"] Mar 18 08:52:19.310705 master-0 kubenswrapper[7620]: I0318 08:52:19.310678 7620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-qsj46" Mar 18 08:52:19.314011 master-0 kubenswrapper[7620]: I0318 08:52:19.313967 7620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-222ht" Mar 18 08:52:19.314264 master-0 kubenswrapper[7620]: I0318 08:52:19.314230 7620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Mar 18 08:52:19.326566 master-0 kubenswrapper[7620]: I0318 08:52:19.326036 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pjrfz\" (UniqueName: \"kubernetes.io/projected/a7dab805-612b-404c-ab97-8cee927169db-kube-api-access-pjrfz\") pod \"machine-config-daemon-qsj46\" (UID: \"a7dab805-612b-404c-ab97-8cee927169db\") " pod="openshift-machine-config-operator/machine-config-daemon-qsj46" Mar 18 08:52:19.326566 master-0 kubenswrapper[7620]: I0318 08:52:19.326149 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/a7dab805-612b-404c-ab97-8cee927169db-proxy-tls\") pod \"machine-config-daemon-qsj46\" (UID: \"a7dab805-612b-404c-ab97-8cee927169db\") " 
pod="openshift-machine-config-operator/machine-config-daemon-qsj46" Mar 18 08:52:19.326566 master-0 kubenswrapper[7620]: I0318 08:52:19.326237 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a7dab805-612b-404c-ab97-8cee927169db-mcd-auth-proxy-config\") pod \"machine-config-daemon-qsj46\" (UID: \"a7dab805-612b-404c-ab97-8cee927169db\") " pod="openshift-machine-config-operator/machine-config-daemon-qsj46" Mar 18 08:52:19.334207 master-0 kubenswrapper[7620]: I0318 08:52:19.334139 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/a7dab805-612b-404c-ab97-8cee927169db-rootfs\") pod \"machine-config-daemon-qsj46\" (UID: \"a7dab805-612b-404c-ab97-8cee927169db\") " pod="openshift-machine-config-operator/machine-config-daemon-qsj46" Mar 18 08:52:19.435894 master-0 kubenswrapper[7620]: I0318 08:52:19.435833 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pjrfz\" (UniqueName: \"kubernetes.io/projected/a7dab805-612b-404c-ab97-8cee927169db-kube-api-access-pjrfz\") pod \"machine-config-daemon-qsj46\" (UID: \"a7dab805-612b-404c-ab97-8cee927169db\") " pod="openshift-machine-config-operator/machine-config-daemon-qsj46" Mar 18 08:52:19.436187 master-0 kubenswrapper[7620]: I0318 08:52:19.436166 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/a7dab805-612b-404c-ab97-8cee927169db-proxy-tls\") pod \"machine-config-daemon-qsj46\" (UID: \"a7dab805-612b-404c-ab97-8cee927169db\") " pod="openshift-machine-config-operator/machine-config-daemon-qsj46" Mar 18 08:52:19.436277 master-0 kubenswrapper[7620]: I0318 08:52:19.436265 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/a7dab805-612b-404c-ab97-8cee927169db-mcd-auth-proxy-config\") pod \"machine-config-daemon-qsj46\" (UID: \"a7dab805-612b-404c-ab97-8cee927169db\") " pod="openshift-machine-config-operator/machine-config-daemon-qsj46" Mar 18 08:52:19.436344 master-0 kubenswrapper[7620]: I0318 08:52:19.436332 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/a7dab805-612b-404c-ab97-8cee927169db-rootfs\") pod \"machine-config-daemon-qsj46\" (UID: \"a7dab805-612b-404c-ab97-8cee927169db\") " pod="openshift-machine-config-operator/machine-config-daemon-qsj46" Mar 18 08:52:19.436492 master-0 kubenswrapper[7620]: I0318 08:52:19.436479 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/a7dab805-612b-404c-ab97-8cee927169db-rootfs\") pod \"machine-config-daemon-qsj46\" (UID: \"a7dab805-612b-404c-ab97-8cee927169db\") " pod="openshift-machine-config-operator/machine-config-daemon-qsj46" Mar 18 08:52:19.446918 master-0 kubenswrapper[7620]: I0318 08:52:19.444288 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/a7dab805-612b-404c-ab97-8cee927169db-proxy-tls\") pod \"machine-config-daemon-qsj46\" (UID: \"a7dab805-612b-404c-ab97-8cee927169db\") " pod="openshift-machine-config-operator/machine-config-daemon-qsj46" Mar 18 08:52:19.448057 master-0 kubenswrapper[7620]: I0318 08:52:19.447492 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a7dab805-612b-404c-ab97-8cee927169db-mcd-auth-proxy-config\") pod \"machine-config-daemon-qsj46\" (UID: \"a7dab805-612b-404c-ab97-8cee927169db\") " pod="openshift-machine-config-operator/machine-config-daemon-qsj46" Mar 18 08:52:19.455016 master-0 kubenswrapper[7620]: I0318 08:52:19.454916 7620 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-pjrfz\" (UniqueName: \"kubernetes.io/projected/a7dab805-612b-404c-ab97-8cee927169db-kube-api-access-pjrfz\") pod \"machine-config-daemon-qsj46\" (UID: \"a7dab805-612b-404c-ab97-8cee927169db\") " pod="openshift-machine-config-operator/machine-config-daemon-qsj46" Mar 18 08:52:19.626413 master-0 kubenswrapper[7620]: I0318 08:52:19.626271 7620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-qsj46" Mar 18 08:52:21.371304 master-0 kubenswrapper[7620]: I0318 08:52:21.371224 7620 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["kube-system/bootstrap-kube-controller-manager-master-0"] Mar 18 08:52:21.374469 master-0 kubenswrapper[7620]: I0318 08:52:21.371495 7620 kuberuntime_container.go:808] "Killing container with a grace period" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="46f265536aba6292ead501bc9b49f327" containerName="cluster-policy-controller" containerID="cri-o://a7d9a0f0d0d5483aab47d6b6dab06e78206dae99e396716861bbffb1ded6479d" gracePeriod=30 Mar 18 08:52:21.374469 master-0 kubenswrapper[7620]: I0318 08:52:21.371657 7620 kuberuntime_container.go:808] "Killing container with a grace period" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="46f265536aba6292ead501bc9b49f327" containerName="kube-controller-manager" containerID="cri-o://6d5b56ac8d5867b35015e9d68581180a0a4fa40297611f5fe968b22c150b744e" gracePeriod=30 Mar 18 08:52:21.375244 master-0 kubenswrapper[7620]: I0318 08:52:21.375218 7620 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"] Mar 18 08:52:21.375474 master-0 kubenswrapper[7620]: E0318 08:52:21.375431 7620 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46f265536aba6292ead501bc9b49f327" containerName="kube-controller-manager" Mar 18 08:52:21.375474 master-0 
kubenswrapper[7620]: I0318 08:52:21.375454 7620 state_mem.go:107] "Deleted CPUSet assignment" podUID="46f265536aba6292ead501bc9b49f327" containerName="kube-controller-manager" Mar 18 08:52:21.375474 master-0 kubenswrapper[7620]: E0318 08:52:21.375477 7620 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46f265536aba6292ead501bc9b49f327" containerName="cluster-policy-controller" Mar 18 08:52:21.375673 master-0 kubenswrapper[7620]: I0318 08:52:21.375486 7620 state_mem.go:107] "Deleted CPUSet assignment" podUID="46f265536aba6292ead501bc9b49f327" containerName="cluster-policy-controller" Mar 18 08:52:21.375673 master-0 kubenswrapper[7620]: E0318 08:52:21.375550 7620 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46f265536aba6292ead501bc9b49f327" containerName="kube-controller-manager" Mar 18 08:52:21.375673 master-0 kubenswrapper[7620]: I0318 08:52:21.375561 7620 state_mem.go:107] "Deleted CPUSet assignment" podUID="46f265536aba6292ead501bc9b49f327" containerName="kube-controller-manager" Mar 18 08:52:21.375673 master-0 kubenswrapper[7620]: E0318 08:52:21.375573 7620 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46f265536aba6292ead501bc9b49f327" containerName="kube-controller-manager" Mar 18 08:52:21.375673 master-0 kubenswrapper[7620]: I0318 08:52:21.375581 7620 state_mem.go:107] "Deleted CPUSet assignment" podUID="46f265536aba6292ead501bc9b49f327" containerName="kube-controller-manager" Mar 18 08:52:21.380071 master-0 kubenswrapper[7620]: I0318 08:52:21.380018 7620 memory_manager.go:354] "RemoveStaleState removing state" podUID="46f265536aba6292ead501bc9b49f327" containerName="kube-controller-manager" Mar 18 08:52:21.380071 master-0 kubenswrapper[7620]: I0318 08:52:21.380042 7620 memory_manager.go:354] "RemoveStaleState removing state" podUID="46f265536aba6292ead501bc9b49f327" containerName="kube-controller-manager" Mar 18 08:52:21.380071 master-0 kubenswrapper[7620]: I0318 08:52:21.380055 7620 
memory_manager.go:354] "RemoveStaleState removing state" podUID="46f265536aba6292ead501bc9b49f327" containerName="cluster-policy-controller" Mar 18 08:52:21.380071 master-0 kubenswrapper[7620]: I0318 08:52:21.380069 7620 memory_manager.go:354] "RemoveStaleState removing state" podUID="46f265536aba6292ead501bc9b49f327" containerName="kube-controller-manager" Mar 18 08:52:21.380296 master-0 kubenswrapper[7620]: E0318 08:52:21.380175 7620 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46f265536aba6292ead501bc9b49f327" containerName="kube-controller-manager" Mar 18 08:52:21.380296 master-0 kubenswrapper[7620]: I0318 08:52:21.380183 7620 state_mem.go:107] "Deleted CPUSet assignment" podUID="46f265536aba6292ead501bc9b49f327" containerName="kube-controller-manager" Mar 18 08:52:21.380296 master-0 kubenswrapper[7620]: I0318 08:52:21.380261 7620 memory_manager.go:354] "RemoveStaleState removing state" podUID="46f265536aba6292ead501bc9b49f327" containerName="kube-controller-manager" Mar 18 08:52:21.386953 master-0 kubenswrapper[7620]: I0318 08:52:21.383939 7620 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 08:52:21.466987 master-0 kubenswrapper[7620]: I0318 08:52:21.466932 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f18b861b5b8c9ec3c738abc65d93de21-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"f18b861b5b8c9ec3c738abc65d93de21\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 08:52:21.468578 master-0 kubenswrapper[7620]: I0318 08:52:21.467236 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f18b861b5b8c9ec3c738abc65d93de21-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"f18b861b5b8c9ec3c738abc65d93de21\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 08:52:21.479013 master-0 kubenswrapper[7620]: I0318 08:52:21.478976 7620 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"] Mar 18 08:52:21.570569 master-0 kubenswrapper[7620]: I0318 08:52:21.570505 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f18b861b5b8c9ec3c738abc65d93de21-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"f18b861b5b8c9ec3c738abc65d93de21\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 08:52:21.571136 master-0 kubenswrapper[7620]: I0318 08:52:21.571099 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f18b861b5b8c9ec3c738abc65d93de21-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"f18b861b5b8c9ec3c738abc65d93de21\") " 
pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 08:52:21.571421 master-0 kubenswrapper[7620]: I0318 08:52:21.571001 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f18b861b5b8c9ec3c738abc65d93de21-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"f18b861b5b8c9ec3c738abc65d93de21\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 08:52:21.571635 master-0 kubenswrapper[7620]: I0318 08:52:21.571399 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f18b861b5b8c9ec3c738abc65d93de21-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"f18b861b5b8c9ec3c738abc65d93de21\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 08:52:21.715737 master-0 kubenswrapper[7620]: I0318 08:52:21.715651 7620 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 18 08:52:21.715737 master-0 kubenswrapper[7620]: W0318 08:52:21.715695 7620 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda7dab805_612b_404c_ab97_8cee927169db.slice/crio-3c7483d94d4b729fb2442b8f5c55aceeebc0aac5c97dd559a0179898c48164c2 WatchSource:0}: Error finding container 3c7483d94d4b729fb2442b8f5c55aceeebc0aac5c97dd559a0179898c48164c2: Status 404 returned error can't find the container with id 3c7483d94d4b729fb2442b8f5c55aceeebc0aac5c97dd559a0179898c48164c2 Mar 18 08:52:21.770325 master-0 kubenswrapper[7620]: I0318 08:52:21.770268 7620 kubelet.go:2706] "Unable to find pod for mirror pod, skipping" mirrorPod="kube-system/bootstrap-kube-controller-manager-master-0" mirrorPodUID="2f1c6b2a-bf64-4a35-8d58-f7a5268bb45f" Mar 18 08:52:21.773404 master-0 kubenswrapper[7620]: I0318 08:52:21.773366 7620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-etc-kubernetes-cloud\") pod \"46f265536aba6292ead501bc9b49f327\" (UID: \"46f265536aba6292ead501bc9b49f327\") " Mar 18 08:52:21.773457 master-0 kubenswrapper[7620]: I0318 08:52:21.773414 7620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-logs\") pod \"46f265536aba6292ead501bc9b49f327\" (UID: \"46f265536aba6292ead501bc9b49f327\") " Mar 18 08:52:21.773500 master-0 kubenswrapper[7620]: I0318 08:52:21.773472 7620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-ssl-certs-host\") pod \"46f265536aba6292ead501bc9b49f327\" (UID: \"46f265536aba6292ead501bc9b49f327\") " Mar 18 08:52:21.773625 master-0 
kubenswrapper[7620]: I0318 08:52:21.773600 7620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-config\") pod \"46f265536aba6292ead501bc9b49f327\" (UID: \"46f265536aba6292ead501bc9b49f327\") " Mar 18 08:52:21.773657 master-0 kubenswrapper[7620]: I0318 08:52:21.773633 7620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-secrets\") pod \"46f265536aba6292ead501bc9b49f327\" (UID: \"46f265536aba6292ead501bc9b49f327\") " Mar 18 08:52:21.773984 master-0 kubenswrapper[7620]: I0318 08:52:21.773959 7620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-secrets" (OuterVolumeSpecName: "secrets") pod "46f265536aba6292ead501bc9b49f327" (UID: "46f265536aba6292ead501bc9b49f327"). InnerVolumeSpecName "secrets". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 08:52:21.774037 master-0 kubenswrapper[7620]: I0318 08:52:21.773967 7620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-etc-kubernetes-cloud" (OuterVolumeSpecName: "etc-kubernetes-cloud") pod "46f265536aba6292ead501bc9b49f327" (UID: "46f265536aba6292ead501bc9b49f327"). InnerVolumeSpecName "etc-kubernetes-cloud". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 08:52:21.774037 master-0 kubenswrapper[7620]: I0318 08:52:21.774001 7620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-config" (OuterVolumeSpecName: "config") pod "46f265536aba6292ead501bc9b49f327" (UID: "46f265536aba6292ead501bc9b49f327"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 08:52:21.774140 master-0 kubenswrapper[7620]: I0318 08:52:21.774071 7620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-ssl-certs-host" (OuterVolumeSpecName: "ssl-certs-host") pod "46f265536aba6292ead501bc9b49f327" (UID: "46f265536aba6292ead501bc9b49f327"). InnerVolumeSpecName "ssl-certs-host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 08:52:21.774193 master-0 kubenswrapper[7620]: I0318 08:52:21.773967 7620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-logs" (OuterVolumeSpecName: "logs") pod "46f265536aba6292ead501bc9b49f327" (UID: "46f265536aba6292ead501bc9b49f327"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 08:52:21.776847 master-0 kubenswrapper[7620]: I0318 08:52:21.776800 7620 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 08:52:21.875342 master-0 kubenswrapper[7620]: I0318 08:52:21.875294 7620 reconciler_common.go:293] "Volume detached for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-secrets\") on node \"master-0\" DevicePath \"\"" Mar 18 08:52:21.875342 master-0 kubenswrapper[7620]: I0318 08:52:21.875336 7620 reconciler_common.go:293] "Volume detached for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-etc-kubernetes-cloud\") on node \"master-0\" DevicePath \"\"" Mar 18 08:52:21.875504 master-0 kubenswrapper[7620]: I0318 08:52:21.875353 7620 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-logs\") on node \"master-0\" DevicePath \"\"" Mar 18 08:52:21.875504 master-0 kubenswrapper[7620]: I0318 08:52:21.875366 7620 reconciler_common.go:293] "Volume detached for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-ssl-certs-host\") on node \"master-0\" DevicePath \"\"" Mar 18 08:52:21.875504 master-0 kubenswrapper[7620]: I0318 08:52:21.875380 7620 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-config\") on node \"master-0\" DevicePath \"\"" Mar 18 08:52:22.252603 master-0 kubenswrapper[7620]: I0318 08:52:22.252465 7620 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="46f265536aba6292ead501bc9b49f327" path="/var/lib/kubelet/pods/46f265536aba6292ead501bc9b49f327/volumes" Mar 18 08:52:22.252961 master-0 kubenswrapper[7620]: I0318 08:52:22.252938 7620 mirror_client.go:130] "Deleting a mirror pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="" Mar 18 08:52:22.273225 master-0 kubenswrapper[7620]: I0318 08:52:22.272704 7620 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["kube-system/bootstrap-kube-controller-manager-master-0"] Mar 18 08:52:22.273225 master-0 kubenswrapper[7620]: I0318 08:52:22.272740 7620 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="kube-system/bootstrap-kube-controller-manager-master-0" mirrorPodUID="2f1c6b2a-bf64-4a35-8d58-f7a5268bb45f" Mar 18 08:52:22.273225 master-0 kubenswrapper[7620]: I0318 08:52:22.272759 7620 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["kube-system/bootstrap-kube-controller-manager-master-0"] Mar 18 08:52:22.273225 master-0 kubenswrapper[7620]: I0318 08:52:22.272770 7620 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="kube-system/bootstrap-kube-controller-manager-master-0" mirrorPodUID="2f1c6b2a-bf64-4a35-8d58-f7a5268bb45f" Mar 18 08:52:22.335959 master-0 kubenswrapper[7620]: I0318 08:52:22.335906 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-autoscaler-operator-866dc4744-lxj7x" event={"ID":"ffc5379c-651f-490c-90f4-1285b9093596","Type":"ContainerStarted","Data":"39bb9c423e5c616bee3c5ce41c941400eb1d52b4d79508f2cd264467fa6b6f35"} Mar 18 08:52:22.351871 master-0 kubenswrapper[7620]: I0318 08:52:22.351303 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"f18b861b5b8c9ec3c738abc65d93de21","Type":"ContainerStarted","Data":"bbabe017e89f6ea54b729f4482f01a624a5bb89f74c49b1b8e5588070c02358c"} Mar 18 08:52:22.351871 master-0 kubenswrapper[7620]: I0318 08:52:22.351355 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"f18b861b5b8c9ec3c738abc65d93de21","Type":"ContainerStarted","Data":"cfd69af88774e22c3d70940f7a0ea66641ee8b20b79b65a1fbb3869389de22e6"} Mar 18 08:52:22.356514 master-0 kubenswrapper[7620]: I0318 08:52:22.355330 7620 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-marketplace/redhat-marketplace-jg58c" event={"ID":"f65344cd-8571-4a78-927f-eec46ec1af51","Type":"ContainerStarted","Data":"130035ae9d4a4bb44021c5b99df33193fea4eeadfd7275e65083917ba23f50ab"} Mar 18 08:52:22.357915 master-0 kubenswrapper[7620]: I0318 08:52:22.357215 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-insights/insights-operator-68bf6ff9d6-kv7n5" event={"ID":"31a92270-efed-44fe-871e-90333235e85f","Type":"ContainerStarted","Data":"2a70bf831c57a816b02d9a0d854f71978652f1430d1a9473577f1fb1b5332d0f"} Mar 18 08:52:22.359009 master-0 kubenswrapper[7620]: I0318 08:52:22.358981 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-nbdls" event={"ID":"e1b48980-eb4b-4f86-a84e-8f5ebacbc2d4","Type":"ContainerStarted","Data":"6f7848a6751e394406efdc8a58af604ae4375a6e45f4df8cf530ddc9c795a680"} Mar 18 08:52:22.360948 master-0 kubenswrapper[7620]: I0318 08:52:22.360917 7620 generic.go:334] "Generic (PLEG): container finished" podID="28d2bb97-ff93-4772-96fd-318fa62e3a87" containerID="cf9e9bddbf3499401835a2ff896142cd9409d0448e901ff2faa3c5fb21f85146" exitCode=0 Mar 18 08:52:22.361016 master-0 kubenswrapper[7620]: I0318 08:52:22.360972 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-2-master-0" event={"ID":"28d2bb97-ff93-4772-96fd-318fa62e3a87","Type":"ContainerDied","Data":"cf9e9bddbf3499401835a2ff896142cd9409d0448e901ff2faa3c5fb21f85146"} Mar 18 08:52:22.376003 master-0 kubenswrapper[7620]: I0318 08:52:22.369961 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vng9w" event={"ID":"a268d595-18c2-43a2-8ed5-eb64c76c490f","Type":"ContainerStarted","Data":"1e4e3f01d07f8c1030ee6139e1333dd2eb1d0183509c0e7ff19d0240d75eda24"} Mar 18 08:52:22.380692 master-0 kubenswrapper[7620]: I0318 08:52:22.380638 7620 generic.go:334] 
"Generic (PLEG): container finished" podID="46f265536aba6292ead501bc9b49f327" containerID="6d5b56ac8d5867b35015e9d68581180a0a4fa40297611f5fe968b22c150b744e" exitCode=0 Mar 18 08:52:22.380692 master-0 kubenswrapper[7620]: I0318 08:52:22.380666 7620 generic.go:334] "Generic (PLEG): container finished" podID="46f265536aba6292ead501bc9b49f327" containerID="a7d9a0f0d0d5483aab47d6b6dab06e78206dae99e396716861bbffb1ded6479d" exitCode=0 Mar 18 08:52:22.380950 master-0 kubenswrapper[7620]: I0318 08:52:22.380729 7620 scope.go:117] "RemoveContainer" containerID="6d5b56ac8d5867b35015e9d68581180a0a4fa40297611f5fe968b22c150b744e" Mar 18 08:52:22.380950 master-0 kubenswrapper[7620]: I0318 08:52:22.380871 7620 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 18 08:52:22.394006 master-0 kubenswrapper[7620]: I0318 08:52:22.387508 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pk9z9" event={"ID":"52e32e2d-33ab-4351-ae8a-80acd6077d70","Type":"ContainerStarted","Data":"8d7ae1f4f1ee284508f3721707fda898f222d1d61e36e154c909c34e50a59f8d"} Mar 18 08:52:22.394006 master-0 kubenswrapper[7620]: I0318 08:52:22.389831 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qsj46" event={"ID":"a7dab805-612b-404c-ab97-8cee927169db","Type":"ContainerStarted","Data":"47076ab0cbc7b4e2b581923496ef5b925a7082f5c8404e664fb04fb24769bd76"} Mar 18 08:52:22.394006 master-0 kubenswrapper[7620]: I0318 08:52:22.389866 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qsj46" event={"ID":"a7dab805-612b-404c-ab97-8cee927169db","Type":"ContainerStarted","Data":"5b8f4b83cc80fa7d9e871dbf12e2aaff1314947aacde6e1e83e96026611cba92"} Mar 18 08:52:22.394006 master-0 kubenswrapper[7620]: I0318 08:52:22.389878 7620 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-machine-config-operator/machine-config-daemon-qsj46" event={"ID":"a7dab805-612b-404c-ab97-8cee927169db","Type":"ContainerStarted","Data":"3c7483d94d4b729fb2442b8f5c55aceeebc0aac5c97dd559a0179898c48164c2"} Mar 18 08:52:22.402109 master-0 kubenswrapper[7620]: I0318 08:52:22.402045 7620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/cluster-autoscaler-operator-866dc4744-lxj7x" podStartSLOduration=2.137716132 podStartE2EDuration="7.402022055s" podCreationTimestamp="2026-03-18 08:52:15 +0000 UTC" firstStartedPulling="2026-03-18 08:52:16.439524896 +0000 UTC m=+200.434306648" lastFinishedPulling="2026-03-18 08:52:21.703830809 +0000 UTC m=+205.698612571" observedRunningTime="2026-03-18 08:52:22.365193455 +0000 UTC m=+206.359975207" watchObservedRunningTime="2026-03-18 08:52:22.402022055 +0000 UTC m=+206.396803807" Mar 18 08:52:22.405521 master-0 kubenswrapper[7620]: I0318 08:52:22.405398 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/cluster-storage-operator-7d87854d6-srhr6" event={"ID":"fc5a9875-d97e-4371-a15d-a1f43b85abce","Type":"ContainerStarted","Data":"9f51008c154b03b0ad6c1238f35d1c274d0ce2e09335d89d6b870de378bc3b70"} Mar 18 08:52:22.412804 master-0 kubenswrapper[7620]: I0318 08:52:22.412754 7620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-jg58c" podStartSLOduration=27.735431562 podStartE2EDuration="33.412733435s" podCreationTimestamp="2026-03-18 08:51:49 +0000 UTC" firstStartedPulling="2026-03-18 08:52:16.102312926 +0000 UTC m=+200.097094698" lastFinishedPulling="2026-03-18 08:52:21.779614809 +0000 UTC m=+205.774396571" observedRunningTime="2026-03-18 08:52:22.409051102 +0000 UTC m=+206.403832874" watchObservedRunningTime="2026-03-18 08:52:22.412733435 +0000 UTC m=+206.407515187" Mar 18 08:52:22.424251 master-0 kubenswrapper[7620]: I0318 08:52:22.424197 7620 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-insights/insights-operator-68bf6ff9d6-kv7n5" podStartSLOduration=1.9110023630000001 podStartE2EDuration="7.424181605s" podCreationTimestamp="2026-03-18 08:52:15 +0000 UTC" firstStartedPulling="2026-03-18 08:52:16.26300091 +0000 UTC m=+200.257782662" lastFinishedPulling="2026-03-18 08:52:21.776180142 +0000 UTC m=+205.770961904" observedRunningTime="2026-03-18 08:52:22.422061496 +0000 UTC m=+206.416843248" watchObservedRunningTime="2026-03-18 08:52:22.424181605 +0000 UTC m=+206.418963357" Mar 18 08:52:22.450446 master-0 kubenswrapper[7620]: I0318 08:52:22.450320 7620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-storage-operator/cluster-storage-operator-7d87854d6-srhr6" podStartSLOduration=1.7233614849999999 podStartE2EDuration="7.450303456s" podCreationTimestamp="2026-03-18 08:52:15 +0000 UTC" firstStartedPulling="2026-03-18 08:52:16.030304202 +0000 UTC m=+200.025085954" lastFinishedPulling="2026-03-18 08:52:21.757246173 +0000 UTC m=+205.752027925" observedRunningTime="2026-03-18 08:52:22.447433955 +0000 UTC m=+206.442215707" watchObservedRunningTime="2026-03-18 08:52:22.450303456 +0000 UTC m=+206.445085208" Mar 18 08:52:22.467212 master-0 kubenswrapper[7620]: I0318 08:52:22.466803 7620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-pk9z9" podStartSLOduration=23.780929365 podStartE2EDuration="29.466786577s" podCreationTimestamp="2026-03-18 08:51:53 +0000 UTC" firstStartedPulling="2026-03-18 08:52:16.093795998 +0000 UTC m=+200.088577750" lastFinishedPulling="2026-03-18 08:52:21.77965319 +0000 UTC m=+205.774434962" observedRunningTime="2026-03-18 08:52:22.465928473 +0000 UTC m=+206.460710225" watchObservedRunningTime="2026-03-18 08:52:22.466786577 +0000 UTC m=+206.461568329" Mar 18 08:52:22.497344 master-0 kubenswrapper[7620]: I0318 08:52:22.497250 7620 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-vng9w" podStartSLOduration=24.855217423 podStartE2EDuration="30.497222628s" podCreationTimestamp="2026-03-18 08:51:52 +0000 UTC" firstStartedPulling="2026-03-18 08:52:16.116462982 +0000 UTC m=+200.111244734" lastFinishedPulling="2026-03-18 08:52:21.758468187 +0000 UTC m=+205.753249939" observedRunningTime="2026-03-18 08:52:22.490675385 +0000 UTC m=+206.485457157" watchObservedRunningTime="2026-03-18 08:52:22.497222628 +0000 UTC m=+206.492004400" Mar 18 08:52:22.917343 master-0 kubenswrapper[7620]: I0318 08:52:22.917268 7620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-qsj46" podStartSLOduration=3.917241695 podStartE2EDuration="3.917241695s" podCreationTimestamp="2026-03-18 08:52:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 08:52:22.913621283 +0000 UTC m=+206.908403035" watchObservedRunningTime="2026-03-18 08:52:22.917241695 +0000 UTC m=+206.912023467" Mar 18 08:52:23.417880 master-0 kubenswrapper[7620]: I0318 08:52:23.416031 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-nbdls" event={"ID":"e1b48980-eb4b-4f86-a84e-8f5ebacbc2d4","Type":"ContainerStarted","Data":"66311b69f7911e385ca69572abbbe9c8b5a148d66b7d8e0f3f0aaf9bbff927db"} Mar 18 08:52:23.418418 master-0 kubenswrapper[7620]: I0318 08:52:23.417962 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"f18b861b5b8c9ec3c738abc65d93de21","Type":"ContainerStarted","Data":"69596b626529595f36c9ff264c03689b43e4c44d0adc36ba6d7b5f545138ce9f"} Mar 18 08:52:24.503993 master-0 kubenswrapper[7620]: I0318 08:52:24.503923 7620 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-78szh" Mar 18 08:52:24.503993 master-0 kubenswrapper[7620]: I0318 08:52:24.503994 7620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-78szh" Mar 18 08:52:24.536547 master-0 kubenswrapper[7620]: I0318 08:52:24.529870 7620 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-pk9z9" Mar 18 08:52:24.536547 master-0 kubenswrapper[7620]: I0318 08:52:24.529915 7620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-pk9z9" Mar 18 08:52:24.553895 master-0 kubenswrapper[7620]: I0318 08:52:24.553111 7620 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-78szh" Mar 18 08:52:24.565128 master-0 kubenswrapper[7620]: I0318 08:52:24.565067 7620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-vng9w" Mar 18 08:52:24.565534 master-0 kubenswrapper[7620]: I0318 08:52:24.565233 7620 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-vng9w" Mar 18 08:52:24.592235 master-0 kubenswrapper[7620]: I0318 08:52:24.592180 7620 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-jg58c" Mar 18 08:52:24.592235 master-0 kubenswrapper[7620]: I0318 08:52:24.592230 7620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-jg58c" Mar 18 08:52:24.620901 master-0 kubenswrapper[7620]: I0318 08:52:24.620815 7620 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-vng9w" Mar 18 08:52:25.483286 master-0 kubenswrapper[7620]: I0318 
08:52:25.482910 7620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-78szh" Mar 18 08:52:25.574336 master-0 kubenswrapper[7620]: I0318 08:52:25.574282 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-pk9z9" podUID="52e32e2d-33ab-4351-ae8a-80acd6077d70" containerName="registry-server" probeResult="failure" output=< Mar 18 08:52:25.574336 master-0 kubenswrapper[7620]: timeout: failed to connect service ":50051" within 1s Mar 18 08:52:25.574336 master-0 kubenswrapper[7620]: > Mar 18 08:52:25.645060 master-0 kubenswrapper[7620]: I0318 08:52:25.642523 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-jg58c" podUID="f65344cd-8571-4a78-927f-eec46ec1af51" containerName="registry-server" probeResult="failure" output=< Mar 18 08:52:25.645060 master-0 kubenswrapper[7620]: timeout: failed to connect service ":50051" within 1s Mar 18 08:52:25.645060 master-0 kubenswrapper[7620]: > Mar 18 08:52:26.500431 master-0 kubenswrapper[7620]: I0318 08:52:26.500160 7620 scope.go:117] "RemoveContainer" containerID="f1af4029f448c1f86ba8be3065c1894f4cf3dd4cc201c45f5bc2f5936a17b71a" Mar 18 08:52:26.529013 master-0 kubenswrapper[7620]: I0318 08:52:26.528634 7620 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/installer-2-master-0" Mar 18 08:52:26.555208 master-0 kubenswrapper[7620]: I0318 08:52:26.555085 7620 scope.go:117] "RemoveContainer" containerID="a7d9a0f0d0d5483aab47d6b6dab06e78206dae99e396716861bbffb1ded6479d" Mar 18 08:52:26.599734 master-0 kubenswrapper[7620]: I0318 08:52:26.599611 7620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/28d2bb97-ff93-4772-96fd-318fa62e3a87-kube-api-access\") pod \"28d2bb97-ff93-4772-96fd-318fa62e3a87\" (UID: \"28d2bb97-ff93-4772-96fd-318fa62e3a87\") " Mar 18 08:52:26.600963 master-0 kubenswrapper[7620]: I0318 08:52:26.600943 7620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/28d2bb97-ff93-4772-96fd-318fa62e3a87-kubelet-dir\") pod \"28d2bb97-ff93-4772-96fd-318fa62e3a87\" (UID: \"28d2bb97-ff93-4772-96fd-318fa62e3a87\") " Mar 18 08:52:26.601090 master-0 kubenswrapper[7620]: I0318 08:52:26.601074 7620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/28d2bb97-ff93-4772-96fd-318fa62e3a87-var-lock\") pod \"28d2bb97-ff93-4772-96fd-318fa62e3a87\" (UID: \"28d2bb97-ff93-4772-96fd-318fa62e3a87\") " Mar 18 08:52:26.601653 master-0 kubenswrapper[7620]: I0318 08:52:26.601630 7620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/28d2bb97-ff93-4772-96fd-318fa62e3a87-var-lock" (OuterVolumeSpecName: "var-lock") pod "28d2bb97-ff93-4772-96fd-318fa62e3a87" (UID: "28d2bb97-ff93-4772-96fd-318fa62e3a87"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 08:52:26.602490 master-0 kubenswrapper[7620]: I0318 08:52:26.602426 7620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/28d2bb97-ff93-4772-96fd-318fa62e3a87-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "28d2bb97-ff93-4772-96fd-318fa62e3a87" (UID: "28d2bb97-ff93-4772-96fd-318fa62e3a87"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 08:52:26.607683 master-0 kubenswrapper[7620]: I0318 08:52:26.607632 7620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/28d2bb97-ff93-4772-96fd-318fa62e3a87-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "28d2bb97-ff93-4772-96fd-318fa62e3a87" (UID: "28d2bb97-ff93-4772-96fd-318fa62e3a87"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 08:52:26.611997 master-0 kubenswrapper[7620]: I0318 08:52:26.611956 7620 scope.go:117] "RemoveContainer" containerID="6d5b56ac8d5867b35015e9d68581180a0a4fa40297611f5fe968b22c150b744e" Mar 18 08:52:26.612584 master-0 kubenswrapper[7620]: E0318 08:52:26.612548 7620 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6d5b56ac8d5867b35015e9d68581180a0a4fa40297611f5fe968b22c150b744e\": container with ID starting with 6d5b56ac8d5867b35015e9d68581180a0a4fa40297611f5fe968b22c150b744e not found: ID does not exist" containerID="6d5b56ac8d5867b35015e9d68581180a0a4fa40297611f5fe968b22c150b744e" Mar 18 08:52:26.612650 master-0 kubenswrapper[7620]: I0318 08:52:26.612589 7620 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6d5b56ac8d5867b35015e9d68581180a0a4fa40297611f5fe968b22c150b744e"} err="failed to get container status \"6d5b56ac8d5867b35015e9d68581180a0a4fa40297611f5fe968b22c150b744e\": rpc error: code = 
NotFound desc = could not find container \"6d5b56ac8d5867b35015e9d68581180a0a4fa40297611f5fe968b22c150b744e\": container with ID starting with 6d5b56ac8d5867b35015e9d68581180a0a4fa40297611f5fe968b22c150b744e not found: ID does not exist" Mar 18 08:52:26.612650 master-0 kubenswrapper[7620]: I0318 08:52:26.612620 7620 scope.go:117] "RemoveContainer" containerID="f1af4029f448c1f86ba8be3065c1894f4cf3dd4cc201c45f5bc2f5936a17b71a" Mar 18 08:52:26.612949 master-0 kubenswrapper[7620]: E0318 08:52:26.612911 7620 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f1af4029f448c1f86ba8be3065c1894f4cf3dd4cc201c45f5bc2f5936a17b71a\": container with ID starting with f1af4029f448c1f86ba8be3065c1894f4cf3dd4cc201c45f5bc2f5936a17b71a not found: ID does not exist" containerID="f1af4029f448c1f86ba8be3065c1894f4cf3dd4cc201c45f5bc2f5936a17b71a" Mar 18 08:52:26.613011 master-0 kubenswrapper[7620]: I0318 08:52:26.612949 7620 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f1af4029f448c1f86ba8be3065c1894f4cf3dd4cc201c45f5bc2f5936a17b71a"} err="failed to get container status \"f1af4029f448c1f86ba8be3065c1894f4cf3dd4cc201c45f5bc2f5936a17b71a\": rpc error: code = NotFound desc = could not find container \"f1af4029f448c1f86ba8be3065c1894f4cf3dd4cc201c45f5bc2f5936a17b71a\": container with ID starting with f1af4029f448c1f86ba8be3065c1894f4cf3dd4cc201c45f5bc2f5936a17b71a not found: ID does not exist" Mar 18 08:52:26.613011 master-0 kubenswrapper[7620]: I0318 08:52:26.612969 7620 scope.go:117] "RemoveContainer" containerID="a7d9a0f0d0d5483aab47d6b6dab06e78206dae99e396716861bbffb1ded6479d" Mar 18 08:52:26.613463 master-0 kubenswrapper[7620]: E0318 08:52:26.613429 7620 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a7d9a0f0d0d5483aab47d6b6dab06e78206dae99e396716861bbffb1ded6479d\": container with ID starting 
with a7d9a0f0d0d5483aab47d6b6dab06e78206dae99e396716861bbffb1ded6479d not found: ID does not exist" containerID="a7d9a0f0d0d5483aab47d6b6dab06e78206dae99e396716861bbffb1ded6479d" Mar 18 08:52:26.613519 master-0 kubenswrapper[7620]: I0318 08:52:26.613464 7620 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a7d9a0f0d0d5483aab47d6b6dab06e78206dae99e396716861bbffb1ded6479d"} err="failed to get container status \"a7d9a0f0d0d5483aab47d6b6dab06e78206dae99e396716861bbffb1ded6479d\": rpc error: code = NotFound desc = could not find container \"a7d9a0f0d0d5483aab47d6b6dab06e78206dae99e396716861bbffb1ded6479d\": container with ID starting with a7d9a0f0d0d5483aab47d6b6dab06e78206dae99e396716861bbffb1ded6479d not found: ID does not exist" Mar 18 08:52:26.613519 master-0 kubenswrapper[7620]: I0318 08:52:26.613481 7620 scope.go:117] "RemoveContainer" containerID="6d5b56ac8d5867b35015e9d68581180a0a4fa40297611f5fe968b22c150b744e" Mar 18 08:52:26.613812 master-0 kubenswrapper[7620]: I0318 08:52:26.613767 7620 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6d5b56ac8d5867b35015e9d68581180a0a4fa40297611f5fe968b22c150b744e"} err="failed to get container status \"6d5b56ac8d5867b35015e9d68581180a0a4fa40297611f5fe968b22c150b744e\": rpc error: code = NotFound desc = could not find container \"6d5b56ac8d5867b35015e9d68581180a0a4fa40297611f5fe968b22c150b744e\": container with ID starting with 6d5b56ac8d5867b35015e9d68581180a0a4fa40297611f5fe968b22c150b744e not found: ID does not exist" Mar 18 08:52:26.613890 master-0 kubenswrapper[7620]: I0318 08:52:26.613816 7620 scope.go:117] "RemoveContainer" containerID="f1af4029f448c1f86ba8be3065c1894f4cf3dd4cc201c45f5bc2f5936a17b71a" Mar 18 08:52:26.614334 master-0 kubenswrapper[7620]: I0318 08:52:26.614293 7620 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"f1af4029f448c1f86ba8be3065c1894f4cf3dd4cc201c45f5bc2f5936a17b71a"} err="failed to get container status \"f1af4029f448c1f86ba8be3065c1894f4cf3dd4cc201c45f5bc2f5936a17b71a\": rpc error: code = NotFound desc = could not find container \"f1af4029f448c1f86ba8be3065c1894f4cf3dd4cc201c45f5bc2f5936a17b71a\": container with ID starting with f1af4029f448c1f86ba8be3065c1894f4cf3dd4cc201c45f5bc2f5936a17b71a not found: ID does not exist" Mar 18 08:52:26.614334 master-0 kubenswrapper[7620]: I0318 08:52:26.614324 7620 scope.go:117] "RemoveContainer" containerID="a7d9a0f0d0d5483aab47d6b6dab06e78206dae99e396716861bbffb1ded6479d" Mar 18 08:52:26.614630 master-0 kubenswrapper[7620]: I0318 08:52:26.614602 7620 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a7d9a0f0d0d5483aab47d6b6dab06e78206dae99e396716861bbffb1ded6479d"} err="failed to get container status \"a7d9a0f0d0d5483aab47d6b6dab06e78206dae99e396716861bbffb1ded6479d\": rpc error: code = NotFound desc = could not find container \"a7d9a0f0d0d5483aab47d6b6dab06e78206dae99e396716861bbffb1ded6479d\": container with ID starting with a7d9a0f0d0d5483aab47d6b6dab06e78206dae99e396716861bbffb1ded6479d not found: ID does not exist" Mar 18 08:52:26.703175 master-0 kubenswrapper[7620]: I0318 08:52:26.703077 7620 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/28d2bb97-ff93-4772-96fd-318fa62e3a87-kube-api-access\") on node \"master-0\" DevicePath \"\"" Mar 18 08:52:26.703282 master-0 kubenswrapper[7620]: I0318 08:52:26.703166 7620 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/28d2bb97-ff93-4772-96fd-318fa62e3a87-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Mar 18 08:52:26.703282 master-0 kubenswrapper[7620]: I0318 08:52:26.703213 7620 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: 
\"kubernetes.io/host-path/28d2bb97-ff93-4772-96fd-318fa62e3a87-var-lock\") on node \"master-0\" DevicePath \"\"" Mar 18 08:52:27.456342 master-0 kubenswrapper[7620]: I0318 08:52:27.456271 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-2-master-0" event={"ID":"28d2bb97-ff93-4772-96fd-318fa62e3a87","Type":"ContainerDied","Data":"a0506e567232af6a1d871e8bdc27ad4000f63b8618b9625c8e1c8682da50383b"} Mar 18 08:52:27.456342 master-0 kubenswrapper[7620]: I0318 08:52:27.456321 7620 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-2-master-0" Mar 18 08:52:27.457298 master-0 kubenswrapper[7620]: I0318 08:52:27.456321 7620 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a0506e567232af6a1d871e8bdc27ad4000f63b8618b9625c8e1c8682da50383b" Mar 18 08:52:27.464096 master-0 kubenswrapper[7620]: I0318 08:52:27.464024 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-z6nw9" event={"ID":"b9768e50-c883-47b0-b319-851fa53ac19a","Type":"ContainerStarted","Data":"a762b49ada53b6e0ad3eb3e9d39dde132b23273f1ff019153b051c1759e3813e"} Mar 18 08:52:27.467108 master-0 kubenswrapper[7620]: I0318 08:52:27.467075 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-nbdls" event={"ID":"e1b48980-eb4b-4f86-a84e-8f5ebacbc2d4","Type":"ContainerStarted","Data":"a4a7a914bf20f9ad500ab6630371fd31556d0d11434f50a31e9f9920f8e3d3e3"} Mar 18 08:52:27.470995 master-0 kubenswrapper[7620]: I0318 08:52:27.470883 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"f18b861b5b8c9ec3c738abc65d93de21","Type":"ContainerStarted","Data":"fd10dceb0449c26d02e61b6f927511258c3ac41149782386de78284480c8fc4d"} Mar 18 
08:52:27.470995 master-0 kubenswrapper[7620]: I0318 08:52:27.470912 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"f18b861b5b8c9ec3c738abc65d93de21","Type":"ContainerStarted","Data":"c06f0e093df7004eb449f4d313d5c8483347978fe6cb23024b5393882adf8f4a"} Mar 18 08:52:27.786978 master-0 kubenswrapper[7620]: I0318 08:52:27.786872 7620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podStartSLOduration=6.786838359 podStartE2EDuration="6.786838359s" podCreationTimestamp="2026-03-18 08:52:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 08:52:27.784398571 +0000 UTC m=+211.779180363" watchObservedRunningTime="2026-03-18 08:52:27.786838359 +0000 UTC m=+211.781620111" Mar 18 08:52:27.788081 master-0 kubenswrapper[7620]: I0318 08:52:27.788030 7620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-z6nw9" podStartSLOduration=2.727737952 podStartE2EDuration="12.788020762s" podCreationTimestamp="2026-03-18 08:52:15 +0000 UTC" firstStartedPulling="2026-03-18 08:52:16.558838713 +0000 UTC m=+200.553620465" lastFinishedPulling="2026-03-18 08:52:26.619121523 +0000 UTC m=+210.613903275" observedRunningTime="2026-03-18 08:52:27.763158887 +0000 UTC m=+211.757940679" watchObservedRunningTime="2026-03-18 08:52:27.788020762 +0000 UTC m=+211.782802514" Mar 18 08:52:27.808753 master-0 kubenswrapper[7620]: I0318 08:52:27.808652 7620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-nbdls" podStartSLOduration=6.568361462 podStartE2EDuration="12.808621598s" podCreationTimestamp="2026-03-18 08:52:15 +0000 UTC" 
firstStartedPulling="2026-03-18 08:52:15.53569315 +0000 UTC m=+199.530474902" lastFinishedPulling="2026-03-18 08:52:21.775953286 +0000 UTC m=+205.770735038" observedRunningTime="2026-03-18 08:52:27.80474485 +0000 UTC m=+211.799526662" watchObservedRunningTime="2026-03-18 08:52:27.808621598 +0000 UTC m=+211.803403380" Mar 18 08:52:31.777611 master-0 kubenswrapper[7620]: I0318 08:52:31.777523 7620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 08:52:31.777611 master-0 kubenswrapper[7620]: I0318 08:52:31.777624 7620 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 08:52:31.779085 master-0 kubenswrapper[7620]: I0318 08:52:31.779034 7620 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 08:52:31.779246 master-0 kubenswrapper[7620]: I0318 08:52:31.779095 7620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 08:52:31.786061 master-0 kubenswrapper[7620]: I0318 08:52:31.785994 7620 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 08:52:31.787603 master-0 kubenswrapper[7620]: I0318 08:52:31.787536 7620 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 08:52:32.514702 master-0 kubenswrapper[7620]: I0318 08:52:32.514642 7620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 08:52:32.516043 master-0 kubenswrapper[7620]: I0318 08:52:32.515996 7620 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 08:52:32.765593 master-0 kubenswrapper[7620]: I0318 08:52:32.765437 7620 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication-operator_authentication-operator-5885bfd7f4-5g8tz_c110b293-2c6b-496b-b015-23aada98cb4b/authentication-operator/0.log" Mar 18 08:52:32.777212 master-0 kubenswrapper[7620]: I0318 08:52:32.777138 7620 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication-operator_authentication-operator-5885bfd7f4-5g8tz_c110b293-2c6b-496b-b015-23aada98cb4b/authentication-operator/1.log" Mar 18 08:52:33.100001 master-0 kubenswrapper[7620]: I0318 08:52:33.099814 7620 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-oauth-apiserver_apiserver-556c8fbcff-5shs8_2700f537-8f31-4380-a527-3e697a8122cc/fix-audit-permissions/0.log" Mar 18 08:52:33.306883 master-0 kubenswrapper[7620]: I0318 08:52:33.306807 7620 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-oauth-apiserver_apiserver-556c8fbcff-5shs8_2700f537-8f31-4380-a527-3e697a8122cc/oauth-apiserver/0.log" Mar 18 08:52:33.507576 master-0 kubenswrapper[7620]: I0318 08:52:33.507495 7620 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-autoscaler-operator-866dc4744-lxj7x_ffc5379c-651f-490c-90f4-1285b9093596/kube-rbac-proxy/0.log" Mar 18 08:52:33.701948 master-0 kubenswrapper[7620]: I0318 08:52:33.701905 7620 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-autoscaler-operator-866dc4744-lxj7x_ffc5379c-651f-490c-90f4-1285b9093596/cluster-autoscaler-operator/0.log" Mar 18 08:52:33.900936 master-0 kubenswrapper[7620]: I0318 08:52:33.900762 7620 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-6f69995874-cf6qn_97730ec2-e6f1-4f8c-b85c-3c10623d06ce/cluster-baremetal-operator/0.log" 
Mar 18 08:52:34.104125 master-0 kubenswrapper[7620]: I0318 08:52:34.104042 7620 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-6f69995874-cf6qn_97730ec2-e6f1-4f8c-b85c-3c10623d06ce/baremetal-kube-rbac-proxy/0.log" Mar 18 08:52:34.302454 master-0 kubenswrapper[7620]: I0318 08:52:34.301944 7620 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-6f97756bc8-z9n9c_d6fe8ee6-737e-438a-8d9d-1ec712f6bacf/control-plane-machine-set-operator/0.log" Mar 18 08:52:34.507127 master-0 kubenswrapper[7620]: I0318 08:52:34.507032 7620 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-6fbb6cf6f9-z6nw9_b9768e50-c883-47b0-b319-851fa53ac19a/kube-rbac-proxy/0.log" Mar 18 08:52:34.599588 master-0 kubenswrapper[7620]: I0318 08:52:34.599466 7620 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-pk9z9" Mar 18 08:52:34.665998 master-0 kubenswrapper[7620]: I0318 08:52:34.665928 7620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-vng9w" Mar 18 08:52:34.669240 master-0 kubenswrapper[7620]: I0318 08:52:34.669201 7620 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-jg58c" Mar 18 08:52:34.673909 master-0 kubenswrapper[7620]: I0318 08:52:34.673223 7620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-pk9z9" Mar 18 08:52:34.702055 master-0 kubenswrapper[7620]: I0318 08:52:34.701970 7620 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-6fbb6cf6f9-z6nw9_b9768e50-c883-47b0-b319-851fa53ac19a/machine-api-operator/0.log" Mar 18 08:52:34.727702 master-0 kubenswrapper[7620]: I0318 08:52:34.727632 7620 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-jg58c" Mar 18 08:52:34.904498 master-0 kubenswrapper[7620]: I0318 08:52:34.904326 7620 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd-operator_etcd-operator-8544cbcf9c-f4jvq_939efa41-8f40-4f91-bee4-0425aead9760/etcd-operator/0.log" Mar 18 08:52:35.101054 master-0 kubenswrapper[7620]: I0318 08:52:35.100953 7620 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd-operator_etcd-operator-8544cbcf9c-f4jvq_939efa41-8f40-4f91-bee4-0425aead9760/etcd-operator/1.log" Mar 18 08:52:35.298641 master-0 kubenswrapper[7620]: I0318 08:52:35.298580 7620 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_24b4ed170d527099878cb5fdd508a2fb/setup/0.log" Mar 18 08:52:35.498196 master-0 kubenswrapper[7620]: I0318 08:52:35.498115 7620 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_24b4ed170d527099878cb5fdd508a2fb/etcd-ensure-env-vars/0.log" Mar 18 08:52:35.700497 master-0 kubenswrapper[7620]: I0318 08:52:35.700286 7620 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_24b4ed170d527099878cb5fdd508a2fb/etcd-resources-copy/0.log" Mar 18 08:52:35.900199 master-0 kubenswrapper[7620]: I0318 08:52:35.900115 7620 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_24b4ed170d527099878cb5fdd508a2fb/etcdctl/0.log" Mar 18 08:52:36.105807 master-0 kubenswrapper[7620]: I0318 08:52:36.105732 7620 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_24b4ed170d527099878cb5fdd508a2fb/etcd/0.log" Mar 18 08:52:36.304201 master-0 kubenswrapper[7620]: I0318 08:52:36.304137 7620 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_24b4ed170d527099878cb5fdd508a2fb/etcd-metrics/0.log" Mar 18 08:52:36.500602 master-0 kubenswrapper[7620]: I0318 
08:52:36.500503 7620 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_24b4ed170d527099878cb5fdd508a2fb/etcd-readyz/0.log" Mar 18 08:52:36.700080 master-0 kubenswrapper[7620]: I0318 08:52:36.699985 7620 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_24b4ed170d527099878cb5fdd508a2fb/etcd-rev/0.log" Mar 18 08:52:36.904403 master-0 kubenswrapper[7620]: I0318 08:52:36.904165 7620 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_installer-1-master-0_1ecff6b2-dbd4-4366-873b-2170d0b76c0f/installer/0.log" Mar 18 08:52:37.111199 master-0 kubenswrapper[7620]: I0318 08:52:37.111107 7620 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver-operator_kube-apiserver-operator-8b68b9d9b-jshg7_5982111d-f4c6-4335-9b40-3142758fc2bc/kube-apiserver-operator/0.log" Mar 18 08:52:37.301571 master-0 kubenswrapper[7620]: I0318 08:52:37.301508 7620 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver-operator_kube-apiserver-operator-8b68b9d9b-jshg7_5982111d-f4c6-4335-9b40-3142758fc2bc/kube-apiserver-operator/1.log" Mar 18 08:52:37.510648 master-0 kubenswrapper[7620]: I0318 08:52:37.510576 7620 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_bootstrap-kube-apiserver-master-0_49fac1b46a11e49501805e891baae4a9/setup/0.log" Mar 18 08:52:37.702631 master-0 kubenswrapper[7620]: I0318 08:52:37.702475 7620 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_bootstrap-kube-apiserver-master-0_49fac1b46a11e49501805e891baae4a9/kube-apiserver/0.log" Mar 18 08:52:37.897746 master-0 kubenswrapper[7620]: I0318 08:52:37.897662 7620 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_bootstrap-kube-apiserver-master-0_49fac1b46a11e49501805e891baae4a9/kube-apiserver-insecure-readyz/0.log" Mar 18 08:52:38.106699 master-0 kubenswrapper[7620]: I0318 
08:52:38.106654 7620 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-1-master-0_1edfa49b-d0e7-4324-aace-b115b41ddae0/installer/0.log" Mar 18 08:52:38.304286 master-0 kubenswrapper[7620]: I0318 08:52:38.304217 7620 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_installer-2-master-0_28d2bb97-ff93-4772-96fd-318fa62e3a87/installer/0.log" Mar 18 08:52:38.510878 master-0 kubenswrapper[7620]: I0318 08:52:38.510790 7620 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_f18b861b5b8c9ec3c738abc65d93de21/kube-controller-manager/0.log" Mar 18 08:52:39.144438 master-0 kubenswrapper[7620]: I0318 08:52:39.144379 7620 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_f18b861b5b8c9ec3c738abc65d93de21/cluster-policy-controller/0.log" Mar 18 08:52:39.959274 master-0 kubenswrapper[7620]: I0318 08:52:39.957378 7620 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_f18b861b5b8c9ec3c738abc65d93de21/kube-controller-manager-cert-syncer/0.log" Mar 18 08:52:39.968316 master-0 kubenswrapper[7620]: I0318 08:52:39.968262 7620 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_f18b861b5b8c9ec3c738abc65d93de21/kube-controller-manager-recovery-controller/0.log" Mar 18 08:52:39.984714 master-0 kubenswrapper[7620]: I0318 08:52:39.984633 7620 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager-operator_kube-controller-manager-operator-ff989d6cc-fxn82_260c8aa5-a288-4ee8-b671-f97e90a2f39c/kube-controller-manager-operator/0.log" Mar 18 08:52:39.993259 master-0 kubenswrapper[7620]: I0318 08:52:39.993214 7620 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-controller-manager-operator_kube-controller-manager-operator-ff989d6cc-fxn82_260c8aa5-a288-4ee8-b671-f97e90a2f39c/kube-controller-manager-operator/1.log" Mar 18 08:52:40.004562 master-0 kubenswrapper[7620]: I0318 08:52:40.004509 7620 log.go:25] "Finished parsing log file" path="/var/log/pods/kube-system_bootstrap-kube-scheduler-master-0_c83737980b9ee109184b1d78e942cf36/kube-scheduler/0.log" Mar 18 08:52:40.016782 master-0 kubenswrapper[7620]: I0318 08:52:40.016723 7620 log.go:25] "Finished parsing log file" path="/var/log/pods/kube-system_bootstrap-kube-scheduler-master-0_c83737980b9ee109184b1d78e942cf36/kube-scheduler/1.log" Mar 18 08:52:40.380148 master-0 kubenswrapper[7620]: I0318 08:52:40.380102 7620 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-3-master-0_c6fb9336-3f19-4220-93ee-a5a61e26340b/installer/0.log" Mar 18 08:52:41.082265 master-0 kubenswrapper[7620]: I0318 08:52:41.082212 7620 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler-operator_openshift-kube-scheduler-operator-dddff6458-9p4bb_8a6ab2be-d018-4fd5-bfbb-6b88aec28663/kube-scheduler-operator-container/0.log" Mar 18 08:52:41.093047 master-0 kubenswrapper[7620]: I0318 08:52:41.092997 7620 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler-operator_openshift-kube-scheduler-operator-dddff6458-9p4bb_8a6ab2be-d018-4fd5-bfbb-6b88aec28663/kube-scheduler-operator-container/1.log" Mar 18 08:52:41.106905 master-0 kubenswrapper[7620]: I0318 08:52:41.106813 7620 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-apiserver-operator_openshift-apiserver-operator-d65958b8-w4t7x_fcf89a76-7a94-46d3-853e-68e986563764/openshift-apiserver-operator/0.log" Mar 18 08:52:41.115167 master-0 kubenswrapper[7620]: I0318 08:52:41.115126 7620 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-apiserver-operator_openshift-apiserver-operator-d65958b8-w4t7x_fcf89a76-7a94-46d3-853e-68e986563764/openshift-apiserver-operator/1.log" Mar 18 08:52:41.122544 master-0 kubenswrapper[7620]: I0318 08:52:41.122490 7620 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-apiserver_apiserver-7bb69b5c5c-djsr9_b5f9f50b-e7b4-4b81-864b-349303f21447/fix-audit-permissions/0.log" Mar 18 08:52:41.307515 master-0 kubenswrapper[7620]: I0318 08:52:41.307456 7620 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-apiserver_apiserver-7bb69b5c5c-djsr9_b5f9f50b-e7b4-4b81-864b-349303f21447/openshift-apiserver/0.log" Mar 18 08:52:41.505699 master-0 kubenswrapper[7620]: I0318 08:52:41.503259 7620 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-apiserver_apiserver-7bb69b5c5c-djsr9_b5f9f50b-e7b4-4b81-864b-349303f21447/openshift-apiserver-check-endpoints/0.log" Mar 18 08:52:41.519231 master-0 kubenswrapper[7620]: I0318 08:52:41.519172 7620 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-nbdls"] Mar 18 08:52:41.519524 master-0 kubenswrapper[7620]: I0318 08:52:41.519480 7620 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-nbdls" podUID="e1b48980-eb4b-4f86-a84e-8f5ebacbc2d4" containerName="cluster-cloud-controller-manager" containerID="cri-o://6f7848a6751e394406efdc8a58af604ae4375a6e45f4df8cf530ddc9c795a680" gracePeriod=30 Mar 18 08:52:41.519672 master-0 kubenswrapper[7620]: I0318 08:52:41.519555 7620 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-nbdls" podUID="e1b48980-eb4b-4f86-a84e-8f5ebacbc2d4" containerName="kube-rbac-proxy" 
containerID="cri-o://a4a7a914bf20f9ad500ab6630371fd31556d0d11434f50a31e9f9920f8e3d3e3" gracePeriod=30 Mar 18 08:52:41.521137 master-0 kubenswrapper[7620]: I0318 08:52:41.519882 7620 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-nbdls" podUID="e1b48980-eb4b-4f86-a84e-8f5ebacbc2d4" containerName="config-sync-controllers" containerID="cri-o://66311b69f7911e385ca69572abbbe9c8b5a148d66b7d8e0f3f0aaf9bbff927db" gracePeriod=30 Mar 18 08:52:41.691713 master-0 kubenswrapper[7620]: I0318 08:52:41.691671 7620 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-nbdls" Mar 18 08:52:41.703750 master-0 kubenswrapper[7620]: I0318 08:52:41.703600 7620 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd-operator_etcd-operator-8544cbcf9c-f4jvq_939efa41-8f40-4f91-bee4-0425aead9760/etcd-operator/0.log" Mar 18 08:52:41.743123 master-0 kubenswrapper[7620]: I0318 08:52:41.743066 7620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x8qnj\" (UniqueName: \"kubernetes.io/projected/e1b48980-eb4b-4f86-a84e-8f5ebacbc2d4-kube-api-access-x8qnj\") pod \"e1b48980-eb4b-4f86-a84e-8f5ebacbc2d4\" (UID: \"e1b48980-eb4b-4f86-a84e-8f5ebacbc2d4\") " Mar 18 08:52:41.743376 master-0 kubenswrapper[7620]: I0318 08:52:41.743150 7620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/e1b48980-eb4b-4f86-a84e-8f5ebacbc2d4-images\") pod \"e1b48980-eb4b-4f86-a84e-8f5ebacbc2d4\" (UID: \"e1b48980-eb4b-4f86-a84e-8f5ebacbc2d4\") " Mar 18 08:52:41.743376 master-0 kubenswrapper[7620]: I0318 08:52:41.743192 7620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cloud-controller-manager-operator-tls\" 
(UniqueName: \"kubernetes.io/secret/e1b48980-eb4b-4f86-a84e-8f5ebacbc2d4-cloud-controller-manager-operator-tls\") pod \"e1b48980-eb4b-4f86-a84e-8f5ebacbc2d4\" (UID: \"e1b48980-eb4b-4f86-a84e-8f5ebacbc2d4\") " Mar 18 08:52:41.743376 master-0 kubenswrapper[7620]: I0318 08:52:41.743231 7620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/e1b48980-eb4b-4f86-a84e-8f5ebacbc2d4-host-etc-kube\") pod \"e1b48980-eb4b-4f86-a84e-8f5ebacbc2d4\" (UID: \"e1b48980-eb4b-4f86-a84e-8f5ebacbc2d4\") " Mar 18 08:52:41.743376 master-0 kubenswrapper[7620]: I0318 08:52:41.743328 7620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e1b48980-eb4b-4f86-a84e-8f5ebacbc2d4-auth-proxy-config\") pod \"e1b48980-eb4b-4f86-a84e-8f5ebacbc2d4\" (UID: \"e1b48980-eb4b-4f86-a84e-8f5ebacbc2d4\") " Mar 18 08:52:41.743550 master-0 kubenswrapper[7620]: I0318 08:52:41.743475 7620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e1b48980-eb4b-4f86-a84e-8f5ebacbc2d4-host-etc-kube" (OuterVolumeSpecName: "host-etc-kube") pod "e1b48980-eb4b-4f86-a84e-8f5ebacbc2d4" (UID: "e1b48980-eb4b-4f86-a84e-8f5ebacbc2d4"). InnerVolumeSpecName "host-etc-kube". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 08:52:41.743766 master-0 kubenswrapper[7620]: I0318 08:52:41.743713 7620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e1b48980-eb4b-4f86-a84e-8f5ebacbc2d4-images" (OuterVolumeSpecName: "images") pod "e1b48980-eb4b-4f86-a84e-8f5ebacbc2d4" (UID: "e1b48980-eb4b-4f86-a84e-8f5ebacbc2d4"). InnerVolumeSpecName "images". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 08:52:41.743924 master-0 kubenswrapper[7620]: I0318 08:52:41.743875 7620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e1b48980-eb4b-4f86-a84e-8f5ebacbc2d4-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "e1b48980-eb4b-4f86-a84e-8f5ebacbc2d4" (UID: "e1b48980-eb4b-4f86-a84e-8f5ebacbc2d4"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 08:52:41.743924 master-0 kubenswrapper[7620]: I0318 08:52:41.743889 7620 reconciler_common.go:293] "Volume detached for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/e1b48980-eb4b-4f86-a84e-8f5ebacbc2d4-host-etc-kube\") on node \"master-0\" DevicePath \"\"" Mar 18 08:52:41.746547 master-0 kubenswrapper[7620]: I0318 08:52:41.746500 7620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1b48980-eb4b-4f86-a84e-8f5ebacbc2d4-cloud-controller-manager-operator-tls" (OuterVolumeSpecName: "cloud-controller-manager-operator-tls") pod "e1b48980-eb4b-4f86-a84e-8f5ebacbc2d4" (UID: "e1b48980-eb4b-4f86-a84e-8f5ebacbc2d4"). InnerVolumeSpecName "cloud-controller-manager-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 08:52:41.746633 master-0 kubenswrapper[7620]: I0318 08:52:41.746591 7620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e1b48980-eb4b-4f86-a84e-8f5ebacbc2d4-kube-api-access-x8qnj" (OuterVolumeSpecName: "kube-api-access-x8qnj") pod "e1b48980-eb4b-4f86-a84e-8f5ebacbc2d4" (UID: "e1b48980-eb4b-4f86-a84e-8f5ebacbc2d4"). InnerVolumeSpecName "kube-api-access-x8qnj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 08:52:41.845382 master-0 kubenswrapper[7620]: I0318 08:52:41.845249 7620 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x8qnj\" (UniqueName: \"kubernetes.io/projected/e1b48980-eb4b-4f86-a84e-8f5ebacbc2d4-kube-api-access-x8qnj\") on node \"master-0\" DevicePath \"\"" Mar 18 08:52:41.845382 master-0 kubenswrapper[7620]: I0318 08:52:41.845316 7620 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/e1b48980-eb4b-4f86-a84e-8f5ebacbc2d4-images\") on node \"master-0\" DevicePath \"\"" Mar 18 08:52:41.845382 master-0 kubenswrapper[7620]: I0318 08:52:41.845338 7620 reconciler_common.go:293] "Volume detached for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/e1b48980-eb4b-4f86-a84e-8f5ebacbc2d4-cloud-controller-manager-operator-tls\") on node \"master-0\" DevicePath \"\"" Mar 18 08:52:41.845382 master-0 kubenswrapper[7620]: I0318 08:52:41.845359 7620 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e1b48980-eb4b-4f86-a84e-8f5ebacbc2d4-auth-proxy-config\") on node \"master-0\" DevicePath \"\"" Mar 18 08:52:41.900766 master-0 kubenswrapper[7620]: I0318 08:52:41.900681 7620 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd-operator_etcd-operator-8544cbcf9c-f4jvq_939efa41-8f40-4f91-bee4-0425aead9760/etcd-operator/1.log" Mar 18 08:52:42.104222 master-0 kubenswrapper[7620]: I0318 08:52:42.104088 7620 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-8c94f4649-r758j_772bc250-2e57-4ce0-883c-d44281fcb0be/openshift-controller-manager-operator/0.log" Mar 18 08:52:42.299835 master-0 kubenswrapper[7620]: I0318 08:52:42.299776 7620 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-8c94f4649-r758j_772bc250-2e57-4ce0-883c-d44281fcb0be/openshift-controller-manager-operator/1.log" Mar 18 08:52:42.510512 master-0 kubenswrapper[7620]: I0318 08:52:42.510439 7620 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager_controller-manager-6448dc88d8-cnd9q_4cc14de2-59e8-49e1-9eeb-f87c5e9d8a75/controller-manager/0.log" Mar 18 08:52:42.624765 master-0 kubenswrapper[7620]: I0318 08:52:42.624720 7620 generic.go:334] "Generic (PLEG): container finished" podID="e1b48980-eb4b-4f86-a84e-8f5ebacbc2d4" containerID="a4a7a914bf20f9ad500ab6630371fd31556d0d11434f50a31e9f9920f8e3d3e3" exitCode=0 Mar 18 08:52:42.625068 master-0 kubenswrapper[7620]: I0318 08:52:42.625045 7620 generic.go:334] "Generic (PLEG): container finished" podID="e1b48980-eb4b-4f86-a84e-8f5ebacbc2d4" containerID="66311b69f7911e385ca69572abbbe9c8b5a148d66b7d8e0f3f0aaf9bbff927db" exitCode=0 Mar 18 08:52:42.625173 master-0 kubenswrapper[7620]: I0318 08:52:42.625156 7620 generic.go:334] "Generic (PLEG): container finished" podID="e1b48980-eb4b-4f86-a84e-8f5ebacbc2d4" containerID="6f7848a6751e394406efdc8a58af604ae4375a6e45f4df8cf530ddc9c795a680" exitCode=0 Mar 18 08:52:42.625264 master-0 kubenswrapper[7620]: I0318 08:52:42.624829 7620 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-nbdls" Mar 18 08:52:42.625445 master-0 kubenswrapper[7620]: I0318 08:52:42.624839 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-nbdls" event={"ID":"e1b48980-eb4b-4f86-a84e-8f5ebacbc2d4","Type":"ContainerDied","Data":"a4a7a914bf20f9ad500ab6630371fd31556d0d11434f50a31e9f9920f8e3d3e3"} Mar 18 08:52:42.625521 master-0 kubenswrapper[7620]: I0318 08:52:42.625488 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-nbdls" event={"ID":"e1b48980-eb4b-4f86-a84e-8f5ebacbc2d4","Type":"ContainerDied","Data":"66311b69f7911e385ca69572abbbe9c8b5a148d66b7d8e0f3f0aaf9bbff927db"} Mar 18 08:52:42.625669 master-0 kubenswrapper[7620]: I0318 08:52:42.625513 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-nbdls" event={"ID":"e1b48980-eb4b-4f86-a84e-8f5ebacbc2d4","Type":"ContainerDied","Data":"6f7848a6751e394406efdc8a58af604ae4375a6e45f4df8cf530ddc9c795a680"} Mar 18 08:52:42.625669 master-0 kubenswrapper[7620]: I0318 08:52:42.625536 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-nbdls" event={"ID":"e1b48980-eb4b-4f86-a84e-8f5ebacbc2d4","Type":"ContainerDied","Data":"ddac4a396028feae59dbc61cc740a3f14012ee9a158265e6a666c8a8e0d16068"} Mar 18 08:52:42.625669 master-0 kubenswrapper[7620]: I0318 08:52:42.625569 7620 scope.go:117] "RemoveContainer" containerID="a4a7a914bf20f9ad500ab6630371fd31556d0d11434f50a31e9f9920f8e3d3e3" Mar 18 08:52:42.647929 master-0 kubenswrapper[7620]: I0318 08:52:42.647875 7620 scope.go:117] "RemoveContainer" 
containerID="66311b69f7911e385ca69572abbbe9c8b5a148d66b7d8e0f3f0aaf9bbff927db" Mar 18 08:52:42.652454 master-0 kubenswrapper[7620]: I0318 08:52:42.652399 7620 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-nbdls"] Mar 18 08:52:42.654961 master-0 kubenswrapper[7620]: I0318 08:52:42.654774 7620 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-nbdls"] Mar 18 08:52:42.667649 master-0 kubenswrapper[7620]: I0318 08:52:42.667572 7620 scope.go:117] "RemoveContainer" containerID="6f7848a6751e394406efdc8a58af604ae4375a6e45f4df8cf530ddc9c795a680" Mar 18 08:52:42.691940 master-0 kubenswrapper[7620]: I0318 08:52:42.691719 7620 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-9xtls"] Mar 18 08:52:42.692165 master-0 kubenswrapper[7620]: E0318 08:52:42.692022 7620 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1b48980-eb4b-4f86-a84e-8f5ebacbc2d4" containerName="kube-rbac-proxy" Mar 18 08:52:42.692165 master-0 kubenswrapper[7620]: I0318 08:52:42.692040 7620 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1b48980-eb4b-4f86-a84e-8f5ebacbc2d4" containerName="kube-rbac-proxy" Mar 18 08:52:42.692165 master-0 kubenswrapper[7620]: E0318 08:52:42.692076 7620 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="28d2bb97-ff93-4772-96fd-318fa62e3a87" containerName="installer" Mar 18 08:52:42.692165 master-0 kubenswrapper[7620]: I0318 08:52:42.692085 7620 state_mem.go:107] "Deleted CPUSet assignment" podUID="28d2bb97-ff93-4772-96fd-318fa62e3a87" containerName="installer" Mar 18 08:52:42.692165 master-0 kubenswrapper[7620]: E0318 08:52:42.692101 7620 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="e1b48980-eb4b-4f86-a84e-8f5ebacbc2d4" containerName="config-sync-controllers" Mar 18 08:52:42.692165 master-0 kubenswrapper[7620]: I0318 08:52:42.692111 7620 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1b48980-eb4b-4f86-a84e-8f5ebacbc2d4" containerName="config-sync-controllers" Mar 18 08:52:42.692165 master-0 kubenswrapper[7620]: E0318 08:52:42.692129 7620 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1b48980-eb4b-4f86-a84e-8f5ebacbc2d4" containerName="cluster-cloud-controller-manager" Mar 18 08:52:42.692165 master-0 kubenswrapper[7620]: I0318 08:52:42.692137 7620 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1b48980-eb4b-4f86-a84e-8f5ebacbc2d4" containerName="cluster-cloud-controller-manager" Mar 18 08:52:42.692471 master-0 kubenswrapper[7620]: I0318 08:52:42.692252 7620 memory_manager.go:354] "RemoveStaleState removing state" podUID="28d2bb97-ff93-4772-96fd-318fa62e3a87" containerName="installer" Mar 18 08:52:42.692471 master-0 kubenswrapper[7620]: I0318 08:52:42.692275 7620 memory_manager.go:354] "RemoveStaleState removing state" podUID="e1b48980-eb4b-4f86-a84e-8f5ebacbc2d4" containerName="cluster-cloud-controller-manager" Mar 18 08:52:42.692471 master-0 kubenswrapper[7620]: I0318 08:52:42.692296 7620 memory_manager.go:354] "RemoveStaleState removing state" podUID="e1b48980-eb4b-4f86-a84e-8f5ebacbc2d4" containerName="config-sync-controllers" Mar 18 08:52:42.692471 master-0 kubenswrapper[7620]: I0318 08:52:42.692307 7620 memory_manager.go:354] "RemoveStaleState removing state" podUID="e1b48980-eb4b-4f86-a84e-8f5ebacbc2d4" containerName="kube-rbac-proxy" Mar 18 08:52:42.693031 master-0 kubenswrapper[7620]: I0318 08:52:42.693006 7620 scope.go:117] "RemoveContainer" containerID="a4a7a914bf20f9ad500ab6630371fd31556d0d11434f50a31e9f9920f8e3d3e3" Mar 18 08:52:42.693732 master-0 kubenswrapper[7620]: E0318 08:52:42.693685 7620 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = 
could not find container \"a4a7a914bf20f9ad500ab6630371fd31556d0d11434f50a31e9f9920f8e3d3e3\": container with ID starting with a4a7a914bf20f9ad500ab6630371fd31556d0d11434f50a31e9f9920f8e3d3e3 not found: ID does not exist" containerID="a4a7a914bf20f9ad500ab6630371fd31556d0d11434f50a31e9f9920f8e3d3e3" Mar 18 08:52:42.693792 master-0 kubenswrapper[7620]: I0318 08:52:42.693745 7620 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a4a7a914bf20f9ad500ab6630371fd31556d0d11434f50a31e9f9920f8e3d3e3"} err="failed to get container status \"a4a7a914bf20f9ad500ab6630371fd31556d0d11434f50a31e9f9920f8e3d3e3\": rpc error: code = NotFound desc = could not find container \"a4a7a914bf20f9ad500ab6630371fd31556d0d11434f50a31e9f9920f8e3d3e3\": container with ID starting with a4a7a914bf20f9ad500ab6630371fd31556d0d11434f50a31e9f9920f8e3d3e3 not found: ID does not exist" Mar 18 08:52:42.693837 master-0 kubenswrapper[7620]: I0318 08:52:42.693790 7620 scope.go:117] "RemoveContainer" containerID="66311b69f7911e385ca69572abbbe9c8b5a148d66b7d8e0f3f0aaf9bbff927db" Mar 18 08:52:42.694372 master-0 kubenswrapper[7620]: E0318 08:52:42.694343 7620 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"66311b69f7911e385ca69572abbbe9c8b5a148d66b7d8e0f3f0aaf9bbff927db\": container with ID starting with 66311b69f7911e385ca69572abbbe9c8b5a148d66b7d8e0f3f0aaf9bbff927db not found: ID does not exist" containerID="66311b69f7911e385ca69572abbbe9c8b5a148d66b7d8e0f3f0aaf9bbff927db" Mar 18 08:52:42.694506 master-0 kubenswrapper[7620]: I0318 08:52:42.694472 7620 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"66311b69f7911e385ca69572abbbe9c8b5a148d66b7d8e0f3f0aaf9bbff927db"} err="failed to get container status \"66311b69f7911e385ca69572abbbe9c8b5a148d66b7d8e0f3f0aaf9bbff927db\": rpc error: code = NotFound desc = could not find container 
\"66311b69f7911e385ca69572abbbe9c8b5a148d66b7d8e0f3f0aaf9bbff927db\": container with ID starting with 66311b69f7911e385ca69572abbbe9c8b5a148d66b7d8e0f3f0aaf9bbff927db not found: ID does not exist" Mar 18 08:52:42.694599 master-0 kubenswrapper[7620]: I0318 08:52:42.694582 7620 scope.go:117] "RemoveContainer" containerID="6f7848a6751e394406efdc8a58af604ae4375a6e45f4df8cf530ddc9c795a680" Mar 18 08:52:42.695166 master-0 kubenswrapper[7620]: E0318 08:52:42.695118 7620 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6f7848a6751e394406efdc8a58af604ae4375a6e45f4df8cf530ddc9c795a680\": container with ID starting with 6f7848a6751e394406efdc8a58af604ae4375a6e45f4df8cf530ddc9c795a680 not found: ID does not exist" containerID="6f7848a6751e394406efdc8a58af604ae4375a6e45f4df8cf530ddc9c795a680" Mar 18 08:52:42.695232 master-0 kubenswrapper[7620]: I0318 08:52:42.695166 7620 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6f7848a6751e394406efdc8a58af604ae4375a6e45f4df8cf530ddc9c795a680"} err="failed to get container status \"6f7848a6751e394406efdc8a58af604ae4375a6e45f4df8cf530ddc9c795a680\": rpc error: code = NotFound desc = could not find container \"6f7848a6751e394406efdc8a58af604ae4375a6e45f4df8cf530ddc9c795a680\": container with ID starting with 6f7848a6751e394406efdc8a58af604ae4375a6e45f4df8cf530ddc9c795a680 not found: ID does not exist" Mar 18 08:52:42.695232 master-0 kubenswrapper[7620]: I0318 08:52:42.695199 7620 scope.go:117] "RemoveContainer" containerID="a4a7a914bf20f9ad500ab6630371fd31556d0d11434f50a31e9f9920f8e3d3e3" Mar 18 08:52:42.695739 master-0 kubenswrapper[7620]: I0318 08:52:42.695583 7620 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a4a7a914bf20f9ad500ab6630371fd31556d0d11434f50a31e9f9920f8e3d3e3"} err="failed to get container status 
\"a4a7a914bf20f9ad500ab6630371fd31556d0d11434f50a31e9f9920f8e3d3e3\": rpc error: code = NotFound desc = could not find container \"a4a7a914bf20f9ad500ab6630371fd31556d0d11434f50a31e9f9920f8e3d3e3\": container with ID starting with a4a7a914bf20f9ad500ab6630371fd31556d0d11434f50a31e9f9920f8e3d3e3 not found: ID does not exist" Mar 18 08:52:42.695871 master-0 kubenswrapper[7620]: I0318 08:52:42.695751 7620 scope.go:117] "RemoveContainer" containerID="66311b69f7911e385ca69572abbbe9c8b5a148d66b7d8e0f3f0aaf9bbff927db" Mar 18 08:52:42.695985 master-0 kubenswrapper[7620]: I0318 08:52:42.695937 7620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-9xtls" Mar 18 08:52:42.698155 master-0 kubenswrapper[7620]: I0318 08:52:42.696431 7620 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"66311b69f7911e385ca69572abbbe9c8b5a148d66b7d8e0f3f0aaf9bbff927db"} err="failed to get container status \"66311b69f7911e385ca69572abbbe9c8b5a148d66b7d8e0f3f0aaf9bbff927db\": rpc error: code = NotFound desc = could not find container \"66311b69f7911e385ca69572abbbe9c8b5a148d66b7d8e0f3f0aaf9bbff927db\": container with ID starting with 66311b69f7911e385ca69572abbbe9c8b5a148d66b7d8e0f3f0aaf9bbff927db not found: ID does not exist" Mar 18 08:52:42.698155 master-0 kubenswrapper[7620]: I0318 08:52:42.696477 7620 scope.go:117] "RemoveContainer" containerID="6f7848a6751e394406efdc8a58af604ae4375a6e45f4df8cf530ddc9c795a680" Mar 18 08:52:42.698155 master-0 kubenswrapper[7620]: I0318 08:52:42.696882 7620 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6f7848a6751e394406efdc8a58af604ae4375a6e45f4df8cf530ddc9c795a680"} err="failed to get container status \"6f7848a6751e394406efdc8a58af604ae4375a6e45f4df8cf530ddc9c795a680\": rpc error: code = NotFound desc = could not find container 
\"6f7848a6751e394406efdc8a58af604ae4375a6e45f4df8cf530ddc9c795a680\": container with ID starting with 6f7848a6751e394406efdc8a58af604ae4375a6e45f4df8cf530ddc9c795a680 not found: ID does not exist" Mar 18 08:52:42.698155 master-0 kubenswrapper[7620]: I0318 08:52:42.696910 7620 scope.go:117] "RemoveContainer" containerID="a4a7a914bf20f9ad500ab6630371fd31556d0d11434f50a31e9f9920f8e3d3e3" Mar 18 08:52:42.698155 master-0 kubenswrapper[7620]: I0318 08:52:42.697228 7620 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a4a7a914bf20f9ad500ab6630371fd31556d0d11434f50a31e9f9920f8e3d3e3"} err="failed to get container status \"a4a7a914bf20f9ad500ab6630371fd31556d0d11434f50a31e9f9920f8e3d3e3\": rpc error: code = NotFound desc = could not find container \"a4a7a914bf20f9ad500ab6630371fd31556d0d11434f50a31e9f9920f8e3d3e3\": container with ID starting with a4a7a914bf20f9ad500ab6630371fd31556d0d11434f50a31e9f9920f8e3d3e3 not found: ID does not exist" Mar 18 08:52:42.698155 master-0 kubenswrapper[7620]: I0318 08:52:42.697273 7620 scope.go:117] "RemoveContainer" containerID="66311b69f7911e385ca69572abbbe9c8b5a148d66b7d8e0f3f0aaf9bbff927db" Mar 18 08:52:42.698155 master-0 kubenswrapper[7620]: I0318 08:52:42.697657 7620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-images" Mar 18 08:52:42.698155 master-0 kubenswrapper[7620]: I0318 08:52:42.697688 7620 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"66311b69f7911e385ca69572abbbe9c8b5a148d66b7d8e0f3f0aaf9bbff927db"} err="failed to get container status \"66311b69f7911e385ca69572abbbe9c8b5a148d66b7d8e0f3f0aaf9bbff927db\": rpc error: code = NotFound desc = could not find container \"66311b69f7911e385ca69572abbbe9c8b5a148d66b7d8e0f3f0aaf9bbff927db\": container with ID starting with 66311b69f7911e385ca69572abbbe9c8b5a148d66b7d8e0f3f0aaf9bbff927db not found: ID 
does not exist" Mar 18 08:52:42.698155 master-0 kubenswrapper[7620]: I0318 08:52:42.697710 7620 scope.go:117] "RemoveContainer" containerID="6f7848a6751e394406efdc8a58af604ae4375a6e45f4df8cf530ddc9c795a680" Mar 18 08:52:42.698155 master-0 kubenswrapper[7620]: I0318 08:52:42.697997 7620 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6f7848a6751e394406efdc8a58af604ae4375a6e45f4df8cf530ddc9c795a680"} err="failed to get container status \"6f7848a6751e394406efdc8a58af604ae4375a6e45f4df8cf530ddc9c795a680\": rpc error: code = NotFound desc = could not find container \"6f7848a6751e394406efdc8a58af604ae4375a6e45f4df8cf530ddc9c795a680\": container with ID starting with 6f7848a6751e394406efdc8a58af604ae4375a6e45f4df8cf530ddc9c795a680 not found: ID does not exist" Mar 18 08:52:42.699623 master-0 kubenswrapper[7620]: I0318 08:52:42.699590 7620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-operator-tls" Mar 18 08:52:42.699806 master-0 kubenswrapper[7620]: I0318 08:52:42.699783 7620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"kube-rbac-proxy" Mar 18 08:52:42.700398 master-0 kubenswrapper[7620]: I0318 08:52:42.700164 7620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"openshift-service-ca.crt" Mar 18 08:52:42.700398 master-0 kubenswrapper[7620]: I0318 08:52:42.700347 7620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"kube-root-ca.crt" Mar 18 08:52:42.700544 master-0 kubenswrapper[7620]: I0318 08:52:42.700522 7620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-controller-manager-operator"/"cluster-cloud-controller-manager-dockercfg-68m6c" Mar 18 08:52:42.706138 master-0 kubenswrapper[7620]: I0318 08:52:42.706099 7620 
log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager_controller-manager-6448dc88d8-cnd9q_4cc14de2-59e8-49e1-9eeb-f87c5e9d8a75/controller-manager/1.log" Mar 18 08:52:42.766921 master-0 kubenswrapper[7620]: I0318 08:52:42.766778 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/ccf74af5-d4fd-4ed3-9784-42397ea798c5-images\") pod \"cluster-cloud-controller-manager-operator-7dff898856-9xtls\" (UID: \"ccf74af5-d4fd-4ed3-9784-42397ea798c5\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-9xtls" Mar 18 08:52:42.766921 master-0 kubenswrapper[7620]: I0318 08:52:42.766834 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/ccf74af5-d4fd-4ed3-9784-42397ea798c5-host-etc-kube\") pod \"cluster-cloud-controller-manager-operator-7dff898856-9xtls\" (UID: \"ccf74af5-d4fd-4ed3-9784-42397ea798c5\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-9xtls" Mar 18 08:52:42.766921 master-0 kubenswrapper[7620]: I0318 08:52:42.766893 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/ccf74af5-d4fd-4ed3-9784-42397ea798c5-cloud-controller-manager-operator-tls\") pod \"cluster-cloud-controller-manager-operator-7dff898856-9xtls\" (UID: \"ccf74af5-d4fd-4ed3-9784-42397ea798c5\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-9xtls" Mar 18 08:52:42.767191 master-0 kubenswrapper[7620]: I0318 08:52:42.766981 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/ccf74af5-d4fd-4ed3-9784-42397ea798c5-auth-proxy-config\") pod \"cluster-cloud-controller-manager-operator-7dff898856-9xtls\" (UID: \"ccf74af5-d4fd-4ed3-9784-42397ea798c5\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-9xtls" Mar 18 08:52:42.767191 master-0 kubenswrapper[7620]: I0318 08:52:42.767065 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p9qkd\" (UniqueName: \"kubernetes.io/projected/ccf74af5-d4fd-4ed3-9784-42397ea798c5-kube-api-access-p9qkd\") pod \"cluster-cloud-controller-manager-operator-7dff898856-9xtls\" (UID: \"ccf74af5-d4fd-4ed3-9784-42397ea798c5\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-9xtls" Mar 18 08:52:42.868446 master-0 kubenswrapper[7620]: I0318 08:52:42.868383 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p9qkd\" (UniqueName: \"kubernetes.io/projected/ccf74af5-d4fd-4ed3-9784-42397ea798c5-kube-api-access-p9qkd\") pod \"cluster-cloud-controller-manager-operator-7dff898856-9xtls\" (UID: \"ccf74af5-d4fd-4ed3-9784-42397ea798c5\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-9xtls" Mar 18 08:52:42.868655 master-0 kubenswrapper[7620]: I0318 08:52:42.868465 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/ccf74af5-d4fd-4ed3-9784-42397ea798c5-images\") pod \"cluster-cloud-controller-manager-operator-7dff898856-9xtls\" (UID: \"ccf74af5-d4fd-4ed3-9784-42397ea798c5\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-9xtls" Mar 18 08:52:42.868655 master-0 kubenswrapper[7620]: I0318 08:52:42.868500 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" 
(UniqueName: \"kubernetes.io/host-path/ccf74af5-d4fd-4ed3-9784-42397ea798c5-host-etc-kube\") pod \"cluster-cloud-controller-manager-operator-7dff898856-9xtls\" (UID: \"ccf74af5-d4fd-4ed3-9784-42397ea798c5\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-9xtls" Mar 18 08:52:42.868655 master-0 kubenswrapper[7620]: I0318 08:52:42.868536 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/ccf74af5-d4fd-4ed3-9784-42397ea798c5-cloud-controller-manager-operator-tls\") pod \"cluster-cloud-controller-manager-operator-7dff898856-9xtls\" (UID: \"ccf74af5-d4fd-4ed3-9784-42397ea798c5\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-9xtls" Mar 18 08:52:42.868828 master-0 kubenswrapper[7620]: I0318 08:52:42.868792 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/ccf74af5-d4fd-4ed3-9784-42397ea798c5-host-etc-kube\") pod \"cluster-cloud-controller-manager-operator-7dff898856-9xtls\" (UID: \"ccf74af5-d4fd-4ed3-9784-42397ea798c5\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-9xtls" Mar 18 08:52:42.868896 master-0 kubenswrapper[7620]: I0318 08:52:42.868795 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ccf74af5-d4fd-4ed3-9784-42397ea798c5-auth-proxy-config\") pod \"cluster-cloud-controller-manager-operator-7dff898856-9xtls\" (UID: \"ccf74af5-d4fd-4ed3-9784-42397ea798c5\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-9xtls" Mar 18 08:52:42.869543 master-0 kubenswrapper[7620]: I0318 08:52:42.869513 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"images\" (UniqueName: \"kubernetes.io/configmap/ccf74af5-d4fd-4ed3-9784-42397ea798c5-images\") pod \"cluster-cloud-controller-manager-operator-7dff898856-9xtls\" (UID: \"ccf74af5-d4fd-4ed3-9784-42397ea798c5\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-9xtls" Mar 18 08:52:42.869591 master-0 kubenswrapper[7620]: I0318 08:52:42.869566 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ccf74af5-d4fd-4ed3-9784-42397ea798c5-auth-proxy-config\") pod \"cluster-cloud-controller-manager-operator-7dff898856-9xtls\" (UID: \"ccf74af5-d4fd-4ed3-9784-42397ea798c5\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-9xtls" Mar 18 08:52:42.871672 master-0 kubenswrapper[7620]: I0318 08:52:42.871633 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/ccf74af5-d4fd-4ed3-9784-42397ea798c5-cloud-controller-manager-operator-tls\") pod \"cluster-cloud-controller-manager-operator-7dff898856-9xtls\" (UID: \"ccf74af5-d4fd-4ed3-9784-42397ea798c5\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-9xtls" Mar 18 08:52:42.891349 master-0 kubenswrapper[7620]: I0318 08:52:42.891298 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p9qkd\" (UniqueName: \"kubernetes.io/projected/ccf74af5-d4fd-4ed3-9784-42397ea798c5-kube-api-access-p9qkd\") pod \"cluster-cloud-controller-manager-operator-7dff898856-9xtls\" (UID: \"ccf74af5-d4fd-4ed3-9784-42397ea798c5\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-9xtls" Mar 18 08:52:42.900448 master-0 kubenswrapper[7620]: I0318 08:52:42.900398 7620 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-route-controller-manager_route-controller-manager-75749f878-qxnvp_04e23989-853e-4b49-ba0f-1961d64ae3a3/route-controller-manager/0.log" Mar 18 08:52:43.026802 master-0 kubenswrapper[7620]: I0318 08:52:43.026667 7620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-9xtls" Mar 18 08:52:43.044620 master-0 kubenswrapper[7620]: W0318 08:52:43.044561 7620 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podccf74af5_d4fd_4ed3_9784_42397ea798c5.slice/crio-862f349be451274c2786c24620a1b3df5221d5b66e16cc9b0099daecc5ae9693 WatchSource:0}: Error finding container 862f349be451274c2786c24620a1b3df5221d5b66e16cc9b0099daecc5ae9693: Status 404 returned error can't find the container with id 862f349be451274c2786c24620a1b3df5221d5b66e16cc9b0099daecc5ae9693 Mar 18 08:52:43.105566 master-0 kubenswrapper[7620]: I0318 08:52:43.105509 7620 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_catalog-operator-68f85b4d6c-swdsh_b065df33-7911-456e-b3a2-1f8c8d53e053/catalog-operator/0.log" Mar 18 08:52:43.301810 master-0 kubenswrapper[7620]: I0318 08:52:43.301760 7620 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_olm-operator-5c9796789-sl5kr_3d9fe248-ba87-47e3-911a-1b2b112b5683/olm-operator/0.log" Mar 18 08:52:43.504516 master-0 kubenswrapper[7620]: I0318 08:52:43.504433 7620 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_package-server-manager-7b95f86987-q8ff6_59d50dd5-6793-4f96-a769-31e086ecc7e4/kube-rbac-proxy/0.log" Mar 18 08:52:43.636027 master-0 kubenswrapper[7620]: I0318 08:52:43.635945 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-9xtls" event={"ID":"ccf74af5-d4fd-4ed3-9784-42397ea798c5","Type":"ContainerStarted","Data":"186b22d65f0d4470eb32e6b82579dc544a089964b2ec507b602aabe9b3c9e6c1"} Mar 18 08:52:43.636285 master-0 kubenswrapper[7620]: I0318 08:52:43.636049 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-9xtls" event={"ID":"ccf74af5-d4fd-4ed3-9784-42397ea798c5","Type":"ContainerStarted","Data":"eaad38e5e9adf0c7d9032d4d158adc24f0ed091bb2d04b70f67f104373652877"} Mar 18 08:52:43.636285 master-0 kubenswrapper[7620]: I0318 08:52:43.636082 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-9xtls" event={"ID":"ccf74af5-d4fd-4ed3-9784-42397ea798c5","Type":"ContainerStarted","Data":"862f349be451274c2786c24620a1b3df5221d5b66e16cc9b0099daecc5ae9693"} Mar 18 08:52:43.708646 master-0 kubenswrapper[7620]: I0318 08:52:43.708599 7620 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_package-server-manager-7b95f86987-q8ff6_59d50dd5-6793-4f96-a769-31e086ecc7e4/package-server-manager/0.log" Mar 18 08:52:43.902768 master-0 kubenswrapper[7620]: I0318 08:52:43.902718 7620 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_packageserver-5f48d895dc-ttr9f_1794b726-5c0d-4a72-8ddd-418a2cbd8ded/packageserver/0.log" Mar 18 08:52:44.234176 master-0 kubenswrapper[7620]: I0318 08:52:44.234109 7620 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e1b48980-eb4b-4f86-a84e-8f5ebacbc2d4" path="/var/lib/kubelet/pods/e1b48980-eb4b-4f86-a84e-8f5ebacbc2d4/volumes" Mar 18 08:52:44.657387 master-0 kubenswrapper[7620]: I0318 08:52:44.657246 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-9xtls" event={"ID":"ccf74af5-d4fd-4ed3-9784-42397ea798c5","Type":"ContainerStarted","Data":"4fcabc091075ed01d4128c14d6cf814bb78d6e7ada9a825d8bbc0aba80df1cf6"} Mar 18 08:52:44.685723 master-0 kubenswrapper[7620]: I0318 08:52:44.685634 7620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-9xtls" podStartSLOduration=2.685608081 podStartE2EDuration="2.685608081s" podCreationTimestamp="2026-03-18 08:52:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 08:52:44.679599617 +0000 UTC m=+228.674381459" watchObservedRunningTime="2026-03-18 08:52:44.685608081 +0000 UTC m=+228.680389843" Mar 18 08:52:44.941926 master-0 kubenswrapper[7620]: I0318 08:52:44.941747 7620 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-b4f87c5b9-nm47n"] Mar 18 08:52:44.945004 master-0 kubenswrapper[7620]: I0318 08:52:44.944918 7620 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-b4f87c5b9-nm47n" Mar 18 08:52:44.949581 master-0 kubenswrapper[7620]: I0318 08:52:44.949289 7620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-2w2dp" Mar 18 08:52:44.949581 master-0 kubenswrapper[7620]: I0318 08:52:44.949420 7620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Mar 18 08:52:44.960535 master-0 kubenswrapper[7620]: I0318 08:52:44.959401 7620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-b4f87c5b9-nm47n"] Mar 18 08:52:45.001798 master-0 kubenswrapper[7620]: I0318 08:52:45.001714 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/336e741d-ac9a-4b94-9fbb-c9010e37c2d0-mcc-auth-proxy-config\") pod \"machine-config-controller-b4f87c5b9-nm47n\" (UID: \"336e741d-ac9a-4b94-9fbb-c9010e37c2d0\") " pod="openshift-machine-config-operator/machine-config-controller-b4f87c5b9-nm47n" Mar 18 08:52:45.001798 master-0 kubenswrapper[7620]: I0318 08:52:45.001805 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hbsfs\" (UniqueName: \"kubernetes.io/projected/336e741d-ac9a-4b94-9fbb-c9010e37c2d0-kube-api-access-hbsfs\") pod \"machine-config-controller-b4f87c5b9-nm47n\" (UID: \"336e741d-ac9a-4b94-9fbb-c9010e37c2d0\") " pod="openshift-machine-config-operator/machine-config-controller-b4f87c5b9-nm47n" Mar 18 08:52:45.002177 master-0 kubenswrapper[7620]: I0318 08:52:45.002054 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/336e741d-ac9a-4b94-9fbb-c9010e37c2d0-proxy-tls\") pod 
\"machine-config-controller-b4f87c5b9-nm47n\" (UID: \"336e741d-ac9a-4b94-9fbb-c9010e37c2d0\") " pod="openshift-machine-config-operator/machine-config-controller-b4f87c5b9-nm47n" Mar 18 08:52:45.103643 master-0 kubenswrapper[7620]: I0318 08:52:45.103570 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hbsfs\" (UniqueName: \"kubernetes.io/projected/336e741d-ac9a-4b94-9fbb-c9010e37c2d0-kube-api-access-hbsfs\") pod \"machine-config-controller-b4f87c5b9-nm47n\" (UID: \"336e741d-ac9a-4b94-9fbb-c9010e37c2d0\") " pod="openshift-machine-config-operator/machine-config-controller-b4f87c5b9-nm47n" Mar 18 08:52:45.103942 master-0 kubenswrapper[7620]: I0318 08:52:45.103739 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/336e741d-ac9a-4b94-9fbb-c9010e37c2d0-proxy-tls\") pod \"machine-config-controller-b4f87c5b9-nm47n\" (UID: \"336e741d-ac9a-4b94-9fbb-c9010e37c2d0\") " pod="openshift-machine-config-operator/machine-config-controller-b4f87c5b9-nm47n" Mar 18 08:52:45.103942 master-0 kubenswrapper[7620]: I0318 08:52:45.103802 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/336e741d-ac9a-4b94-9fbb-c9010e37c2d0-mcc-auth-proxy-config\") pod \"machine-config-controller-b4f87c5b9-nm47n\" (UID: \"336e741d-ac9a-4b94-9fbb-c9010e37c2d0\") " pod="openshift-machine-config-operator/machine-config-controller-b4f87c5b9-nm47n" Mar 18 08:52:45.105391 master-0 kubenswrapper[7620]: I0318 08:52:45.105323 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/336e741d-ac9a-4b94-9fbb-c9010e37c2d0-mcc-auth-proxy-config\") pod \"machine-config-controller-b4f87c5b9-nm47n\" (UID: \"336e741d-ac9a-4b94-9fbb-c9010e37c2d0\") " 
pod="openshift-machine-config-operator/machine-config-controller-b4f87c5b9-nm47n" Mar 18 08:52:45.108268 master-0 kubenswrapper[7620]: I0318 08:52:45.108199 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/336e741d-ac9a-4b94-9fbb-c9010e37c2d0-proxy-tls\") pod \"machine-config-controller-b4f87c5b9-nm47n\" (UID: \"336e741d-ac9a-4b94-9fbb-c9010e37c2d0\") " pod="openshift-machine-config-operator/machine-config-controller-b4f87c5b9-nm47n" Mar 18 08:52:45.119610 master-0 kubenswrapper[7620]: I0318 08:52:45.119553 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hbsfs\" (UniqueName: \"kubernetes.io/projected/336e741d-ac9a-4b94-9fbb-c9010e37c2d0-kube-api-access-hbsfs\") pod \"machine-config-controller-b4f87c5b9-nm47n\" (UID: \"336e741d-ac9a-4b94-9fbb-c9010e37c2d0\") " pod="openshift-machine-config-operator/machine-config-controller-b4f87c5b9-nm47n" Mar 18 08:52:45.281982 master-0 kubenswrapper[7620]: I0318 08:52:45.281908 7620 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-b4f87c5b9-nm47n" Mar 18 08:52:45.780684 master-0 kubenswrapper[7620]: I0318 08:52:45.780605 7620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-b4f87c5b9-nm47n"] Mar 18 08:52:45.781438 master-0 kubenswrapper[7620]: W0318 08:52:45.781344 7620 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod336e741d_ac9a_4b94_9fbb_c9010e37c2d0.slice/crio-7d31e16adf7f10cb16f9f4afb5a9c559f636c495a15abd8700657562f8afa08b WatchSource:0}: Error finding container 7d31e16adf7f10cb16f9f4afb5a9c559f636c495a15abd8700657562f8afa08b: Status 404 returned error can't find the container with id 7d31e16adf7f10cb16f9f4afb5a9c559f636c495a15abd8700657562f8afa08b Mar 18 08:52:46.109453 master-0 kubenswrapper[7620]: I0318 08:52:46.109373 7620 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-diagnostics/network-check-source-b4bf74f6-7z5jl"] Mar 18 08:52:46.110362 master-0 kubenswrapper[7620]: I0318 08:52:46.110324 7620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-b4bf74f6-7z5jl" Mar 18 08:52:46.114230 master-0 kubenswrapper[7620]: I0318 08:52:46.114168 7620 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-operator-admission-webhook-69c6b55594-wkgdb"] Mar 18 08:52:46.115309 master-0 kubenswrapper[7620]: I0318 08:52:46.115283 7620 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/prometheus-operator-admission-webhook-69c6b55594-wkgdb" Mar 18 08:52:46.116565 master-0 kubenswrapper[7620]: I0318 08:52:46.116522 7620 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-7dcf5569b5-8sbgd"] Mar 18 08:52:46.117630 master-0 kubenswrapper[7620]: I0318 08:52:46.117603 7620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" Mar 18 08:52:46.118316 master-0 kubenswrapper[7620]: I0318 08:52:46.118283 7620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-admission-webhook-tls" Mar 18 08:52:46.120435 master-0 kubenswrapper[7620]: I0318 08:52:46.120408 7620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Mar 18 08:52:46.120626 master-0 kubenswrapper[7620]: I0318 08:52:46.120574 7620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Mar 18 08:52:46.120925 master-0 kubenswrapper[7620]: I0318 08:52:46.120901 7620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Mar 18 08:52:46.121021 master-0 kubenswrapper[7620]: I0318 08:52:46.120983 7620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Mar 18 08:52:46.121055 master-0 kubenswrapper[7620]: I0318 08:52:46.121045 7620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Mar 18 08:52:46.122390 master-0 kubenswrapper[7620]: I0318 08:52:46.122349 7620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Mar 18 08:52:46.134984 master-0 kubenswrapper[7620]: I0318 08:52:46.134886 7620 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-monitoring/prometheus-operator-admission-webhook-69c6b55594-wkgdb"] Mar 18 08:52:46.136626 master-0 kubenswrapper[7620]: I0318 08:52:46.136550 7620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-network-diagnostics/network-check-source-b4bf74f6-7z5jl"] Mar 18 08:52:46.219909 master-0 kubenswrapper[7620]: I0318 08:52:46.219725 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tsc6v\" (UniqueName: \"kubernetes.io/projected/f650e6f0-fb74-4083-a7a9-fa4df513108f-kube-api-access-tsc6v\") pod \"network-check-source-b4bf74f6-7z5jl\" (UID: \"f650e6f0-fb74-4083-a7a9-fa4df513108f\") " pod="openshift-network-diagnostics/network-check-source-b4bf74f6-7z5jl" Mar 18 08:52:46.219909 master-0 kubenswrapper[7620]: I0318 08:52:46.219798 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/ad4cf9b2-4e66-4921-a30c-7b659bff06ab-stats-auth\") pod \"router-default-7dcf5569b5-8sbgd\" (UID: \"ad4cf9b2-4e66-4921-a30c-7b659bff06ab\") " pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" Mar 18 08:52:46.220184 master-0 kubenswrapper[7620]: I0318 08:52:46.219956 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/998cabe9-d479-439f-b1c0-1d8c49aefeb9-tls-certificates\") pod \"prometheus-operator-admission-webhook-69c6b55594-wkgdb\" (UID: \"998cabe9-d479-439f-b1c0-1d8c49aefeb9\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-69c6b55594-wkgdb" Mar 18 08:52:46.220184 master-0 kubenswrapper[7620]: I0318 08:52:46.220111 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ad4cf9b2-4e66-4921-a30c-7b659bff06ab-metrics-certs\") pod \"router-default-7dcf5569b5-8sbgd\" (UID: 
\"ad4cf9b2-4e66-4921-a30c-7b659bff06ab\") " pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" Mar 18 08:52:46.220267 master-0 kubenswrapper[7620]: I0318 08:52:46.220195 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zkfql\" (UniqueName: \"kubernetes.io/projected/ad4cf9b2-4e66-4921-a30c-7b659bff06ab-kube-api-access-zkfql\") pod \"router-default-7dcf5569b5-8sbgd\" (UID: \"ad4cf9b2-4e66-4921-a30c-7b659bff06ab\") " pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" Mar 18 08:52:46.220465 master-0 kubenswrapper[7620]: I0318 08:52:46.220301 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ad4cf9b2-4e66-4921-a30c-7b659bff06ab-service-ca-bundle\") pod \"router-default-7dcf5569b5-8sbgd\" (UID: \"ad4cf9b2-4e66-4921-a30c-7b659bff06ab\") " pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" Mar 18 08:52:46.220465 master-0 kubenswrapper[7620]: I0318 08:52:46.220345 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/ad4cf9b2-4e66-4921-a30c-7b659bff06ab-default-certificate\") pod \"router-default-7dcf5569b5-8sbgd\" (UID: \"ad4cf9b2-4e66-4921-a30c-7b659bff06ab\") " pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" Mar 18 08:52:46.322290 master-0 kubenswrapper[7620]: I0318 08:52:46.322089 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/ad4cf9b2-4e66-4921-a30c-7b659bff06ab-stats-auth\") pod \"router-default-7dcf5569b5-8sbgd\" (UID: \"ad4cf9b2-4e66-4921-a30c-7b659bff06ab\") " pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" Mar 18 08:52:46.323320 master-0 kubenswrapper[7620]: I0318 08:52:46.322314 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"tls-certificates\" (UniqueName: \"kubernetes.io/secret/998cabe9-d479-439f-b1c0-1d8c49aefeb9-tls-certificates\") pod \"prometheus-operator-admission-webhook-69c6b55594-wkgdb\" (UID: \"998cabe9-d479-439f-b1c0-1d8c49aefeb9\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-69c6b55594-wkgdb" Mar 18 08:52:46.323320 master-0 kubenswrapper[7620]: I0318 08:52:46.322381 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ad4cf9b2-4e66-4921-a30c-7b659bff06ab-metrics-certs\") pod \"router-default-7dcf5569b5-8sbgd\" (UID: \"ad4cf9b2-4e66-4921-a30c-7b659bff06ab\") " pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" Mar 18 08:52:46.323320 master-0 kubenswrapper[7620]: I0318 08:52:46.322634 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zkfql\" (UniqueName: \"kubernetes.io/projected/ad4cf9b2-4e66-4921-a30c-7b659bff06ab-kube-api-access-zkfql\") pod \"router-default-7dcf5569b5-8sbgd\" (UID: \"ad4cf9b2-4e66-4921-a30c-7b659bff06ab\") " pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" Mar 18 08:52:46.323320 master-0 kubenswrapper[7620]: I0318 08:52:46.322753 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ad4cf9b2-4e66-4921-a30c-7b659bff06ab-service-ca-bundle\") pod \"router-default-7dcf5569b5-8sbgd\" (UID: \"ad4cf9b2-4e66-4921-a30c-7b659bff06ab\") " pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" Mar 18 08:52:46.323320 master-0 kubenswrapper[7620]: I0318 08:52:46.322814 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/ad4cf9b2-4e66-4921-a30c-7b659bff06ab-default-certificate\") pod \"router-default-7dcf5569b5-8sbgd\" (UID: \"ad4cf9b2-4e66-4921-a30c-7b659bff06ab\") " pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" Mar 18 
08:52:46.323320 master-0 kubenswrapper[7620]: I0318 08:52:46.322891 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tsc6v\" (UniqueName: \"kubernetes.io/projected/f650e6f0-fb74-4083-a7a9-fa4df513108f-kube-api-access-tsc6v\") pod \"network-check-source-b4bf74f6-7z5jl\" (UID: \"f650e6f0-fb74-4083-a7a9-fa4df513108f\") " pod="openshift-network-diagnostics/network-check-source-b4bf74f6-7z5jl" Mar 18 08:52:46.324497 master-0 kubenswrapper[7620]: I0318 08:52:46.324409 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ad4cf9b2-4e66-4921-a30c-7b659bff06ab-service-ca-bundle\") pod \"router-default-7dcf5569b5-8sbgd\" (UID: \"ad4cf9b2-4e66-4921-a30c-7b659bff06ab\") " pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" Mar 18 08:52:46.329922 master-0 kubenswrapper[7620]: I0318 08:52:46.329844 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ad4cf9b2-4e66-4921-a30c-7b659bff06ab-metrics-certs\") pod \"router-default-7dcf5569b5-8sbgd\" (UID: \"ad4cf9b2-4e66-4921-a30c-7b659bff06ab\") " pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" Mar 18 08:52:46.332841 master-0 kubenswrapper[7620]: I0318 08:52:46.332767 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/998cabe9-d479-439f-b1c0-1d8c49aefeb9-tls-certificates\") pod \"prometheus-operator-admission-webhook-69c6b55594-wkgdb\" (UID: \"998cabe9-d479-439f-b1c0-1d8c49aefeb9\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-69c6b55594-wkgdb" Mar 18 08:52:46.333033 master-0 kubenswrapper[7620]: I0318 08:52:46.333006 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/ad4cf9b2-4e66-4921-a30c-7b659bff06ab-stats-auth\") pod 
\"router-default-7dcf5569b5-8sbgd\" (UID: \"ad4cf9b2-4e66-4921-a30c-7b659bff06ab\") " pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" Mar 18 08:52:46.339898 master-0 kubenswrapper[7620]: I0318 08:52:46.339766 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/ad4cf9b2-4e66-4921-a30c-7b659bff06ab-default-certificate\") pod \"router-default-7dcf5569b5-8sbgd\" (UID: \"ad4cf9b2-4e66-4921-a30c-7b659bff06ab\") " pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" Mar 18 08:52:46.341068 master-0 kubenswrapper[7620]: I0318 08:52:46.341004 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zkfql\" (UniqueName: \"kubernetes.io/projected/ad4cf9b2-4e66-4921-a30c-7b659bff06ab-kube-api-access-zkfql\") pod \"router-default-7dcf5569b5-8sbgd\" (UID: \"ad4cf9b2-4e66-4921-a30c-7b659bff06ab\") " pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" Mar 18 08:52:46.349116 master-0 kubenswrapper[7620]: I0318 08:52:46.349066 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tsc6v\" (UniqueName: \"kubernetes.io/projected/f650e6f0-fb74-4083-a7a9-fa4df513108f-kube-api-access-tsc6v\") pod \"network-check-source-b4bf74f6-7z5jl\" (UID: \"f650e6f0-fb74-4083-a7a9-fa4df513108f\") " pod="openshift-network-diagnostics/network-check-source-b4bf74f6-7z5jl" Mar 18 08:52:46.453375 master-0 kubenswrapper[7620]: I0318 08:52:46.453291 7620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-b4bf74f6-7z5jl" Mar 18 08:52:46.464570 master-0 kubenswrapper[7620]: I0318 08:52:46.464527 7620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-admission-webhook-69c6b55594-wkgdb" Mar 18 08:52:46.478627 master-0 kubenswrapper[7620]: I0318 08:52:46.478555 7620 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" Mar 18 08:52:46.530911 master-0 kubenswrapper[7620]: W0318 08:52:46.530820 7620 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podad4cf9b2_4e66_4921_a30c_7b659bff06ab.slice/crio-477f7fc213175cb954b186d8ae344e645aa5b57eb7978240c62ca1b2bcc281be WatchSource:0}: Error finding container 477f7fc213175cb954b186d8ae344e645aa5b57eb7978240c62ca1b2bcc281be: Status 404 returned error can't find the container with id 477f7fc213175cb954b186d8ae344e645aa5b57eb7978240c62ca1b2bcc281be Mar 18 08:52:46.672708 master-0 kubenswrapper[7620]: I0318 08:52:46.672630 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-b4f87c5b9-nm47n" event={"ID":"336e741d-ac9a-4b94-9fbb-c9010e37c2d0","Type":"ContainerStarted","Data":"3227c454b5452ea73a798173f5a8c1b8954abd27720de427ec919eaf01ba6c85"} Mar 18 08:52:46.672708 master-0 kubenswrapper[7620]: I0318 08:52:46.672676 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-b4f87c5b9-nm47n" event={"ID":"336e741d-ac9a-4b94-9fbb-c9010e37c2d0","Type":"ContainerStarted","Data":"5b1bbaad401b48df4f6bc5808614e611fed8580ba701a13ff0f88bf54620f237"} Mar 18 08:52:46.672708 master-0 kubenswrapper[7620]: I0318 08:52:46.672688 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-b4f87c5b9-nm47n" event={"ID":"336e741d-ac9a-4b94-9fbb-c9010e37c2d0","Type":"ContainerStarted","Data":"7d31e16adf7f10cb16f9f4afb5a9c559f636c495a15abd8700657562f8afa08b"} Mar 18 08:52:46.673581 master-0 kubenswrapper[7620]: I0318 08:52:46.673535 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" 
event={"ID":"ad4cf9b2-4e66-4921-a30c-7b659bff06ab","Type":"ContainerStarted","Data":"477f7fc213175cb954b186d8ae344e645aa5b57eb7978240c62ca1b2bcc281be"} Mar 18 08:52:46.890684 master-0 kubenswrapper[7620]: I0318 08:52:46.890523 7620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-b4f87c5b9-nm47n" podStartSLOduration=2.890500114 podStartE2EDuration="2.890500114s" podCreationTimestamp="2026-03-18 08:52:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 08:52:46.699708452 +0000 UTC m=+230.694490274" watchObservedRunningTime="2026-03-18 08:52:46.890500114 +0000 UTC m=+230.885281866" Mar 18 08:52:46.892626 master-0 kubenswrapper[7620]: I0318 08:52:46.892578 7620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-network-diagnostics/network-check-source-b4bf74f6-7z5jl"] Mar 18 08:52:46.902081 master-0 kubenswrapper[7620]: W0318 08:52:46.902028 7620 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf650e6f0_fb74_4083_a7a9_fa4df513108f.slice/crio-c6f3ba629d26f9cdeb3d7860a7b0f64e21de0f0dc77a559ebfda83ee3654ece0 WatchSource:0}: Error finding container c6f3ba629d26f9cdeb3d7860a7b0f64e21de0f0dc77a559ebfda83ee3654ece0: Status 404 returned error can't find the container with id c6f3ba629d26f9cdeb3d7860a7b0f64e21de0f0dc77a559ebfda83ee3654ece0 Mar 18 08:52:46.985812 master-0 kubenswrapper[7620]: I0318 08:52:46.985547 7620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-admission-webhook-69c6b55594-wkgdb"] Mar 18 08:52:46.987720 master-0 kubenswrapper[7620]: W0318 08:52:46.987630 7620 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod998cabe9_d479_439f_b1c0_1d8c49aefeb9.slice/crio-3c4e15b0e2e376b6219a5a7e0e6e767c17e2686b088653fbb672e0c430635638 WatchSource:0}: Error finding container 3c4e15b0e2e376b6219a5a7e0e6e767c17e2686b088653fbb672e0c430635638: Status 404 returned error can't find the container with id 3c4e15b0e2e376b6219a5a7e0e6e767c17e2686b088653fbb672e0c430635638 Mar 18 08:52:46.992601 master-0 kubenswrapper[7620]: I0318 08:52:46.992516 7620 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Mar 18 08:52:47.680740 master-0 kubenswrapper[7620]: I0318 08:52:47.680692 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-admission-webhook-69c6b55594-wkgdb" event={"ID":"998cabe9-d479-439f-b1c0-1d8c49aefeb9","Type":"ContainerStarted","Data":"3c4e15b0e2e376b6219a5a7e0e6e767c17e2686b088653fbb672e0c430635638"} Mar 18 08:52:47.684098 master-0 kubenswrapper[7620]: I0318 08:52:47.684023 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-b4bf74f6-7z5jl" event={"ID":"f650e6f0-fb74-4083-a7a9-fa4df513108f","Type":"ContainerStarted","Data":"a78b9ada82703e8acf6aee15841e597108dfb6a0936df1f5a87d7189ffc5cbcf"} Mar 18 08:52:47.684098 master-0 kubenswrapper[7620]: I0318 08:52:47.684096 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-b4bf74f6-7z5jl" event={"ID":"f650e6f0-fb74-4083-a7a9-fa4df513108f","Type":"ContainerStarted","Data":"c6f3ba629d26f9cdeb3d7860a7b0f64e21de0f0dc77a559ebfda83ee3654ece0"} Mar 18 08:52:47.706004 master-0 kubenswrapper[7620]: I0318 08:52:47.705918 7620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-network-diagnostics/network-check-source-b4bf74f6-7z5jl" podStartSLOduration=284.705896362 podStartE2EDuration="4m44.705896362s" 
podCreationTimestamp="2026-03-18 08:48:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 08:52:47.703783341 +0000 UTC m=+231.698565093" watchObservedRunningTime="2026-03-18 08:52:47.705896362 +0000 UTC m=+231.700678104" Mar 18 08:52:49.414842 master-0 kubenswrapper[7620]: I0318 08:52:49.414747 7620 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-2jsz9"] Mar 18 08:52:49.417057 master-0 kubenswrapper[7620]: I0318 08:52:49.416991 7620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-2jsz9" Mar 18 08:52:49.419428 master-0 kubenswrapper[7620]: I0318 08:52:49.419379 7620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Mar 18 08:52:49.420015 master-0 kubenswrapper[7620]: I0318 08:52:49.419985 7620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-jzd99" Mar 18 08:52:49.420225 master-0 kubenswrapper[7620]: I0318 08:52:49.420174 7620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Mar 18 08:52:49.493845 master-0 kubenswrapper[7620]: I0318 08:52:49.493760 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/3e96b35f-c57a-4e01-82f7-894ea16ac5b8-node-bootstrap-token\") pod \"machine-config-server-2jsz9\" (UID: \"3e96b35f-c57a-4e01-82f7-894ea16ac5b8\") " pod="openshift-machine-config-operator/machine-config-server-2jsz9" Mar 18 08:52:49.493845 master-0 kubenswrapper[7620]: I0318 08:52:49.493828 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-rgs9m\" (UniqueName: \"kubernetes.io/projected/3e96b35f-c57a-4e01-82f7-894ea16ac5b8-kube-api-access-rgs9m\") pod \"machine-config-server-2jsz9\" (UID: \"3e96b35f-c57a-4e01-82f7-894ea16ac5b8\") " pod="openshift-machine-config-operator/machine-config-server-2jsz9" Mar 18 08:52:49.494259 master-0 kubenswrapper[7620]: I0318 08:52:49.494079 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/3e96b35f-c57a-4e01-82f7-894ea16ac5b8-certs\") pod \"machine-config-server-2jsz9\" (UID: \"3e96b35f-c57a-4e01-82f7-894ea16ac5b8\") " pod="openshift-machine-config-operator/machine-config-server-2jsz9" Mar 18 08:52:49.595506 master-0 kubenswrapper[7620]: I0318 08:52:49.595427 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/3e96b35f-c57a-4e01-82f7-894ea16ac5b8-node-bootstrap-token\") pod \"machine-config-server-2jsz9\" (UID: \"3e96b35f-c57a-4e01-82f7-894ea16ac5b8\") " pod="openshift-machine-config-operator/machine-config-server-2jsz9" Mar 18 08:52:49.595506 master-0 kubenswrapper[7620]: I0318 08:52:49.595488 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rgs9m\" (UniqueName: \"kubernetes.io/projected/3e96b35f-c57a-4e01-82f7-894ea16ac5b8-kube-api-access-rgs9m\") pod \"machine-config-server-2jsz9\" (UID: \"3e96b35f-c57a-4e01-82f7-894ea16ac5b8\") " pod="openshift-machine-config-operator/machine-config-server-2jsz9" Mar 18 08:52:49.595919 master-0 kubenswrapper[7620]: I0318 08:52:49.595708 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/3e96b35f-c57a-4e01-82f7-894ea16ac5b8-certs\") pod \"machine-config-server-2jsz9\" (UID: \"3e96b35f-c57a-4e01-82f7-894ea16ac5b8\") " pod="openshift-machine-config-operator/machine-config-server-2jsz9" Mar 18 08:52:49.599413 
master-0 kubenswrapper[7620]: I0318 08:52:49.599375 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/3e96b35f-c57a-4e01-82f7-894ea16ac5b8-certs\") pod \"machine-config-server-2jsz9\" (UID: \"3e96b35f-c57a-4e01-82f7-894ea16ac5b8\") " pod="openshift-machine-config-operator/machine-config-server-2jsz9" Mar 18 08:52:49.600115 master-0 kubenswrapper[7620]: I0318 08:52:49.600049 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/3e96b35f-c57a-4e01-82f7-894ea16ac5b8-node-bootstrap-token\") pod \"machine-config-server-2jsz9\" (UID: \"3e96b35f-c57a-4e01-82f7-894ea16ac5b8\") " pod="openshift-machine-config-operator/machine-config-server-2jsz9" Mar 18 08:52:49.614954 master-0 kubenswrapper[7620]: I0318 08:52:49.614905 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rgs9m\" (UniqueName: \"kubernetes.io/projected/3e96b35f-c57a-4e01-82f7-894ea16ac5b8-kube-api-access-rgs9m\") pod \"machine-config-server-2jsz9\" (UID: \"3e96b35f-c57a-4e01-82f7-894ea16ac5b8\") " pod="openshift-machine-config-operator/machine-config-server-2jsz9" Mar 18 08:52:49.703345 master-0 kubenswrapper[7620]: I0318 08:52:49.703254 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" event={"ID":"ad4cf9b2-4e66-4921-a30c-7b659bff06ab","Type":"ContainerStarted","Data":"aebf5a50f9283c726e790a6d4456896088c910f33d1ce0e919e46d41b14e21ad"} Mar 18 08:52:49.705890 master-0 kubenswrapper[7620]: I0318 08:52:49.705838 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-admission-webhook-69c6b55594-wkgdb" event={"ID":"998cabe9-d479-439f-b1c0-1d8c49aefeb9","Type":"ContainerStarted","Data":"ac1f6d8b7ac312137c0fe39d3e9fb8b54fa495f47b321ccd4fdfd1f076d18485"} Mar 18 08:52:49.706258 master-0 kubenswrapper[7620]: I0318 
08:52:49.706213 7620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-operator-admission-webhook-69c6b55594-wkgdb" Mar 18 08:52:49.713569 master-0 kubenswrapper[7620]: I0318 08:52:49.713459 7620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/prometheus-operator-admission-webhook-69c6b55594-wkgdb" Mar 18 08:52:49.735210 master-0 kubenswrapper[7620]: I0318 08:52:49.735086 7620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podStartSLOduration=179.037060003 podStartE2EDuration="3m1.73505187s" podCreationTimestamp="2026-03-18 08:49:48 +0000 UTC" firstStartedPulling="2026-03-18 08:52:46.53303816 +0000 UTC m=+230.527819952" lastFinishedPulling="2026-03-18 08:52:49.231030057 +0000 UTC m=+233.225811819" observedRunningTime="2026-03-18 08:52:49.726515692 +0000 UTC m=+233.721297454" watchObservedRunningTime="2026-03-18 08:52:49.73505187 +0000 UTC m=+233.729833662" Mar 18 08:52:49.736141 master-0 kubenswrapper[7620]: I0318 08:52:49.736093 7620 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-2jsz9" Mar 18 08:52:49.762627 master-0 kubenswrapper[7620]: I0318 08:52:49.762490 7620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-operator-admission-webhook-69c6b55594-wkgdb" podStartSLOduration=180.530310541 podStartE2EDuration="3m2.762447815s" podCreationTimestamp="2026-03-18 08:49:47 +0000 UTC" firstStartedPulling="2026-03-18 08:52:46.992204389 +0000 UTC m=+230.986986141" lastFinishedPulling="2026-03-18 08:52:49.224341663 +0000 UTC m=+233.219123415" observedRunningTime="2026-03-18 08:52:49.751359333 +0000 UTC m=+233.746141105" watchObservedRunningTime="2026-03-18 08:52:49.762447815 +0000 UTC m=+233.757229637" Mar 18 08:52:50.479504 master-0 kubenswrapper[7620]: I0318 08:52:50.479424 7620 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" Mar 18 08:52:50.483213 master-0 kubenswrapper[7620]: I0318 08:52:50.483160 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:52:50.483213 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld Mar 18 08:52:50.483213 master-0 kubenswrapper[7620]: [+]process-running ok Mar 18 08:52:50.483213 master-0 kubenswrapper[7620]: healthz check failed Mar 18 08:52:50.483213 master-0 kubenswrapper[7620]: I0318 08:52:50.483214 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:52:50.706624 master-0 kubenswrapper[7620]: I0318 08:52:50.706545 7620 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-monitoring/prometheus-operator-6c8df6d4b-8kgdq"] Mar 18 08:52:50.707787 master-0 kubenswrapper[7620]: I0318 08:52:50.707736 7620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-6c8df6d4b-8kgdq" Mar 18 08:52:50.709614 master-0 kubenswrapper[7620]: I0318 08:52:50.709557 7620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-kube-rbac-proxy-config" Mar 18 08:52:50.710112 master-0 kubenswrapper[7620]: I0318 08:52:50.710052 7620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-tls" Mar 18 08:52:50.712041 master-0 kubenswrapper[7620]: I0318 08:52:50.711990 7620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-client-ca" Mar 18 08:52:50.712622 master-0 kubenswrapper[7620]: I0318 08:52:50.712580 7620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-dockercfg-kmxfz" Mar 18 08:52:50.715109 master-0 kubenswrapper[7620]: I0318 08:52:50.715025 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-2jsz9" event={"ID":"3e96b35f-c57a-4e01-82f7-894ea16ac5b8","Type":"ContainerStarted","Data":"52148427b5f0953e28de4f5c5b1df2635f75a24a39ebc424cf86be47ab6c0b60"} Mar 18 08:52:50.715241 master-0 kubenswrapper[7620]: I0318 08:52:50.715127 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-2jsz9" event={"ID":"3e96b35f-c57a-4e01-82f7-894ea16ac5b8","Type":"ContainerStarted","Data":"cab7f3dd54d1235751e5892dcbba68fcd420bde6fbdec0b1e4ae52ac6f473f51"} Mar 18 08:52:50.765512 master-0 kubenswrapper[7620]: I0318 08:52:50.765383 7620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-2jsz9" 
podStartSLOduration=1.765359761 podStartE2EDuration="1.765359761s" podCreationTimestamp="2026-03-18 08:52:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 08:52:50.763562839 +0000 UTC m=+234.758344611" watchObservedRunningTime="2026-03-18 08:52:50.765359761 +0000 UTC m=+234.760141513" Mar 18 08:52:50.772328 master-0 kubenswrapper[7620]: I0318 08:52:50.772276 7620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-6c8df6d4b-8kgdq"] Mar 18 08:52:50.810318 master-0 kubenswrapper[7620]: I0318 08:52:50.810262 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x5q4t\" (UniqueName: \"kubernetes.io/projected/d71aa1b9-6eb5-4331-b959-8930e10817b4-kube-api-access-x5q4t\") pod \"prometheus-operator-6c8df6d4b-8kgdq\" (UID: \"d71aa1b9-6eb5-4331-b959-8930e10817b4\") " pod="openshift-monitoring/prometheus-operator-6c8df6d4b-8kgdq" Mar 18 08:52:50.810633 master-0 kubenswrapper[7620]: I0318 08:52:50.810573 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/d71aa1b9-6eb5-4331-b959-8930e10817b4-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-6c8df6d4b-8kgdq\" (UID: \"d71aa1b9-6eb5-4331-b959-8930e10817b4\") " pod="openshift-monitoring/prometheus-operator-6c8df6d4b-8kgdq" Mar 18 08:52:50.810965 master-0 kubenswrapper[7620]: I0318 08:52:50.810936 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/d71aa1b9-6eb5-4331-b959-8930e10817b4-metrics-client-ca\") pod \"prometheus-operator-6c8df6d4b-8kgdq\" (UID: \"d71aa1b9-6eb5-4331-b959-8930e10817b4\") " pod="openshift-monitoring/prometheus-operator-6c8df6d4b-8kgdq" Mar 18 
08:52:50.811627 master-0 kubenswrapper[7620]: I0318 08:52:50.811570 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/d71aa1b9-6eb5-4331-b959-8930e10817b4-prometheus-operator-tls\") pod \"prometheus-operator-6c8df6d4b-8kgdq\" (UID: \"d71aa1b9-6eb5-4331-b959-8930e10817b4\") " pod="openshift-monitoring/prometheus-operator-6c8df6d4b-8kgdq" Mar 18 08:52:50.914023 master-0 kubenswrapper[7620]: I0318 08:52:50.913915 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/d71aa1b9-6eb5-4331-b959-8930e10817b4-prometheus-operator-tls\") pod \"prometheus-operator-6c8df6d4b-8kgdq\" (UID: \"d71aa1b9-6eb5-4331-b959-8930e10817b4\") " pod="openshift-monitoring/prometheus-operator-6c8df6d4b-8kgdq" Mar 18 08:52:50.914438 master-0 kubenswrapper[7620]: I0318 08:52:50.914291 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x5q4t\" (UniqueName: \"kubernetes.io/projected/d71aa1b9-6eb5-4331-b959-8930e10817b4-kube-api-access-x5q4t\") pod \"prometheus-operator-6c8df6d4b-8kgdq\" (UID: \"d71aa1b9-6eb5-4331-b959-8930e10817b4\") " pod="openshift-monitoring/prometheus-operator-6c8df6d4b-8kgdq" Mar 18 08:52:50.915031 master-0 kubenswrapper[7620]: I0318 08:52:50.914990 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/d71aa1b9-6eb5-4331-b959-8930e10817b4-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-6c8df6d4b-8kgdq\" (UID: \"d71aa1b9-6eb5-4331-b959-8930e10817b4\") " pod="openshift-monitoring/prometheus-operator-6c8df6d4b-8kgdq" Mar 18 08:52:50.915809 master-0 kubenswrapper[7620]: I0318 08:52:50.915774 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" 
(UniqueName: \"kubernetes.io/configmap/d71aa1b9-6eb5-4331-b959-8930e10817b4-metrics-client-ca\") pod \"prometheus-operator-6c8df6d4b-8kgdq\" (UID: \"d71aa1b9-6eb5-4331-b959-8930e10817b4\") " pod="openshift-monitoring/prometheus-operator-6c8df6d4b-8kgdq" Mar 18 08:52:50.917652 master-0 kubenswrapper[7620]: I0318 08:52:50.917402 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/d71aa1b9-6eb5-4331-b959-8930e10817b4-metrics-client-ca\") pod \"prometheus-operator-6c8df6d4b-8kgdq\" (UID: \"d71aa1b9-6eb5-4331-b959-8930e10817b4\") " pod="openshift-monitoring/prometheus-operator-6c8df6d4b-8kgdq" Mar 18 08:52:50.920206 master-0 kubenswrapper[7620]: I0318 08:52:50.920158 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/d71aa1b9-6eb5-4331-b959-8930e10817b4-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-6c8df6d4b-8kgdq\" (UID: \"d71aa1b9-6eb5-4331-b959-8930e10817b4\") " pod="openshift-monitoring/prometheus-operator-6c8df6d4b-8kgdq" Mar 18 08:52:50.926797 master-0 kubenswrapper[7620]: I0318 08:52:50.926734 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/d71aa1b9-6eb5-4331-b959-8930e10817b4-prometheus-operator-tls\") pod \"prometheus-operator-6c8df6d4b-8kgdq\" (UID: \"d71aa1b9-6eb5-4331-b959-8930e10817b4\") " pod="openshift-monitoring/prometheus-operator-6c8df6d4b-8kgdq" Mar 18 08:52:50.933225 master-0 kubenswrapper[7620]: I0318 08:52:50.933174 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x5q4t\" (UniqueName: \"kubernetes.io/projected/d71aa1b9-6eb5-4331-b959-8930e10817b4-kube-api-access-x5q4t\") pod \"prometheus-operator-6c8df6d4b-8kgdq\" (UID: \"d71aa1b9-6eb5-4331-b959-8930e10817b4\") " 
pod="openshift-monitoring/prometheus-operator-6c8df6d4b-8kgdq" Mar 18 08:52:51.032306 master-0 kubenswrapper[7620]: I0318 08:52:51.032112 7620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-6c8df6d4b-8kgdq" Mar 18 08:52:51.405037 master-0 kubenswrapper[7620]: I0318 08:52:51.404438 7620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-6c8df6d4b-8kgdq"] Mar 18 08:52:51.482174 master-0 kubenswrapper[7620]: I0318 08:52:51.481982 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:52:51.482174 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld Mar 18 08:52:51.482174 master-0 kubenswrapper[7620]: [+]process-running ok Mar 18 08:52:51.482174 master-0 kubenswrapper[7620]: healthz check failed Mar 18 08:52:51.482174 master-0 kubenswrapper[7620]: I0318 08:52:51.482078 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:52:51.724130 master-0 kubenswrapper[7620]: I0318 08:52:51.724025 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-6c8df6d4b-8kgdq" event={"ID":"d71aa1b9-6eb5-4331-b959-8930e10817b4","Type":"ContainerStarted","Data":"ea77244427e21f197396c97f841977fffdf6891b18e6c927b783ae59d8efff47"} Mar 18 08:52:52.482333 master-0 kubenswrapper[7620]: I0318 08:52:52.482248 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" 
start-of-body=[-]backend-http failed: reason withheld Mar 18 08:52:52.482333 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld Mar 18 08:52:52.482333 master-0 kubenswrapper[7620]: [+]process-running ok Mar 18 08:52:52.482333 master-0 kubenswrapper[7620]: healthz check failed Mar 18 08:52:52.482923 master-0 kubenswrapper[7620]: I0318 08:52:52.482373 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:52:53.481039 master-0 kubenswrapper[7620]: I0318 08:52:53.480906 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:52:53.481039 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld Mar 18 08:52:53.481039 master-0 kubenswrapper[7620]: [+]process-running ok Mar 18 08:52:53.481039 master-0 kubenswrapper[7620]: healthz check failed Mar 18 08:52:53.481039 master-0 kubenswrapper[7620]: I0318 08:52:53.481004 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:52:53.741497 master-0 kubenswrapper[7620]: I0318 08:52:53.741451 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-6c8df6d4b-8kgdq" event={"ID":"d71aa1b9-6eb5-4331-b959-8930e10817b4","Type":"ContainerStarted","Data":"d9ef691b919d435ed79f78e88292da25483ca26a7d181e09ed541bd058f8c325"} Mar 18 08:52:53.742096 master-0 kubenswrapper[7620]: I0318 08:52:53.742066 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-monitoring/prometheus-operator-6c8df6d4b-8kgdq" event={"ID":"d71aa1b9-6eb5-4331-b959-8930e10817b4","Type":"ContainerStarted","Data":"9eff70f0601b07ced0b9cb8d2b3d730e8341f8d467457c6ee66d3943bf79cb7f"}
Mar 18 08:52:53.762089 master-0 kubenswrapper[7620]: I0318 08:52:53.761848 7620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-operator-6c8df6d4b-8kgdq" podStartSLOduration=2.260959599 podStartE2EDuration="3.761814399s" podCreationTimestamp="2026-03-18 08:52:50 +0000 UTC" firstStartedPulling="2026-03-18 08:52:51.423375077 +0000 UTC m=+235.418156829" lastFinishedPulling="2026-03-18 08:52:52.924229867 +0000 UTC m=+236.919011629" observedRunningTime="2026-03-18 08:52:53.759943055 +0000 UTC m=+237.754724827" watchObservedRunningTime="2026-03-18 08:52:53.761814399 +0000 UTC m=+237.756596151"
Mar 18 08:52:54.482345 master-0 kubenswrapper[7620]: I0318 08:52:54.482261 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 08:52:54.482345 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld
Mar 18 08:52:54.482345 master-0 kubenswrapper[7620]: [+]process-running ok
Mar 18 08:52:54.482345 master-0 kubenswrapper[7620]: healthz check failed
Mar 18 08:52:54.482949 master-0 kubenswrapper[7620]: I0318 08:52:54.482899 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 08:52:55.481216 master-0 kubenswrapper[7620]: I0318 08:52:55.481091 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 08:52:55.481216 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld
Mar 18 08:52:55.481216 master-0 kubenswrapper[7620]: [+]process-running ok
Mar 18 08:52:55.481216 master-0 kubenswrapper[7620]: healthz check failed
Mar 18 08:52:55.481927 master-0 kubenswrapper[7620]: I0318 08:52:55.481244 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 08:52:56.094317 master-0 kubenswrapper[7620]: I0318 08:52:56.094245 7620 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/node-exporter-75szk"]
Mar 18 08:52:56.095682 master-0 kubenswrapper[7620]: I0318 08:52:56.095654 7620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/node-exporter-75szk"
Mar 18 08:52:56.098588 master-0 kubenswrapper[7620]: I0318 08:52:56.098546 7620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-tls"
Mar 18 08:52:56.098786 master-0 kubenswrapper[7620]: I0318 08:52:56.098767 7620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-dockercfg-2wdmv"
Mar 18 08:52:56.098967 master-0 kubenswrapper[7620]: I0318 08:52:56.098949 7620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-kube-rbac-proxy-config"
Mar 18 08:52:56.127020 master-0 kubenswrapper[7620]: I0318 08:52:56.126962 7620 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/openshift-state-metrics-5dc6c74576-dsq5f"]
Mar 18 08:52:56.128391 master-0 kubenswrapper[7620]: I0318 08:52:56.128366 7620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/openshift-state-metrics-5dc6c74576-dsq5f"
Mar 18 08:52:56.133400 master-0 kubenswrapper[7620]: I0318 08:52:56.133356 7620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-dockercfg-vc9fv"
Mar 18 08:52:56.133481 master-0 kubenswrapper[7620]: I0318 08:52:56.133364 7620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-kube-rbac-proxy-config"
Mar 18 08:52:56.133567 master-0 kubenswrapper[7620]: I0318 08:52:56.133356 7620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-tls"
Mar 18 08:52:56.151551 master-0 kubenswrapper[7620]: I0318 08:52:56.151494 7620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/openshift-state-metrics-5dc6c74576-dsq5f"]
Mar 18 08:52:56.177716 master-0 kubenswrapper[7620]: I0318 08:52:56.155491 7620 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/kube-state-metrics-7bbc969446-dblgh"]
Mar 18 08:52:56.177716 master-0 kubenswrapper[7620]: I0318 08:52:56.156951 7620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/kube-state-metrics-7bbc969446-dblgh"
Mar 18 08:52:56.177716 master-0 kubenswrapper[7620]: I0318 08:52:56.159671 7620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-tls"
Mar 18 08:52:56.177716 master-0 kubenswrapper[7620]: I0318 08:52:56.160012 7620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-dockercfg-9s5l6"
Mar 18 08:52:56.177716 master-0 kubenswrapper[7620]: I0318 08:52:56.160169 7620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-kube-rbac-proxy-config"
Mar 18 08:52:56.177716 master-0 kubenswrapper[7620]: I0318 08:52:56.161700 7620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-state-metrics-custom-resource-state-configmap"
Mar 18 08:52:56.177716 master-0 kubenswrapper[7620]: I0318 08:52:56.174721 7620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/kube-state-metrics-7bbc969446-dblgh"]
Mar 18 08:52:56.221877 master-0 kubenswrapper[7620]: I0318 08:52:56.215684 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/91a6fa86-8c58-43bc-a2d4-2b20901269f7-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-7bbc969446-dblgh\" (UID: \"91a6fa86-8c58-43bc-a2d4-2b20901269f7\") " pod="openshift-monitoring/kube-state-metrics-7bbc969446-dblgh"
Mar 18 08:52:56.221877 master-0 kubenswrapper[7620]: I0318 08:52:56.215766 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/91a6fa86-8c58-43bc-a2d4-2b20901269f7-metrics-client-ca\") pod \"kube-state-metrics-7bbc969446-dblgh\" (UID: \"91a6fa86-8c58-43bc-a2d4-2b20901269f7\") " pod="openshift-monitoring/kube-state-metrics-7bbc969446-dblgh"
Mar 18 08:52:56.221877 master-0 kubenswrapper[7620]: I0318 08:52:56.215804 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/06cbd48a-1f1d-4734-8d57-e1b6824879b6-metrics-client-ca\") pod \"openshift-state-metrics-5dc6c74576-dsq5f\" (UID: \"06cbd48a-1f1d-4734-8d57-e1b6824879b6\") " pod="openshift-monitoring/openshift-state-metrics-5dc6c74576-dsq5f"
Mar 18 08:52:56.221877 master-0 kubenswrapper[7620]: I0318 08:52:56.215828 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rpxfc\" (UniqueName: \"kubernetes.io/projected/91a6fa86-8c58-43bc-a2d4-2b20901269f7-kube-api-access-rpxfc\") pod \"kube-state-metrics-7bbc969446-dblgh\" (UID: \"91a6fa86-8c58-43bc-a2d4-2b20901269f7\") " pod="openshift-monitoring/kube-state-metrics-7bbc969446-dblgh"
Mar 18 08:52:56.221877 master-0 kubenswrapper[7620]: I0318 08:52:56.215874 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/91a6fa86-8c58-43bc-a2d4-2b20901269f7-kube-state-metrics-tls\") pod \"kube-state-metrics-7bbc969446-dblgh\" (UID: \"91a6fa86-8c58-43bc-a2d4-2b20901269f7\") " pod="openshift-monitoring/kube-state-metrics-7bbc969446-dblgh"
Mar 18 08:52:56.221877 master-0 kubenswrapper[7620]: I0318 08:52:56.215898 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/06cbd48a-1f1d-4734-8d57-e1b6824879b6-openshift-state-metrics-tls\") pod \"openshift-state-metrics-5dc6c74576-dsq5f\" (UID: \"06cbd48a-1f1d-4734-8d57-e1b6824879b6\") " pod="openshift-monitoring/openshift-state-metrics-5dc6c74576-dsq5f"
Mar 18 08:52:56.221877 master-0 kubenswrapper[7620]: I0318 08:52:56.215938 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/4146a62d-e37b-4295-90ca-b23f5e3d1112-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-75szk\" (UID: \"4146a62d-e37b-4295-90ca-b23f5e3d1112\") " pod="openshift-monitoring/node-exporter-75szk"
Mar 18 08:52:56.221877 master-0 kubenswrapper[7620]: I0318 08:52:56.215962 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/91a6fa86-8c58-43bc-a2d4-2b20901269f7-volume-directive-shadow\") pod \"kube-state-metrics-7bbc969446-dblgh\" (UID: \"91a6fa86-8c58-43bc-a2d4-2b20901269f7\") " pod="openshift-monitoring/kube-state-metrics-7bbc969446-dblgh"
Mar 18 08:52:56.221877 master-0 kubenswrapper[7620]: I0318 08:52:56.215994 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/91a6fa86-8c58-43bc-a2d4-2b20901269f7-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-7bbc969446-dblgh\" (UID: \"91a6fa86-8c58-43bc-a2d4-2b20901269f7\") " pod="openshift-monitoring/kube-state-metrics-7bbc969446-dblgh"
Mar 18 08:52:56.221877 master-0 kubenswrapper[7620]: I0318 08:52:56.216029 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/4146a62d-e37b-4295-90ca-b23f5e3d1112-root\") pod \"node-exporter-75szk\" (UID: \"4146a62d-e37b-4295-90ca-b23f5e3d1112\") " pod="openshift-monitoring/node-exporter-75szk"
Mar 18 08:52:56.221877 master-0 kubenswrapper[7620]: I0318 08:52:56.216057 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/4146a62d-e37b-4295-90ca-b23f5e3d1112-node-exporter-textfile\") pod \"node-exporter-75szk\" (UID: \"4146a62d-e37b-4295-90ca-b23f5e3d1112\") " pod="openshift-monitoring/node-exporter-75szk"
Mar 18 08:52:56.221877 master-0 kubenswrapper[7620]: I0318 08:52:56.216091 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/4146a62d-e37b-4295-90ca-b23f5e3d1112-node-exporter-wtmp\") pod \"node-exporter-75szk\" (UID: \"4146a62d-e37b-4295-90ca-b23f5e3d1112\") " pod="openshift-monitoring/node-exporter-75szk"
Mar 18 08:52:56.221877 master-0 kubenswrapper[7620]: I0318 08:52:56.216119 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/4146a62d-e37b-4295-90ca-b23f5e3d1112-metrics-client-ca\") pod \"node-exporter-75szk\" (UID: \"4146a62d-e37b-4295-90ca-b23f5e3d1112\") " pod="openshift-monitoring/node-exporter-75szk"
Mar 18 08:52:56.221877 master-0 kubenswrapper[7620]: I0318 08:52:56.216149 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/06cbd48a-1f1d-4734-8d57-e1b6824879b6-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-5dc6c74576-dsq5f\" (UID: \"06cbd48a-1f1d-4734-8d57-e1b6824879b6\") " pod="openshift-monitoring/openshift-state-metrics-5dc6c74576-dsq5f"
Mar 18 08:52:56.221877 master-0 kubenswrapper[7620]: I0318 08:52:56.216176 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4r7hx\" (UniqueName: \"kubernetes.io/projected/4146a62d-e37b-4295-90ca-b23f5e3d1112-kube-api-access-4r7hx\") pod \"node-exporter-75szk\" (UID: \"4146a62d-e37b-4295-90ca-b23f5e3d1112\") " pod="openshift-monitoring/node-exporter-75szk"
Mar 18 08:52:56.221877 master-0 kubenswrapper[7620]: I0318 08:52:56.216202 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ltlf6\" (UniqueName: \"kubernetes.io/projected/06cbd48a-1f1d-4734-8d57-e1b6824879b6-kube-api-access-ltlf6\") pod \"openshift-state-metrics-5dc6c74576-dsq5f\" (UID: \"06cbd48a-1f1d-4734-8d57-e1b6824879b6\") " pod="openshift-monitoring/openshift-state-metrics-5dc6c74576-dsq5f"
Mar 18 08:52:56.221877 master-0 kubenswrapper[7620]: I0318 08:52:56.216226 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/4146a62d-e37b-4295-90ca-b23f5e3d1112-sys\") pod \"node-exporter-75szk\" (UID: \"4146a62d-e37b-4295-90ca-b23f5e3d1112\") " pod="openshift-monitoring/node-exporter-75szk"
Mar 18 08:52:56.221877 master-0 kubenswrapper[7620]: I0318 08:52:56.216266 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/4146a62d-e37b-4295-90ca-b23f5e3d1112-node-exporter-tls\") pod \"node-exporter-75szk\" (UID: \"4146a62d-e37b-4295-90ca-b23f5e3d1112\") " pod="openshift-monitoring/node-exporter-75szk"
Mar 18 08:52:56.317874 master-0 kubenswrapper[7620]: I0318 08:52:56.317418 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/91a6fa86-8c58-43bc-a2d4-2b20901269f7-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-7bbc969446-dblgh\" (UID: \"91a6fa86-8c58-43bc-a2d4-2b20901269f7\") " pod="openshift-monitoring/kube-state-metrics-7bbc969446-dblgh"
Mar 18 08:52:56.317874 master-0 kubenswrapper[7620]: I0318 08:52:56.317494 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/4146a62d-e37b-4295-90ca-b23f5e3d1112-root\") pod \"node-exporter-75szk\" (UID: \"4146a62d-e37b-4295-90ca-b23f5e3d1112\") " pod="openshift-monitoring/node-exporter-75szk"
Mar 18 08:52:56.317874 master-0 kubenswrapper[7620]: I0318 08:52:56.317735 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/4146a62d-e37b-4295-90ca-b23f5e3d1112-node-exporter-textfile\") pod \"node-exporter-75szk\" (UID: \"4146a62d-e37b-4295-90ca-b23f5e3d1112\") " pod="openshift-monitoring/node-exporter-75szk"
Mar 18 08:52:56.317874 master-0 kubenswrapper[7620]: I0318 08:52:56.317817 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/4146a62d-e37b-4295-90ca-b23f5e3d1112-node-exporter-wtmp\") pod \"node-exporter-75szk\" (UID: \"4146a62d-e37b-4295-90ca-b23f5e3d1112\") " pod="openshift-monitoring/node-exporter-75szk"
Mar 18 08:52:56.317874 master-0 kubenswrapper[7620]: I0318 08:52:56.317842 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/4146a62d-e37b-4295-90ca-b23f5e3d1112-metrics-client-ca\") pod \"node-exporter-75szk\" (UID: \"4146a62d-e37b-4295-90ca-b23f5e3d1112\") " pod="openshift-monitoring/node-exporter-75szk"
Mar 18 08:52:56.318209 master-0 kubenswrapper[7620]: I0318 08:52:56.317910 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/06cbd48a-1f1d-4734-8d57-e1b6824879b6-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-5dc6c74576-dsq5f\" (UID: \"06cbd48a-1f1d-4734-8d57-e1b6824879b6\") " pod="openshift-monitoring/openshift-state-metrics-5dc6c74576-dsq5f"
Mar 18 08:52:56.318209 master-0 kubenswrapper[7620]: I0318 08:52:56.317943 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ltlf6\" (UniqueName: \"kubernetes.io/projected/06cbd48a-1f1d-4734-8d57-e1b6824879b6-kube-api-access-ltlf6\") pod \"openshift-state-metrics-5dc6c74576-dsq5f\" (UID: \"06cbd48a-1f1d-4734-8d57-e1b6824879b6\") " pod="openshift-monitoring/openshift-state-metrics-5dc6c74576-dsq5f"
Mar 18 08:52:56.318209 master-0 kubenswrapper[7620]: I0318 08:52:56.317965 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4r7hx\" (UniqueName: \"kubernetes.io/projected/4146a62d-e37b-4295-90ca-b23f5e3d1112-kube-api-access-4r7hx\") pod \"node-exporter-75szk\" (UID: \"4146a62d-e37b-4295-90ca-b23f5e3d1112\") " pod="openshift-monitoring/node-exporter-75szk"
Mar 18 08:52:56.318209 master-0 kubenswrapper[7620]: I0318 08:52:56.317996 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/4146a62d-e37b-4295-90ca-b23f5e3d1112-sys\") pod \"node-exporter-75szk\" (UID: \"4146a62d-e37b-4295-90ca-b23f5e3d1112\") " pod="openshift-monitoring/node-exporter-75szk"
Mar 18 08:52:56.318209 master-0 kubenswrapper[7620]: I0318 08:52:56.318083 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/4146a62d-e37b-4295-90ca-b23f5e3d1112-node-exporter-tls\") pod \"node-exporter-75szk\" (UID: \"4146a62d-e37b-4295-90ca-b23f5e3d1112\") " pod="openshift-monitoring/node-exporter-75szk"
Mar 18 08:52:56.318209 master-0 kubenswrapper[7620]: I0318 08:52:56.318145 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/91a6fa86-8c58-43bc-a2d4-2b20901269f7-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-7bbc969446-dblgh\" (UID: \"91a6fa86-8c58-43bc-a2d4-2b20901269f7\") " pod="openshift-monitoring/kube-state-metrics-7bbc969446-dblgh"
Mar 18 08:52:56.321869 master-0 kubenswrapper[7620]: I0318 08:52:56.318922 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/4146a62d-e37b-4295-90ca-b23f5e3d1112-sys\") pod \"node-exporter-75szk\" (UID: \"4146a62d-e37b-4295-90ca-b23f5e3d1112\") " pod="openshift-monitoring/node-exporter-75szk"
Mar 18 08:52:56.321869 master-0 kubenswrapper[7620]: I0318 08:52:56.319033 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/4146a62d-e37b-4295-90ca-b23f5e3d1112-metrics-client-ca\") pod \"node-exporter-75szk\" (UID: \"4146a62d-e37b-4295-90ca-b23f5e3d1112\") " pod="openshift-monitoring/node-exporter-75szk"
Mar 18 08:52:56.321869 master-0 kubenswrapper[7620]: I0318 08:52:56.319100 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/91a6fa86-8c58-43bc-a2d4-2b20901269f7-metrics-client-ca\") pod \"kube-state-metrics-7bbc969446-dblgh\" (UID: \"91a6fa86-8c58-43bc-a2d4-2b20901269f7\") " pod="openshift-monitoring/kube-state-metrics-7bbc969446-dblgh"
Mar 18 08:52:56.321869 master-0 kubenswrapper[7620]: I0318 08:52:56.319139 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/06cbd48a-1f1d-4734-8d57-e1b6824879b6-metrics-client-ca\") pod \"openshift-state-metrics-5dc6c74576-dsq5f\" (UID: \"06cbd48a-1f1d-4734-8d57-e1b6824879b6\") " pod="openshift-monitoring/openshift-state-metrics-5dc6c74576-dsq5f"
Mar 18 08:52:56.321869 master-0 kubenswrapper[7620]: I0318 08:52:56.319162 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rpxfc\" (UniqueName: \"kubernetes.io/projected/91a6fa86-8c58-43bc-a2d4-2b20901269f7-kube-api-access-rpxfc\") pod \"kube-state-metrics-7bbc969446-dblgh\" (UID: \"91a6fa86-8c58-43bc-a2d4-2b20901269f7\") " pod="openshift-monitoring/kube-state-metrics-7bbc969446-dblgh"
Mar 18 08:52:56.321869 master-0 kubenswrapper[7620]: I0318 08:52:56.319205 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/91a6fa86-8c58-43bc-a2d4-2b20901269f7-kube-state-metrics-tls\") pod \"kube-state-metrics-7bbc969446-dblgh\" (UID: \"91a6fa86-8c58-43bc-a2d4-2b20901269f7\") " pod="openshift-monitoring/kube-state-metrics-7bbc969446-dblgh"
Mar 18 08:52:56.321869 master-0 kubenswrapper[7620]: I0318 08:52:56.319240 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/06cbd48a-1f1d-4734-8d57-e1b6824879b6-openshift-state-metrics-tls\") pod \"openshift-state-metrics-5dc6c74576-dsq5f\" (UID: \"06cbd48a-1f1d-4734-8d57-e1b6824879b6\") " pod="openshift-monitoring/openshift-state-metrics-5dc6c74576-dsq5f"
Mar 18 08:52:56.321869 master-0 kubenswrapper[7620]: I0318 08:52:56.319279 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/4146a62d-e37b-4295-90ca-b23f5e3d1112-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-75szk\" (UID: \"4146a62d-e37b-4295-90ca-b23f5e3d1112\") " pod="openshift-monitoring/node-exporter-75szk"
Mar 18 08:52:56.321869 master-0 kubenswrapper[7620]: I0318 08:52:56.319309 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/91a6fa86-8c58-43bc-a2d4-2b20901269f7-volume-directive-shadow\") pod \"kube-state-metrics-7bbc969446-dblgh\" (UID: \"91a6fa86-8c58-43bc-a2d4-2b20901269f7\") " pod="openshift-monitoring/kube-state-metrics-7bbc969446-dblgh"
Mar 18 08:52:56.321869 master-0 kubenswrapper[7620]: I0318 08:52:56.319481 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"root\" (UniqueName: \"kubernetes.io/host-path/4146a62d-e37b-4295-90ca-b23f5e3d1112-root\") pod \"node-exporter-75szk\" (UID: \"4146a62d-e37b-4295-90ca-b23f5e3d1112\") " pod="openshift-monitoring/node-exporter-75szk"
Mar 18 08:52:56.321869 master-0 kubenswrapper[7620]: I0318 08:52:56.319916 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/4146a62d-e37b-4295-90ca-b23f5e3d1112-node-exporter-wtmp\") pod \"node-exporter-75szk\" (UID: \"4146a62d-e37b-4295-90ca-b23f5e3d1112\") " pod="openshift-monitoring/node-exporter-75szk"
Mar 18 08:52:56.321869 master-0 kubenswrapper[7620]: I0318 08:52:56.319998 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/4146a62d-e37b-4295-90ca-b23f5e3d1112-node-exporter-textfile\") pod \"node-exporter-75szk\" (UID: \"4146a62d-e37b-4295-90ca-b23f5e3d1112\") " pod="openshift-monitoring/node-exporter-75szk"
Mar 18 08:52:56.321869 master-0 kubenswrapper[7620]: I0318 08:52:56.320532 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/06cbd48a-1f1d-4734-8d57-e1b6824879b6-metrics-client-ca\") pod \"openshift-state-metrics-5dc6c74576-dsq5f\" (UID: \"06cbd48a-1f1d-4734-8d57-e1b6824879b6\") " pod="openshift-monitoring/openshift-state-metrics-5dc6c74576-dsq5f"
Mar 18 08:52:56.321869 master-0 kubenswrapper[7620]: I0318 08:52:56.320545 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/91a6fa86-8c58-43bc-a2d4-2b20901269f7-volume-directive-shadow\") pod \"kube-state-metrics-7bbc969446-dblgh\" (UID: \"91a6fa86-8c58-43bc-a2d4-2b20901269f7\") " pod="openshift-monitoring/kube-state-metrics-7bbc969446-dblgh"
Mar 18 08:52:56.321869 master-0 kubenswrapper[7620]: I0318 08:52:56.321276 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/91a6fa86-8c58-43bc-a2d4-2b20901269f7-metrics-client-ca\") pod \"kube-state-metrics-7bbc969446-dblgh\" (UID: \"91a6fa86-8c58-43bc-a2d4-2b20901269f7\") " pod="openshift-monitoring/kube-state-metrics-7bbc969446-dblgh"
Mar 18 08:52:56.325879 master-0 kubenswrapper[7620]: I0318 08:52:56.325557 7620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-kube-rbac-proxy-config"
Mar 18 08:52:56.325879 master-0 kubenswrapper[7620]: I0318 08:52:56.325790 7620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-kube-rbac-proxy-config"
Mar 18 08:52:56.325997 master-0 kubenswrapper[7620]: I0318 08:52:56.325911 7620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-tls"
Mar 18 08:52:56.326035 master-0 kubenswrapper[7620]: I0318 08:52:56.326015 7620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-kube-rbac-proxy-config"
Mar 18 08:52:56.327511 master-0 kubenswrapper[7620]: I0318 08:52:56.326140 7620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-state-metrics-custom-resource-state-configmap"
Mar 18 08:52:56.327511 master-0 kubenswrapper[7620]: I0318 08:52:56.326256 7620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-tls"
Mar 18 08:52:56.327511 master-0 kubenswrapper[7620]: I0318 08:52:56.326373 7620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-tls"
Mar 18 08:52:56.330869 master-0 kubenswrapper[7620]: I0318 08:52:56.328875 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/91a6fa86-8c58-43bc-a2d4-2b20901269f7-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-7bbc969446-dblgh\" (UID: \"91a6fa86-8c58-43bc-a2d4-2b20901269f7\") " pod="openshift-monitoring/kube-state-metrics-7bbc969446-dblgh"
Mar 18 08:52:56.330869 master-0 kubenswrapper[7620]: E0318 08:52:56.329982 7620 secret.go:189] Couldn't get secret openshift-monitoring/kube-state-metrics-tls: secret "kube-state-metrics-tls" not found
Mar 18 08:52:56.330869 master-0 kubenswrapper[7620]: E0318 08:52:56.330055 7620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/91a6fa86-8c58-43bc-a2d4-2b20901269f7-kube-state-metrics-tls podName:91a6fa86-8c58-43bc-a2d4-2b20901269f7 nodeName:}" failed. No retries permitted until 2026-03-18 08:52:56.830032916 +0000 UTC m=+240.824814668 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-state-metrics-tls" (UniqueName: "kubernetes.io/secret/91a6fa86-8c58-43bc-a2d4-2b20901269f7-kube-state-metrics-tls") pod "kube-state-metrics-7bbc969446-dblgh" (UID: "91a6fa86-8c58-43bc-a2d4-2b20901269f7") : secret "kube-state-metrics-tls" not found
Mar 18 08:52:56.334872 master-0 kubenswrapper[7620]: E0318 08:52:56.331245 7620 secret.go:189] Couldn't get secret openshift-monitoring/openshift-state-metrics-tls: secret "openshift-state-metrics-tls" not found
Mar 18 08:52:56.334872 master-0 kubenswrapper[7620]: E0318 08:52:56.331314 7620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/06cbd48a-1f1d-4734-8d57-e1b6824879b6-openshift-state-metrics-tls podName:06cbd48a-1f1d-4734-8d57-e1b6824879b6 nodeName:}" failed. No retries permitted until 2026-03-18 08:52:56.831295382 +0000 UTC m=+240.826077194 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "openshift-state-metrics-tls" (UniqueName: "kubernetes.io/secret/06cbd48a-1f1d-4734-8d57-e1b6824879b6-openshift-state-metrics-tls") pod "openshift-state-metrics-5dc6c74576-dsq5f" (UID: "06cbd48a-1f1d-4734-8d57-e1b6824879b6") : secret "openshift-state-metrics-tls" not found
Mar 18 08:52:56.334872 master-0 kubenswrapper[7620]: I0318 08:52:56.334627 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/06cbd48a-1f1d-4734-8d57-e1b6824879b6-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-5dc6c74576-dsq5f\" (UID: \"06cbd48a-1f1d-4734-8d57-e1b6824879b6\") " pod="openshift-monitoring/openshift-state-metrics-5dc6c74576-dsq5f"
Mar 18 08:52:56.338872 master-0 kubenswrapper[7620]: I0318 08:52:56.335085 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/91a6fa86-8c58-43bc-a2d4-2b20901269f7-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-7bbc969446-dblgh\" (UID: \"91a6fa86-8c58-43bc-a2d4-2b20901269f7\") " pod="openshift-monitoring/kube-state-metrics-7bbc969446-dblgh"
Mar 18 08:52:56.340349 master-0 kubenswrapper[7620]: I0318 08:52:56.339569 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/4146a62d-e37b-4295-90ca-b23f5e3d1112-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-75szk\" (UID: \"4146a62d-e37b-4295-90ca-b23f5e3d1112\") " pod="openshift-monitoring/node-exporter-75szk"
Mar 18 08:52:56.344910 master-0 kubenswrapper[7620]: I0318 08:52:56.343966 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rpxfc\" (UniqueName: \"kubernetes.io/projected/91a6fa86-8c58-43bc-a2d4-2b20901269f7-kube-api-access-rpxfc\") pod \"kube-state-metrics-7bbc969446-dblgh\" (UID: \"91a6fa86-8c58-43bc-a2d4-2b20901269f7\") " pod="openshift-monitoring/kube-state-metrics-7bbc969446-dblgh"
Mar 18 08:52:56.349869 master-0 kubenswrapper[7620]: I0318 08:52:56.345105 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4r7hx\" (UniqueName: \"kubernetes.io/projected/4146a62d-e37b-4295-90ca-b23f5e3d1112-kube-api-access-4r7hx\") pod \"node-exporter-75szk\" (UID: \"4146a62d-e37b-4295-90ca-b23f5e3d1112\") " pod="openshift-monitoring/node-exporter-75szk"
Mar 18 08:52:56.349869 master-0 kubenswrapper[7620]: I0318 08:52:56.347036 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/4146a62d-e37b-4295-90ca-b23f5e3d1112-node-exporter-tls\") pod \"node-exporter-75szk\" (UID: \"4146a62d-e37b-4295-90ca-b23f5e3d1112\") " pod="openshift-monitoring/node-exporter-75szk"
Mar 18 08:52:56.349869 master-0 kubenswrapper[7620]: I0318 08:52:56.347830 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ltlf6\" (UniqueName: \"kubernetes.io/projected/06cbd48a-1f1d-4734-8d57-e1b6824879b6-kube-api-access-ltlf6\") pod \"openshift-state-metrics-5dc6c74576-dsq5f\" (UID: \"06cbd48a-1f1d-4734-8d57-e1b6824879b6\") " pod="openshift-monitoring/openshift-state-metrics-5dc6c74576-dsq5f"
Mar 18 08:52:56.421873 master-0 kubenswrapper[7620]: I0318 08:52:56.420199 7620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-dockercfg-2wdmv"
Mar 18 08:52:56.432879 master-0 kubenswrapper[7620]: I0318 08:52:56.427776 7620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/node-exporter-75szk"
Mar 18 08:52:56.482878 master-0 kubenswrapper[7620]: I0318 08:52:56.479466 7620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd"
Mar 18 08:52:56.482878 master-0 kubenswrapper[7620]: I0318 08:52:56.481368 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 08:52:56.482878 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld
Mar 18 08:52:56.482878 master-0 kubenswrapper[7620]: [+]process-running ok
Mar 18 08:52:56.482878 master-0 kubenswrapper[7620]: healthz check failed
Mar 18 08:52:56.482878 master-0 kubenswrapper[7620]: I0318 08:52:56.481462 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 08:52:56.768123 master-0 kubenswrapper[7620]: I0318 08:52:56.767865 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-75szk" event={"ID":"4146a62d-e37b-4295-90ca-b23f5e3d1112","Type":"ContainerStarted","Data":"a5f412f714f8914221964a888babc262e21046db3f1580b324543c6c04c3fbd9"}
Mar 18 08:52:56.929486 master-0 kubenswrapper[7620]: I0318 08:52:56.928691 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/91a6fa86-8c58-43bc-a2d4-2b20901269f7-kube-state-metrics-tls\") pod \"kube-state-metrics-7bbc969446-dblgh\" (UID: \"91a6fa86-8c58-43bc-a2d4-2b20901269f7\") " pod="openshift-monitoring/kube-state-metrics-7bbc969446-dblgh"
Mar 18 08:52:56.929486 master-0 kubenswrapper[7620]: I0318 08:52:56.929480 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/06cbd48a-1f1d-4734-8d57-e1b6824879b6-openshift-state-metrics-tls\") pod \"openshift-state-metrics-5dc6c74576-dsq5f\" (UID: \"06cbd48a-1f1d-4734-8d57-e1b6824879b6\") " pod="openshift-monitoring/openshift-state-metrics-5dc6c74576-dsq5f"
Mar 18 08:52:56.932871 master-0 kubenswrapper[7620]: I0318 08:52:56.932805 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/91a6fa86-8c58-43bc-a2d4-2b20901269f7-kube-state-metrics-tls\") pod \"kube-state-metrics-7bbc969446-dblgh\" (UID: \"91a6fa86-8c58-43bc-a2d4-2b20901269f7\") " pod="openshift-monitoring/kube-state-metrics-7bbc969446-dblgh"
Mar 18 08:52:56.933846 master-0 kubenswrapper[7620]: I0318 08:52:56.933802 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/06cbd48a-1f1d-4734-8d57-e1b6824879b6-openshift-state-metrics-tls\") pod \"openshift-state-metrics-5dc6c74576-dsq5f\" (UID: \"06cbd48a-1f1d-4734-8d57-e1b6824879b6\") " pod="openshift-monitoring/openshift-state-metrics-5dc6c74576-dsq5f"
Mar 18 08:52:57.046662 master-0 kubenswrapper[7620]: I0318 08:52:57.046504 7620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-dockercfg-vc9fv"
Mar 18 08:52:57.057614 master-0 kubenswrapper[7620]: I0318 08:52:57.057566 7620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/openshift-state-metrics-5dc6c74576-dsq5f"
Mar 18 08:52:57.127961 master-0 kubenswrapper[7620]: I0318 08:52:57.127888 7620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-dockercfg-9s5l6"
Mar 18 08:52:57.139050 master-0 kubenswrapper[7620]: I0318 08:52:57.138973 7620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/kube-state-metrics-7bbc969446-dblgh"
Mar 18 08:52:57.455194 master-0 kubenswrapper[7620]: I0318 08:52:57.454609 7620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/kube-state-metrics-7bbc969446-dblgh"]
Mar 18 08:52:57.482301 master-0 kubenswrapper[7620]: I0318 08:52:57.482235 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 08:52:57.482301 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld
Mar 18 08:52:57.482301 master-0 kubenswrapper[7620]: [+]process-running ok
Mar 18 08:52:57.482301 master-0 kubenswrapper[7620]: healthz check failed
Mar 18 08:52:57.483471 master-0 kubenswrapper[7620]: I0318 08:52:57.482316 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 08:52:57.512922 master-0 kubenswrapper[7620]: W0318 08:52:57.510969 7620 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod06cbd48a_1f1d_4734_8d57_e1b6824879b6.slice/crio-52447280dead3b5a28af890c9c1936e68858aa0be2da0967ec252697841e8f7d WatchSource:0}: Error finding container 52447280dead3b5a28af890c9c1936e68858aa0be2da0967ec252697841e8f7d: Status 404 returned error can't find the container with id 52447280dead3b5a28af890c9c1936e68858aa0be2da0967ec252697841e8f7d
Mar 18 08:52:57.518372 master-0 kubenswrapper[7620]: I0318 08:52:57.518331 7620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/openshift-state-metrics-5dc6c74576-dsq5f"]
Mar 18 08:52:57.776998 master-0 kubenswrapper[7620]: I0318 08:52:57.776907 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-7bbc969446-dblgh" event={"ID":"91a6fa86-8c58-43bc-a2d4-2b20901269f7","Type":"ContainerStarted","Data":"0abbacca379cb1aa4703d3e53f8d0cf0d9cc8837c199cd99507dcb84dbe142a8"}
Mar 18 08:52:57.779766 master-0 kubenswrapper[7620]: I0318 08:52:57.779698 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-5dc6c74576-dsq5f" event={"ID":"06cbd48a-1f1d-4734-8d57-e1b6824879b6","Type":"ContainerStarted","Data":"ce90ec5ad0330715fcb7a722109d076372758650502c461f39b95a3109a68cd0"}
Mar 18 08:52:57.779766 master-0 kubenswrapper[7620]: I0318 08:52:57.779752 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-5dc6c74576-dsq5f" event={"ID":"06cbd48a-1f1d-4734-8d57-e1b6824879b6","Type":"ContainerStarted","Data":"52447280dead3b5a28af890c9c1936e68858aa0be2da0967ec252697841e8f7d"}
Mar 18 08:52:58.481814 master-0 kubenswrapper[7620]: I0318 08:52:58.481753 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 08:52:58.481814 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld
Mar 18 08:52:58.481814 master-0 kubenswrapper[7620]: [+]process-running ok
Mar 18 08:52:58.481814 master-0 kubenswrapper[7620]: healthz check failed
Mar 18 08:52:58.482167 master-0 kubenswrapper[7620]: I0318 08:52:58.481864 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:52:58.789082 master-0 kubenswrapper[7620]: I0318 08:52:58.788953 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-5dc6c74576-dsq5f" event={"ID":"06cbd48a-1f1d-4734-8d57-e1b6824879b6","Type":"ContainerStarted","Data":"998322972b919ab66cf0853bc96224baff9eb2633c2163cbc78faf9108aa00e1"} Mar 18 08:52:59.489528 master-0 kubenswrapper[7620]: I0318 08:52:59.489401 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:52:59.489528 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld Mar 18 08:52:59.489528 master-0 kubenswrapper[7620]: [+]process-running ok Mar 18 08:52:59.489528 master-0 kubenswrapper[7620]: healthz check failed Mar 18 08:52:59.489528 master-0 kubenswrapper[7620]: I0318 08:52:59.489473 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:52:59.797371 master-0 kubenswrapper[7620]: I0318 08:52:59.797279 7620 generic.go:334] "Generic (PLEG): container finished" podID="4146a62d-e37b-4295-90ca-b23f5e3d1112" containerID="2fc99621e6e4ad392bd150a56b2542828a2fbbced942d108f4ee62997bcb92eb" exitCode=0 Mar 18 08:52:59.797803 master-0 kubenswrapper[7620]: I0318 08:52:59.797734 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-monitoring/node-exporter-75szk" event={"ID":"4146a62d-e37b-4295-90ca-b23f5e3d1112","Type":"ContainerDied","Data":"2fc99621e6e4ad392bd150a56b2542828a2fbbced942d108f4ee62997bcb92eb"} Mar 18 08:52:59.801028 master-0 kubenswrapper[7620]: I0318 08:52:59.800890 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-7bbc969446-dblgh" event={"ID":"91a6fa86-8c58-43bc-a2d4-2b20901269f7","Type":"ContainerStarted","Data":"3dd3ae9f180b5f0298d590fb11bdbdbc23ea4b6afaf69738f4b793d018c8c78d"} Mar 18 08:52:59.801028 master-0 kubenswrapper[7620]: I0318 08:52:59.800930 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-7bbc969446-dblgh" event={"ID":"91a6fa86-8c58-43bc-a2d4-2b20901269f7","Type":"ContainerStarted","Data":"6bf27372fdb6472e49ec69f86331b98824d4172ba2c770364b4657b9ebbc08e8"} Mar 18 08:52:59.803819 master-0 kubenswrapper[7620]: I0318 08:52:59.803725 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-5dc6c74576-dsq5f" event={"ID":"06cbd48a-1f1d-4734-8d57-e1b6824879b6","Type":"ContainerStarted","Data":"e4a7f98032fd6c63f227f142b970725c6539e79bcbebb20122012a2c1632c607"} Mar 18 08:52:59.862879 master-0 kubenswrapper[7620]: I0318 08:52:59.857901 7620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/openshift-state-metrics-5dc6c74576-dsq5f" podStartSLOduration=2.172553412 podStartE2EDuration="3.85786656s" podCreationTimestamp="2026-03-18 08:52:56 +0000 UTC" firstStartedPulling="2026-03-18 08:52:57.854825982 +0000 UTC m=+241.849607734" lastFinishedPulling="2026-03-18 08:52:59.54013913 +0000 UTC m=+243.534920882" observedRunningTime="2026-03-18 08:52:59.851206287 +0000 UTC m=+243.845988049" watchObservedRunningTime="2026-03-18 08:52:59.85786656 +0000 UTC m=+243.852648312" Mar 18 08:53:00.488878 master-0 kubenswrapper[7620]: I0318 08:53:00.488072 7620 patch_prober.go:28] 
interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:53:00.488878 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld Mar 18 08:53:00.488878 master-0 kubenswrapper[7620]: [+]process-running ok Mar 18 08:53:00.488878 master-0 kubenswrapper[7620]: healthz check failed Mar 18 08:53:00.488878 master-0 kubenswrapper[7620]: I0318 08:53:00.488140 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:53:00.814358 master-0 kubenswrapper[7620]: I0318 08:53:00.814225 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-75szk" event={"ID":"4146a62d-e37b-4295-90ca-b23f5e3d1112","Type":"ContainerStarted","Data":"913064c24a2e59d5383d074f7ecf291fed1133a20f7c8a1f5184c3d99c99392b"} Mar 18 08:53:00.814358 master-0 kubenswrapper[7620]: I0318 08:53:00.814300 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-75szk" event={"ID":"4146a62d-e37b-4295-90ca-b23f5e3d1112","Type":"ContainerStarted","Data":"9e8a056c3740c8ad30864cc339419c2ce00820de31070146240cae537deab398"} Mar 18 08:53:00.817622 master-0 kubenswrapper[7620]: I0318 08:53:00.817554 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-7bbc969446-dblgh" event={"ID":"91a6fa86-8c58-43bc-a2d4-2b20901269f7","Type":"ContainerStarted","Data":"ae00b45c309c939ac8ce9ff2e7260f49c8d66bda2e71731b02c8e50d435dc5b3"} Mar 18 08:53:00.841571 master-0 kubenswrapper[7620]: I0318 08:53:00.841490 7620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/node-exporter-75szk" 
podStartSLOduration=2.496670577 podStartE2EDuration="4.841472175s" podCreationTimestamp="2026-03-18 08:52:56 +0000 UTC" firstStartedPulling="2026-03-18 08:52:56.46201261 +0000 UTC m=+240.456794362" lastFinishedPulling="2026-03-18 08:52:58.806814198 +0000 UTC m=+242.801595960" observedRunningTime="2026-03-18 08:53:00.837136989 +0000 UTC m=+244.831918751" watchObservedRunningTime="2026-03-18 08:53:00.841472175 +0000 UTC m=+244.836253927" Mar 18 08:53:00.866345 master-0 kubenswrapper[7620]: I0318 08:53:00.866256 7620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/kube-state-metrics-7bbc969446-dblgh" podStartSLOduration=3.467551101 podStartE2EDuration="4.866234244s" podCreationTimestamp="2026-03-18 08:52:56 +0000 UTC" firstStartedPulling="2026-03-18 08:52:57.464023749 +0000 UTC m=+241.458805501" lastFinishedPulling="2026-03-18 08:52:58.862706882 +0000 UTC m=+242.857488644" observedRunningTime="2026-03-18 08:53:00.865829662 +0000 UTC m=+244.860611414" watchObservedRunningTime="2026-03-18 08:53:00.866234244 +0000 UTC m=+244.861015996" Mar 18 08:53:01.483531 master-0 kubenswrapper[7620]: I0318 08:53:01.483457 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:53:01.483531 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld Mar 18 08:53:01.483531 master-0 kubenswrapper[7620]: [+]process-running ok Mar 18 08:53:01.483531 master-0 kubenswrapper[7620]: healthz check failed Mar 18 08:53:01.483937 master-0 kubenswrapper[7620]: I0318 08:53:01.483545 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 
08:53:01.635903 master-0 kubenswrapper[7620]: I0318 08:53:01.635834 7620 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/metrics-server-59f88c66c8-z4c2f"] Mar 18 08:53:01.636574 master-0 kubenswrapper[7620]: I0318 08:53:01.636547 7620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/metrics-server-59f88c66c8-z4c2f" Mar 18 08:53:01.639214 master-0 kubenswrapper[7620]: I0318 08:53:01.639167 7620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-client-certs" Mar 18 08:53:01.640048 master-0 kubenswrapper[7620]: I0318 08:53:01.639999 7620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kubelet-serving-ca-bundle" Mar 18 08:53:01.640157 master-0 kubenswrapper[7620]: I0318 08:53:01.640098 7620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-tls" Mar 18 08:53:01.640157 master-0 kubenswrapper[7620]: I0318 08:53:01.640102 7620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-as91djiheslg2" Mar 18 08:53:01.640264 master-0 kubenswrapper[7620]: I0318 08:53:01.640155 7620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-dockercfg-gpcfv" Mar 18 08:53:01.640264 master-0 kubenswrapper[7620]: I0318 08:53:01.640249 7620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-server-audit-profiles" Mar 18 08:53:01.656442 master-0 kubenswrapper[7620]: I0318 08:53:01.656388 7620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/metrics-server-59f88c66c8-z4c2f"] Mar 18 08:53:01.711142 master-0 kubenswrapper[7620]: I0318 08:53:01.709807 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/5320a1da-262a-4b1b-93b4-1df9d4c26eec-client-ca-bundle\") pod \"metrics-server-59f88c66c8-z4c2f\" (UID: \"5320a1da-262a-4b1b-93b4-1df9d4c26eec\") " pod="openshift-monitoring/metrics-server-59f88c66c8-z4c2f" Mar 18 08:53:01.711142 master-0 kubenswrapper[7620]: I0318 08:53:01.709879 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/5320a1da-262a-4b1b-93b4-1df9d4c26eec-metrics-server-audit-profiles\") pod \"metrics-server-59f88c66c8-z4c2f\" (UID: \"5320a1da-262a-4b1b-93b4-1df9d4c26eec\") " pod="openshift-monitoring/metrics-server-59f88c66c8-z4c2f" Mar 18 08:53:01.711142 master-0 kubenswrapper[7620]: I0318 08:53:01.709907 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5320a1da-262a-4b1b-93b4-1df9d4c26eec-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-59f88c66c8-z4c2f\" (UID: \"5320a1da-262a-4b1b-93b4-1df9d4c26eec\") " pod="openshift-monitoring/metrics-server-59f88c66c8-z4c2f" Mar 18 08:53:01.711142 master-0 kubenswrapper[7620]: I0318 08:53:01.709930 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/5320a1da-262a-4b1b-93b4-1df9d4c26eec-secret-metrics-server-tls\") pod \"metrics-server-59f88c66c8-z4c2f\" (UID: \"5320a1da-262a-4b1b-93b4-1df9d4c26eec\") " pod="openshift-monitoring/metrics-server-59f88c66c8-z4c2f" Mar 18 08:53:01.711142 master-0 kubenswrapper[7620]: I0318 08:53:01.709966 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9q8l2\" (UniqueName: \"kubernetes.io/projected/5320a1da-262a-4b1b-93b4-1df9d4c26eec-kube-api-access-9q8l2\") pod \"metrics-server-59f88c66c8-z4c2f\" (UID: 
\"5320a1da-262a-4b1b-93b4-1df9d4c26eec\") " pod="openshift-monitoring/metrics-server-59f88c66c8-z4c2f" Mar 18 08:53:01.711142 master-0 kubenswrapper[7620]: I0318 08:53:01.709991 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/5320a1da-262a-4b1b-93b4-1df9d4c26eec-audit-log\") pod \"metrics-server-59f88c66c8-z4c2f\" (UID: \"5320a1da-262a-4b1b-93b4-1df9d4c26eec\") " pod="openshift-monitoring/metrics-server-59f88c66c8-z4c2f" Mar 18 08:53:01.711142 master-0 kubenswrapper[7620]: I0318 08:53:01.710033 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/5320a1da-262a-4b1b-93b4-1df9d4c26eec-secret-metrics-client-certs\") pod \"metrics-server-59f88c66c8-z4c2f\" (UID: \"5320a1da-262a-4b1b-93b4-1df9d4c26eec\") " pod="openshift-monitoring/metrics-server-59f88c66c8-z4c2f" Mar 18 08:53:01.811300 master-0 kubenswrapper[7620]: I0318 08:53:01.811169 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/5320a1da-262a-4b1b-93b4-1df9d4c26eec-secret-metrics-server-tls\") pod \"metrics-server-59f88c66c8-z4c2f\" (UID: \"5320a1da-262a-4b1b-93b4-1df9d4c26eec\") " pod="openshift-monitoring/metrics-server-59f88c66c8-z4c2f" Mar 18 08:53:01.811300 master-0 kubenswrapper[7620]: I0318 08:53:01.811256 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9q8l2\" (UniqueName: \"kubernetes.io/projected/5320a1da-262a-4b1b-93b4-1df9d4c26eec-kube-api-access-9q8l2\") pod \"metrics-server-59f88c66c8-z4c2f\" (UID: \"5320a1da-262a-4b1b-93b4-1df9d4c26eec\") " pod="openshift-monitoring/metrics-server-59f88c66c8-z4c2f" Mar 18 08:53:01.811526 master-0 kubenswrapper[7620]: I0318 08:53:01.811403 7620 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/5320a1da-262a-4b1b-93b4-1df9d4c26eec-audit-log\") pod \"metrics-server-59f88c66c8-z4c2f\" (UID: \"5320a1da-262a-4b1b-93b4-1df9d4c26eec\") " pod="openshift-monitoring/metrics-server-59f88c66c8-z4c2f" Mar 18 08:53:01.811744 master-0 kubenswrapper[7620]: I0318 08:53:01.811690 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/5320a1da-262a-4b1b-93b4-1df9d4c26eec-secret-metrics-client-certs\") pod \"metrics-server-59f88c66c8-z4c2f\" (UID: \"5320a1da-262a-4b1b-93b4-1df9d4c26eec\") " pod="openshift-monitoring/metrics-server-59f88c66c8-z4c2f" Mar 18 08:53:01.811825 master-0 kubenswrapper[7620]: I0318 08:53:01.811787 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5320a1da-262a-4b1b-93b4-1df9d4c26eec-client-ca-bundle\") pod \"metrics-server-59f88c66c8-z4c2f\" (UID: \"5320a1da-262a-4b1b-93b4-1df9d4c26eec\") " pod="openshift-monitoring/metrics-server-59f88c66c8-z4c2f" Mar 18 08:53:01.811921 master-0 kubenswrapper[7620]: I0318 08:53:01.811904 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/5320a1da-262a-4b1b-93b4-1df9d4c26eec-metrics-server-audit-profiles\") pod \"metrics-server-59f88c66c8-z4c2f\" (UID: \"5320a1da-262a-4b1b-93b4-1df9d4c26eec\") " pod="openshift-monitoring/metrics-server-59f88c66c8-z4c2f" Mar 18 08:53:01.811977 master-0 kubenswrapper[7620]: I0318 08:53:01.811956 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5320a1da-262a-4b1b-93b4-1df9d4c26eec-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-59f88c66c8-z4c2f\" (UID: \"5320a1da-262a-4b1b-93b4-1df9d4c26eec\") " 
pod="openshift-monitoring/metrics-server-59f88c66c8-z4c2f" Mar 18 08:53:01.812297 master-0 kubenswrapper[7620]: I0318 08:53:01.812261 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/5320a1da-262a-4b1b-93b4-1df9d4c26eec-audit-log\") pod \"metrics-server-59f88c66c8-z4c2f\" (UID: \"5320a1da-262a-4b1b-93b4-1df9d4c26eec\") " pod="openshift-monitoring/metrics-server-59f88c66c8-z4c2f" Mar 18 08:53:01.815074 master-0 kubenswrapper[7620]: I0318 08:53:01.813086 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5320a1da-262a-4b1b-93b4-1df9d4c26eec-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-59f88c66c8-z4c2f\" (UID: \"5320a1da-262a-4b1b-93b4-1df9d4c26eec\") " pod="openshift-monitoring/metrics-server-59f88c66c8-z4c2f" Mar 18 08:53:01.815074 master-0 kubenswrapper[7620]: I0318 08:53:01.813303 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/5320a1da-262a-4b1b-93b4-1df9d4c26eec-metrics-server-audit-profiles\") pod \"metrics-server-59f88c66c8-z4c2f\" (UID: \"5320a1da-262a-4b1b-93b4-1df9d4c26eec\") " pod="openshift-monitoring/metrics-server-59f88c66c8-z4c2f" Mar 18 08:53:01.815872 master-0 kubenswrapper[7620]: I0318 08:53:01.815814 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/5320a1da-262a-4b1b-93b4-1df9d4c26eec-secret-metrics-client-certs\") pod \"metrics-server-59f88c66c8-z4c2f\" (UID: \"5320a1da-262a-4b1b-93b4-1df9d4c26eec\") " pod="openshift-monitoring/metrics-server-59f88c66c8-z4c2f" Mar 18 08:53:01.816142 master-0 kubenswrapper[7620]: I0318 08:53:01.816034 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/5320a1da-262a-4b1b-93b4-1df9d4c26eec-client-ca-bundle\") pod \"metrics-server-59f88c66c8-z4c2f\" (UID: \"5320a1da-262a-4b1b-93b4-1df9d4c26eec\") " pod="openshift-monitoring/metrics-server-59f88c66c8-z4c2f" Mar 18 08:53:01.827538 master-0 kubenswrapper[7620]: I0318 08:53:01.826721 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/5320a1da-262a-4b1b-93b4-1df9d4c26eec-secret-metrics-server-tls\") pod \"metrics-server-59f88c66c8-z4c2f\" (UID: \"5320a1da-262a-4b1b-93b4-1df9d4c26eec\") " pod="openshift-monitoring/metrics-server-59f88c66c8-z4c2f" Mar 18 08:53:01.835246 master-0 kubenswrapper[7620]: I0318 08:53:01.835206 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9q8l2\" (UniqueName: \"kubernetes.io/projected/5320a1da-262a-4b1b-93b4-1df9d4c26eec-kube-api-access-9q8l2\") pod \"metrics-server-59f88c66c8-z4c2f\" (UID: \"5320a1da-262a-4b1b-93b4-1df9d4c26eec\") " pod="openshift-monitoring/metrics-server-59f88c66c8-z4c2f" Mar 18 08:53:01.955587 master-0 kubenswrapper[7620]: I0318 08:53:01.955510 7620 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/metrics-server-59f88c66c8-z4c2f" Mar 18 08:53:02.417108 master-0 kubenswrapper[7620]: I0318 08:53:02.416994 7620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/metrics-server-59f88c66c8-z4c2f"] Mar 18 08:53:02.423401 master-0 kubenswrapper[7620]: W0318 08:53:02.423332 7620 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5320a1da_262a_4b1b_93b4_1df9d4c26eec.slice/crio-08c69ca72893cd876b16b5740d0ac91db39852d0fe47a473761270d55d7436d0 WatchSource:0}: Error finding container 08c69ca72893cd876b16b5740d0ac91db39852d0fe47a473761270d55d7436d0: Status 404 returned error can't find the container with id 08c69ca72893cd876b16b5740d0ac91db39852d0fe47a473761270d55d7436d0 Mar 18 08:53:02.481744 master-0 kubenswrapper[7620]: I0318 08:53:02.481686 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:53:02.481744 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld Mar 18 08:53:02.481744 master-0 kubenswrapper[7620]: [+]process-running ok Mar 18 08:53:02.481744 master-0 kubenswrapper[7620]: healthz check failed Mar 18 08:53:02.482023 master-0 kubenswrapper[7620]: I0318 08:53:02.481757 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:53:02.833461 master-0 kubenswrapper[7620]: I0318 08:53:02.833358 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-59f88c66c8-z4c2f" 
event={"ID":"5320a1da-262a-4b1b-93b4-1df9d4c26eec","Type":"ContainerStarted","Data":"08c69ca72893cd876b16b5740d0ac91db39852d0fe47a473761270d55d7436d0"} Mar 18 08:53:03.481056 master-0 kubenswrapper[7620]: I0318 08:53:03.480990 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:53:03.481056 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld Mar 18 08:53:03.481056 master-0 kubenswrapper[7620]: [+]process-running ok Mar 18 08:53:03.481056 master-0 kubenswrapper[7620]: healthz check failed Mar 18 08:53:03.481454 master-0 kubenswrapper[7620]: I0318 08:53:03.481115 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:53:04.481270 master-0 kubenswrapper[7620]: I0318 08:53:04.481206 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:53:04.481270 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld Mar 18 08:53:04.481270 master-0 kubenswrapper[7620]: [+]process-running ok Mar 18 08:53:04.481270 master-0 kubenswrapper[7620]: healthz check failed Mar 18 08:53:04.481828 master-0 kubenswrapper[7620]: I0318 08:53:04.481285 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:53:04.849695 master-0 kubenswrapper[7620]: I0318 
08:53:04.849601 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-59f88c66c8-z4c2f" event={"ID":"5320a1da-262a-4b1b-93b4-1df9d4c26eec","Type":"ContainerStarted","Data":"8da1b208d66e950e641af5f888552a342bf881708d91891a3c2cad7c27648319"} Mar 18 08:53:04.885426 master-0 kubenswrapper[7620]: I0318 08:53:04.884710 7620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/metrics-server-59f88c66c8-z4c2f" podStartSLOduration=1.756128567 podStartE2EDuration="3.884679802s" podCreationTimestamp="2026-03-18 08:53:01 +0000 UTC" firstStartedPulling="2026-03-18 08:53:02.426223363 +0000 UTC m=+246.421005115" lastFinishedPulling="2026-03-18 08:53:04.554774598 +0000 UTC m=+248.549556350" observedRunningTime="2026-03-18 08:53:04.875830515 +0000 UTC m=+248.870612347" watchObservedRunningTime="2026-03-18 08:53:04.884679802 +0000 UTC m=+248.879461594" Mar 18 08:53:05.482704 master-0 kubenswrapper[7620]: I0318 08:53:05.482601 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:53:05.482704 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld Mar 18 08:53:05.482704 master-0 kubenswrapper[7620]: [+]process-running ok Mar 18 08:53:05.482704 master-0 kubenswrapper[7620]: healthz check failed Mar 18 08:53:05.483804 master-0 kubenswrapper[7620]: I0318 08:53:05.482741 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:53:06.482893 master-0 kubenswrapper[7620]: I0318 08:53:06.481798 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router 
namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:53:06.482893 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld Mar 18 08:53:06.482893 master-0 kubenswrapper[7620]: [+]process-running ok Mar 18 08:53:06.482893 master-0 kubenswrapper[7620]: healthz check failed Mar 18 08:53:06.482893 master-0 kubenswrapper[7620]: I0318 08:53:06.481961 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:53:07.482186 master-0 kubenswrapper[7620]: I0318 08:53:07.482083 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:53:07.482186 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld Mar 18 08:53:07.482186 master-0 kubenswrapper[7620]: [+]process-running ok Mar 18 08:53:07.482186 master-0 kubenswrapper[7620]: healthz check failed Mar 18 08:53:07.482522 master-0 kubenswrapper[7620]: I0318 08:53:07.482197 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:53:08.480809 master-0 kubenswrapper[7620]: I0318 08:53:08.480761 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:53:08.480809 master-0 kubenswrapper[7620]: 
[-]has-synced failed: reason withheld
Mar 18 08:53:08.480809 master-0 kubenswrapper[7620]: [+]process-running ok
Mar 18 08:53:08.480809 master-0 kubenswrapper[7620]: healthz check failed
Mar 18 08:53:08.481404 master-0 kubenswrapper[7620]: I0318 08:53:08.480839 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 08:53:09.482322 master-0 kubenswrapper[7620]: I0318 08:53:09.482161 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 08:53:09.482322 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld
Mar 18 08:53:09.482322 master-0 kubenswrapper[7620]: [+]process-running ok
Mar 18 08:53:09.482322 master-0 kubenswrapper[7620]: healthz check failed
Mar 18 08:53:09.482322 master-0 kubenswrapper[7620]: I0318 08:53:09.482249 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 08:53:10.483022 master-0 kubenswrapper[7620]: I0318 08:53:10.482898 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 08:53:10.483022 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld
Mar 18 08:53:10.483022 master-0 kubenswrapper[7620]: [+]process-running ok
Mar 18 08:53:10.483022 master-0 kubenswrapper[7620]: healthz check failed
Mar 18 08:53:10.483022 master-0 kubenswrapper[7620]: I0318 08:53:10.482998 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 08:53:11.481064 master-0 kubenswrapper[7620]: I0318 08:53:11.481008 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 08:53:11.481064 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld
Mar 18 08:53:11.481064 master-0 kubenswrapper[7620]: [+]process-running ok
Mar 18 08:53:11.481064 master-0 kubenswrapper[7620]: healthz check failed
Mar 18 08:53:11.481374 master-0 kubenswrapper[7620]: I0318 08:53:11.481086 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 08:53:12.482939 master-0 kubenswrapper[7620]: I0318 08:53:12.482808 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 08:53:12.482939 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld
Mar 18 08:53:12.482939 master-0 kubenswrapper[7620]: [+]process-running ok
Mar 18 08:53:12.482939 master-0 kubenswrapper[7620]: healthz check failed
Mar 18 08:53:12.483563 master-0 kubenswrapper[7620]: I0318 08:53:12.482968 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 08:53:13.482477 master-0 kubenswrapper[7620]: I0318 08:53:13.482418 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 08:53:13.482477 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld
Mar 18 08:53:13.482477 master-0 kubenswrapper[7620]: [+]process-running ok
Mar 18 08:53:13.482477 master-0 kubenswrapper[7620]: healthz check failed
Mar 18 08:53:13.482477 master-0 kubenswrapper[7620]: I0318 08:53:13.482475 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 08:53:14.484137 master-0 kubenswrapper[7620]: I0318 08:53:14.484021 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 08:53:14.484137 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld
Mar 18 08:53:14.484137 master-0 kubenswrapper[7620]: [+]process-running ok
Mar 18 08:53:14.484137 master-0 kubenswrapper[7620]: healthz check failed
Mar 18 08:53:14.485123 master-0 kubenswrapper[7620]: I0318 08:53:14.484168 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 08:53:15.481882 master-0 kubenswrapper[7620]: I0318 08:53:15.481802 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 08:53:15.481882 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld
Mar 18 08:53:15.481882 master-0 kubenswrapper[7620]: [+]process-running ok
Mar 18 08:53:15.481882 master-0 kubenswrapper[7620]: healthz check failed
Mar 18 08:53:15.482263 master-0 kubenswrapper[7620]: I0318 08:53:15.481952 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 08:53:16.482185 master-0 kubenswrapper[7620]: I0318 08:53:16.482100 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 08:53:16.482185 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld
Mar 18 08:53:16.482185 master-0 kubenswrapper[7620]: [+]process-running ok
Mar 18 08:53:16.482185 master-0 kubenswrapper[7620]: healthz check failed
Mar 18 08:53:16.482185 master-0 kubenswrapper[7620]: I0318 08:53:16.482179 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 08:53:17.482283 master-0 kubenswrapper[7620]: I0318 08:53:17.482172 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 08:53:17.482283 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld
Mar 18 08:53:17.482283 master-0 kubenswrapper[7620]: [+]process-running ok
Mar 18 08:53:17.482283 master-0 kubenswrapper[7620]: healthz check failed
Mar 18 08:53:17.483488 master-0 kubenswrapper[7620]: I0318 08:53:17.482295 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 08:53:18.482092 master-0 kubenswrapper[7620]: I0318 08:53:18.482024 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 08:53:18.482092 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld
Mar 18 08:53:18.482092 master-0 kubenswrapper[7620]: [+]process-running ok
Mar 18 08:53:18.482092 master-0 kubenswrapper[7620]: healthz check failed
Mar 18 08:53:18.482754 master-0 kubenswrapper[7620]: I0318 08:53:18.482134 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 08:53:19.482383 master-0 kubenswrapper[7620]: I0318 08:53:19.482300 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 08:53:19.482383 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld
Mar 18 08:53:19.482383 master-0 kubenswrapper[7620]: [+]process-running ok
Mar 18 08:53:19.482383 master-0 kubenswrapper[7620]: healthz check failed
Mar 18 08:53:19.483416 master-0 kubenswrapper[7620]: I0318 08:53:19.482414 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 08:53:20.481556 master-0 kubenswrapper[7620]: I0318 08:53:20.481477 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 08:53:20.481556 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld
Mar 18 08:53:20.481556 master-0 kubenswrapper[7620]: [+]process-running ok
Mar 18 08:53:20.481556 master-0 kubenswrapper[7620]: healthz check failed
Mar 18 08:53:20.482071 master-0 kubenswrapper[7620]: I0318 08:53:20.481584 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 08:53:21.481147 master-0 kubenswrapper[7620]: I0318 08:53:21.481074 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 08:53:21.481147 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld
Mar 18 08:53:21.481147 master-0 kubenswrapper[7620]: [+]process-running ok
Mar 18 08:53:21.481147 master-0 kubenswrapper[7620]: healthz check failed
Mar 18 08:53:21.481878 master-0 kubenswrapper[7620]: I0318 08:53:21.481151 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 08:53:21.955726 master-0 kubenswrapper[7620]: I0318 08:53:21.955646 7620 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-monitoring/metrics-server-59f88c66c8-z4c2f"
Mar 18 08:53:21.956049 master-0 kubenswrapper[7620]: I0318 08:53:21.955750 7620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/metrics-server-59f88c66c8-z4c2f"
Mar 18 08:53:22.482030 master-0 kubenswrapper[7620]: I0318 08:53:22.481913 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 08:53:22.482030 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld
Mar 18 08:53:22.482030 master-0 kubenswrapper[7620]: [+]process-running ok
Mar 18 08:53:22.482030 master-0 kubenswrapper[7620]: healthz check failed
Mar 18 08:53:22.482030 master-0 kubenswrapper[7620]: I0318 08:53:22.481993 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 08:53:23.481513 master-0 kubenswrapper[7620]: I0318 08:53:23.481396 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 08:53:23.481513 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld
Mar 18 08:53:23.481513 master-0 kubenswrapper[7620]: [+]process-running ok
Mar 18 08:53:23.481513 master-0 kubenswrapper[7620]: healthz check failed
Mar 18 08:53:23.481513 master-0 kubenswrapper[7620]: I0318 08:53:23.481505 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 08:53:24.483324 master-0 kubenswrapper[7620]: I0318 08:53:24.483185 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 08:53:24.483324 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld
Mar 18 08:53:24.483324 master-0 kubenswrapper[7620]: [+]process-running ok
Mar 18 08:53:24.483324 master-0 kubenswrapper[7620]: healthz check failed
Mar 18 08:53:24.484435 master-0 kubenswrapper[7620]: I0318 08:53:24.483325 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 08:53:25.482115 master-0 kubenswrapper[7620]: I0318 08:53:25.481949 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 08:53:25.482115 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld
Mar 18 08:53:25.482115 master-0 kubenswrapper[7620]: [+]process-running ok
Mar 18 08:53:25.482115 master-0 kubenswrapper[7620]: healthz check failed
Mar 18 08:53:25.482115 master-0 kubenswrapper[7620]: I0318 08:53:25.482105 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 08:53:26.481700 master-0 kubenswrapper[7620]: I0318 08:53:26.481577 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 08:53:26.481700 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld
Mar 18 08:53:26.481700 master-0 kubenswrapper[7620]: [+]process-running ok
Mar 18 08:53:26.481700 master-0 kubenswrapper[7620]: healthz check failed
Mar 18 08:53:26.481700 master-0 kubenswrapper[7620]: I0318 08:53:26.481688 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 08:53:27.483158 master-0 kubenswrapper[7620]: I0318 08:53:27.483045 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 08:53:27.483158 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld
Mar 18 08:53:27.483158 master-0 kubenswrapper[7620]: [+]process-running ok
Mar 18 08:53:27.483158 master-0 kubenswrapper[7620]: healthz check failed
Mar 18 08:53:27.484185 master-0 kubenswrapper[7620]: I0318 08:53:27.483180 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 08:53:28.482580 master-0 kubenswrapper[7620]: I0318 08:53:28.482509 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 08:53:28.482580 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld
Mar 18 08:53:28.482580 master-0 kubenswrapper[7620]: [+]process-running ok
Mar 18 08:53:28.482580 master-0 kubenswrapper[7620]: healthz check failed
Mar 18 08:53:28.483969 master-0 kubenswrapper[7620]: I0318 08:53:28.482595 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 08:53:29.483541 master-0 kubenswrapper[7620]: I0318 08:53:29.483398 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 08:53:29.483541 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld
Mar 18 08:53:29.483541 master-0 kubenswrapper[7620]: [+]process-running ok
Mar 18 08:53:29.483541 master-0 kubenswrapper[7620]: healthz check failed
Mar 18 08:53:29.484593 master-0 kubenswrapper[7620]: I0318 08:53:29.483590 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 08:53:30.483064 master-0 kubenswrapper[7620]: I0318 08:53:30.482935 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 08:53:30.483064 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld
Mar 18 08:53:30.483064 master-0 kubenswrapper[7620]: [+]process-running ok
Mar 18 08:53:30.483064 master-0 kubenswrapper[7620]: healthz check failed
Mar 18 08:53:30.483636 master-0 kubenswrapper[7620]: I0318 08:53:30.483061 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 08:53:31.482192 master-0 kubenswrapper[7620]: I0318 08:53:31.482082 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 08:53:31.482192 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld
Mar 18 08:53:31.482192 master-0 kubenswrapper[7620]: [+]process-running ok
Mar 18 08:53:31.482192 master-0 kubenswrapper[7620]: healthz check failed
Mar 18 08:53:31.482192 master-0 kubenswrapper[7620]: I0318 08:53:31.482187 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 08:53:32.482557 master-0 kubenswrapper[7620]: I0318 08:53:32.482485 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 08:53:32.482557 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld
Mar 18 08:53:32.482557 master-0 kubenswrapper[7620]: [+]process-running ok
Mar 18 08:53:32.482557 master-0 kubenswrapper[7620]: healthz check failed
Mar 18 08:53:32.483530 master-0 kubenswrapper[7620]: I0318 08:53:32.482584 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 08:53:33.481615 master-0 kubenswrapper[7620]: I0318 08:53:33.481523 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 08:53:33.481615 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld
Mar 18 08:53:33.481615 master-0 kubenswrapper[7620]: [+]process-running ok
Mar 18 08:53:33.481615 master-0 kubenswrapper[7620]: healthz check failed
Mar 18 08:53:33.481615 master-0 kubenswrapper[7620]: I0318 08:53:33.481591 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 08:53:34.482551 master-0 kubenswrapper[7620]: I0318 08:53:34.482464 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 08:53:34.482551 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld
Mar 18 08:53:34.482551 master-0 kubenswrapper[7620]: [+]process-running ok
Mar 18 08:53:34.482551 master-0 kubenswrapper[7620]: healthz check failed
Mar 18 08:53:34.483534 master-0 kubenswrapper[7620]: I0318 08:53:34.482614 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 08:53:35.482936 master-0 kubenswrapper[7620]: I0318 08:53:35.482874 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 08:53:35.482936 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld
Mar 18 08:53:35.482936 master-0 kubenswrapper[7620]: [+]process-running ok
Mar 18 08:53:35.482936 master-0 kubenswrapper[7620]: healthz check failed
Mar 18 08:53:35.484053 master-0 kubenswrapper[7620]: I0318 08:53:35.483994 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 08:53:36.483028 master-0 kubenswrapper[7620]: I0318 08:53:36.482910 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 08:53:36.483028 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld
Mar 18 08:53:36.483028 master-0 kubenswrapper[7620]: [+]process-running ok
Mar 18 08:53:36.483028 master-0 kubenswrapper[7620]: healthz check failed
Mar 18 08:53:36.484340 master-0 kubenswrapper[7620]: I0318 08:53:36.483047 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 08:53:37.483184 master-0 kubenswrapper[7620]: I0318 08:53:37.483062 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 08:53:37.483184 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld
Mar 18 08:53:37.483184 master-0 kubenswrapper[7620]: [+]process-running ok
Mar 18 08:53:37.483184 master-0 kubenswrapper[7620]: healthz check failed
Mar 18 08:53:37.484269 master-0 kubenswrapper[7620]: I0318 08:53:37.483184 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 08:53:38.482097 master-0 kubenswrapper[7620]: I0318 08:53:38.481989 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 08:53:38.482097 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld
Mar 18 08:53:38.482097 master-0 kubenswrapper[7620]: [+]process-running ok
Mar 18 08:53:38.482097 master-0 kubenswrapper[7620]: healthz check failed
Mar 18 08:53:38.482097 master-0 kubenswrapper[7620]: I0318 08:53:38.482077 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 08:53:39.482506 master-0 kubenswrapper[7620]: I0318 08:53:39.482388 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 08:53:39.482506 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld
Mar 18 08:53:39.482506 master-0 kubenswrapper[7620]: [+]process-running ok
Mar 18 08:53:39.482506 master-0 kubenswrapper[7620]: healthz check failed
Mar 18 08:53:39.484252 master-0 kubenswrapper[7620]: I0318 08:53:39.482499 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 08:53:40.482643 master-0 kubenswrapper[7620]: I0318 08:53:40.482530 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 08:53:40.482643 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld
Mar 18 08:53:40.482643 master-0 kubenswrapper[7620]: [+]process-running ok
Mar 18 08:53:40.482643 master-0 kubenswrapper[7620]: healthz check failed
Mar 18 08:53:40.482643 master-0 kubenswrapper[7620]: I0318 08:53:40.482637 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 08:53:41.482092 master-0 kubenswrapper[7620]: I0318 08:53:41.481957 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 08:53:41.482092 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld
Mar 18 08:53:41.482092 master-0 kubenswrapper[7620]: [+]process-running ok
Mar 18 08:53:41.482092 master-0 kubenswrapper[7620]: healthz check failed
Mar 18 08:53:41.482092 master-0 kubenswrapper[7620]: I0318 08:53:41.482084 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 08:53:41.966872 master-0 kubenswrapper[7620]: I0318 08:53:41.966767 7620 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-monitoring/metrics-server-59f88c66c8-z4c2f"
Mar 18 08:53:41.974624 master-0 kubenswrapper[7620]: I0318 08:53:41.974563 7620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/metrics-server-59f88c66c8-z4c2f"
Mar 18 08:53:42.482539 master-0 kubenswrapper[7620]: I0318 08:53:42.482472 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 08:53:42.482539 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld
Mar 18 08:53:42.482539 master-0 kubenswrapper[7620]: [+]process-running ok
Mar 18 08:53:42.482539 master-0 kubenswrapper[7620]: healthz check failed
Mar 18 08:53:42.482924 master-0 kubenswrapper[7620]: I0318 08:53:42.482559 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 08:53:43.482902 master-0 kubenswrapper[7620]: I0318 08:53:43.482787 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 08:53:43.482902 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld
Mar 18 08:53:43.482902 master-0 kubenswrapper[7620]: [+]process-running ok
Mar 18 08:53:43.482902 master-0 kubenswrapper[7620]: healthz check failed
Mar 18 08:53:43.483967 master-0 kubenswrapper[7620]: I0318 08:53:43.482943 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 08:53:44.481075 master-0 kubenswrapper[7620]: I0318 08:53:44.481016 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 08:53:44.481075 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld
Mar 18 08:53:44.481075 master-0 kubenswrapper[7620]: [+]process-running ok
Mar 18 08:53:44.481075 master-0 kubenswrapper[7620]: healthz check failed
Mar 18 08:53:44.481351 master-0 kubenswrapper[7620]: I0318 08:53:44.481099 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 08:53:45.482526 master-0 kubenswrapper[7620]: I0318 08:53:45.482435 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 08:53:45.482526 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld
Mar 18 08:53:45.482526 master-0 kubenswrapper[7620]: [+]process-running ok
Mar 18 08:53:45.482526 master-0 kubenswrapper[7620]: healthz check failed
Mar 18 08:53:45.483413 master-0 kubenswrapper[7620]: I0318 08:53:45.482548 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 08:53:46.481822 master-0 kubenswrapper[7620]: I0318 08:53:46.481567 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 08:53:46.481822 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld
Mar 18 08:53:46.481822 master-0 kubenswrapper[7620]: [+]process-running ok
Mar 18 08:53:46.481822 master-0 kubenswrapper[7620]: healthz check failed
Mar 18 08:53:46.481822 master-0 kubenswrapper[7620]: I0318 08:53:46.481637 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 08:53:47.481178 master-0 kubenswrapper[7620]: I0318 08:53:47.481124 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 08:53:47.481178 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld
Mar 18 08:53:47.481178 master-0 kubenswrapper[7620]: [+]process-running ok
Mar 18 08:53:47.481178 master-0 kubenswrapper[7620]: healthz check failed
Mar 18 08:53:47.481724 master-0 kubenswrapper[7620]: I0318 08:53:47.481204 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 08:53:48.483541 master-0 kubenswrapper[7620]: I0318 08:53:48.483422 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 08:53:48.483541 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld
Mar 18 08:53:48.483541 master-0 kubenswrapper[7620]: [+]process-running ok
Mar 18 08:53:48.483541 master-0 kubenswrapper[7620]: healthz check failed
Mar 18 08:53:48.484470 master-0 kubenswrapper[7620]: I0318 08:53:48.483702 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 08:53:49.481755 master-0 kubenswrapper[7620]: I0318 08:53:49.481660 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 08:53:49.481755 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld
Mar 18 08:53:49.481755 master-0 kubenswrapper[7620]: [+]process-running ok
Mar 18 08:53:49.481755 master-0 kubenswrapper[7620]: healthz check failed
Mar 18 08:53:49.482531 master-0 kubenswrapper[7620]: I0318 08:53:49.481796 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar
18 08:53:50.480589 master-0 kubenswrapper[7620]: I0318 08:53:50.480531 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:53:50.480589 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld Mar 18 08:53:50.480589 master-0 kubenswrapper[7620]: [+]process-running ok Mar 18 08:53:50.480589 master-0 kubenswrapper[7620]: healthz check failed Mar 18 08:53:50.481208 master-0 kubenswrapper[7620]: I0318 08:53:50.480623 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:53:51.482983 master-0 kubenswrapper[7620]: I0318 08:53:51.482870 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:53:51.482983 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld Mar 18 08:53:51.482983 master-0 kubenswrapper[7620]: [+]process-running ok Mar 18 08:53:51.482983 master-0 kubenswrapper[7620]: healthz check failed Mar 18 08:53:51.484705 master-0 kubenswrapper[7620]: I0318 08:53:51.483012 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:53:52.481968 master-0 kubenswrapper[7620]: I0318 08:53:52.481895 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure 
output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:53:52.481968 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld Mar 18 08:53:52.481968 master-0 kubenswrapper[7620]: [+]process-running ok Mar 18 08:53:52.481968 master-0 kubenswrapper[7620]: healthz check failed Mar 18 08:53:52.482376 master-0 kubenswrapper[7620]: I0318 08:53:52.481990 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:53:53.253995 master-0 kubenswrapper[7620]: I0318 08:53:53.253920 7620 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-66b84d69b-7h94d_94e9e4fc-bfaa-4ba5-ad6d-fe76b91932e9/ingress-operator/1.log" Mar 18 08:53:53.255662 master-0 kubenswrapper[7620]: I0318 08:53:53.255600 7620 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-66b84d69b-7h94d_94e9e4fc-bfaa-4ba5-ad6d-fe76b91932e9/ingress-operator/0.log" Mar 18 08:53:53.255777 master-0 kubenswrapper[7620]: I0318 08:53:53.255688 7620 generic.go:334] "Generic (PLEG): container finished" podID="94e9e4fc-bfaa-4ba5-ad6d-fe76b91932e9" containerID="458f7a943f236b1eac07ca69624114d084866d6f79f7c12e67735ee4e517390d" exitCode=1 Mar 18 08:53:53.255777 master-0 kubenswrapper[7620]: I0318 08:53:53.255739 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-66b84d69b-7h94d" event={"ID":"94e9e4fc-bfaa-4ba5-ad6d-fe76b91932e9","Type":"ContainerDied","Data":"458f7a943f236b1eac07ca69624114d084866d6f79f7c12e67735ee4e517390d"} Mar 18 08:53:53.255962 master-0 kubenswrapper[7620]: I0318 08:53:53.255793 7620 scope.go:117] "RemoveContainer" containerID="e63c5c1d709e6609cc982cf30b568c18af00671995969feb6d602b6e7ea5ee6b" Mar 18 
08:53:53.256843 master-0 kubenswrapper[7620]: I0318 08:53:53.256772 7620 scope.go:117] "RemoveContainer" containerID="458f7a943f236b1eac07ca69624114d084866d6f79f7c12e67735ee4e517390d" Mar 18 08:53:53.257838 master-0 kubenswrapper[7620]: E0318 08:53:53.257286 7620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ingress-operator pod=ingress-operator-66b84d69b-7h94d_openshift-ingress-operator(94e9e4fc-bfaa-4ba5-ad6d-fe76b91932e9)\"" pod="openshift-ingress-operator/ingress-operator-66b84d69b-7h94d" podUID="94e9e4fc-bfaa-4ba5-ad6d-fe76b91932e9" Mar 18 08:53:53.484042 master-0 kubenswrapper[7620]: I0318 08:53:53.483067 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:53:53.484042 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld Mar 18 08:53:53.484042 master-0 kubenswrapper[7620]: [+]process-running ok Mar 18 08:53:53.484042 master-0 kubenswrapper[7620]: healthz check failed Mar 18 08:53:53.484042 master-0 kubenswrapper[7620]: I0318 08:53:53.483164 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:53:54.267195 master-0 kubenswrapper[7620]: I0318 08:53:54.267095 7620 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-66b84d69b-7h94d_94e9e4fc-bfaa-4ba5-ad6d-fe76b91932e9/ingress-operator/1.log" Mar 18 08:53:54.482348 master-0 kubenswrapper[7620]: I0318 08:53:54.482264 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router 
namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:53:54.482348 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld Mar 18 08:53:54.482348 master-0 kubenswrapper[7620]: [+]process-running ok Mar 18 08:53:54.482348 master-0 kubenswrapper[7620]: healthz check failed Mar 18 08:53:54.482825 master-0 kubenswrapper[7620]: I0318 08:53:54.482365 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:53:55.484467 master-0 kubenswrapper[7620]: I0318 08:53:55.484352 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:53:55.484467 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld Mar 18 08:53:55.484467 master-0 kubenswrapper[7620]: [+]process-running ok Mar 18 08:53:55.484467 master-0 kubenswrapper[7620]: healthz check failed Mar 18 08:53:55.485947 master-0 kubenswrapper[7620]: I0318 08:53:55.484480 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:53:56.481794 master-0 kubenswrapper[7620]: I0318 08:53:56.481708 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:53:56.481794 master-0 kubenswrapper[7620]: 
[-]has-synced failed: reason withheld Mar 18 08:53:56.481794 master-0 kubenswrapper[7620]: [+]process-running ok Mar 18 08:53:56.481794 master-0 kubenswrapper[7620]: healthz check failed Mar 18 08:53:56.482446 master-0 kubenswrapper[7620]: I0318 08:53:56.481804 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:53:57.094034 master-0 kubenswrapper[7620]: I0318 08:53:57.093922 7620 scope.go:117] "RemoveContainer" containerID="51dc55afbcfce4c386c5bd0bc1deafcfc0ec711be4ef96fdaaef56b5f72c67a2" Mar 18 08:53:57.482502 master-0 kubenswrapper[7620]: I0318 08:53:57.482346 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:53:57.482502 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld Mar 18 08:53:57.482502 master-0 kubenswrapper[7620]: [+]process-running ok Mar 18 08:53:57.482502 master-0 kubenswrapper[7620]: healthz check failed Mar 18 08:53:57.482502 master-0 kubenswrapper[7620]: I0318 08:53:57.482462 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:53:58.481484 master-0 kubenswrapper[7620]: I0318 08:53:58.481344 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:53:58.481484 master-0 kubenswrapper[7620]: [-]has-synced failed: reason 
withheld Mar 18 08:53:58.481484 master-0 kubenswrapper[7620]: [+]process-running ok Mar 18 08:53:58.481484 master-0 kubenswrapper[7620]: healthz check failed Mar 18 08:53:58.481484 master-0 kubenswrapper[7620]: I0318 08:53:58.481461 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:53:59.481586 master-0 kubenswrapper[7620]: I0318 08:53:59.481494 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:53:59.481586 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld Mar 18 08:53:59.481586 master-0 kubenswrapper[7620]: [+]process-running ok Mar 18 08:53:59.481586 master-0 kubenswrapper[7620]: healthz check failed Mar 18 08:53:59.482204 master-0 kubenswrapper[7620]: I0318 08:53:59.481656 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:54:00.482210 master-0 kubenswrapper[7620]: I0318 08:54:00.482110 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:54:00.482210 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld Mar 18 08:54:00.482210 master-0 kubenswrapper[7620]: [+]process-running ok Mar 18 08:54:00.482210 master-0 kubenswrapper[7620]: healthz check failed Mar 18 08:54:00.483352 master-0 kubenswrapper[7620]: I0318 
08:54:00.482222 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:54:01.482390 master-0 kubenswrapper[7620]: I0318 08:54:01.482338 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:54:01.482390 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld Mar 18 08:54:01.482390 master-0 kubenswrapper[7620]: [+]process-running ok Mar 18 08:54:01.482390 master-0 kubenswrapper[7620]: healthz check failed Mar 18 08:54:01.483071 master-0 kubenswrapper[7620]: I0318 08:54:01.483004 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:54:02.481968 master-0 kubenswrapper[7620]: I0318 08:54:02.481838 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:54:02.481968 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld Mar 18 08:54:02.481968 master-0 kubenswrapper[7620]: [+]process-running ok Mar 18 08:54:02.481968 master-0 kubenswrapper[7620]: healthz check failed Mar 18 08:54:02.483186 master-0 kubenswrapper[7620]: I0318 08:54:02.481992 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" 
output="HTTP probe failed with statuscode: 500" Mar 18 08:54:03.483078 master-0 kubenswrapper[7620]: I0318 08:54:03.483026 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:54:03.483078 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld Mar 18 08:54:03.483078 master-0 kubenswrapper[7620]: [+]process-running ok Mar 18 08:54:03.483078 master-0 kubenswrapper[7620]: healthz check failed Mar 18 08:54:03.483745 master-0 kubenswrapper[7620]: I0318 08:54:03.483717 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:54:04.224740 master-0 kubenswrapper[7620]: I0318 08:54:04.224692 7620 scope.go:117] "RemoveContainer" containerID="458f7a943f236b1eac07ca69624114d084866d6f79f7c12e67735ee4e517390d" Mar 18 08:54:04.482545 master-0 kubenswrapper[7620]: I0318 08:54:04.482383 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:54:04.482545 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld Mar 18 08:54:04.482545 master-0 kubenswrapper[7620]: [+]process-running ok Mar 18 08:54:04.482545 master-0 kubenswrapper[7620]: healthz check failed Mar 18 08:54:04.482545 master-0 kubenswrapper[7620]: I0318 08:54:04.482515 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed 
with statuscode: 500" Mar 18 08:54:05.365477 master-0 kubenswrapper[7620]: I0318 08:54:05.365402 7620 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-66b84d69b-7h94d_94e9e4fc-bfaa-4ba5-ad6d-fe76b91932e9/ingress-operator/1.log" Mar 18 08:54:05.366283 master-0 kubenswrapper[7620]: I0318 08:54:05.365940 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-66b84d69b-7h94d" event={"ID":"94e9e4fc-bfaa-4ba5-ad6d-fe76b91932e9","Type":"ContainerStarted","Data":"fad64d39172d17151c921b86e24888209413b262345fa2cee0651c733f8df0a1"} Mar 18 08:54:05.481805 master-0 kubenswrapper[7620]: I0318 08:54:05.481716 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:54:05.481805 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld Mar 18 08:54:05.481805 master-0 kubenswrapper[7620]: [+]process-running ok Mar 18 08:54:05.481805 master-0 kubenswrapper[7620]: healthz check failed Mar 18 08:54:05.481805 master-0 kubenswrapper[7620]: I0318 08:54:05.481802 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:54:06.481600 master-0 kubenswrapper[7620]: I0318 08:54:06.481479 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:54:06.481600 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld Mar 18 08:54:06.481600 master-0 kubenswrapper[7620]: 
[+]process-running ok Mar 18 08:54:06.481600 master-0 kubenswrapper[7620]: healthz check failed Mar 18 08:54:06.482746 master-0 kubenswrapper[7620]: I0318 08:54:06.481589 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:54:07.481607 master-0 kubenswrapper[7620]: I0318 08:54:07.481522 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:54:07.481607 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld Mar 18 08:54:07.481607 master-0 kubenswrapper[7620]: [+]process-running ok Mar 18 08:54:07.481607 master-0 kubenswrapper[7620]: healthz check failed Mar 18 08:54:07.481607 master-0 kubenswrapper[7620]: I0318 08:54:07.481600 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:54:08.481427 master-0 kubenswrapper[7620]: I0318 08:54:08.481317 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:54:08.481427 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld Mar 18 08:54:08.481427 master-0 kubenswrapper[7620]: [+]process-running ok Mar 18 08:54:08.481427 master-0 kubenswrapper[7620]: healthz check failed Mar 18 08:54:08.481427 master-0 kubenswrapper[7620]: I0318 08:54:08.481424 7620 prober.go:107] "Probe failed" 
probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:54:09.482539 master-0 kubenswrapper[7620]: I0318 08:54:09.482432 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:54:09.482539 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld Mar 18 08:54:09.482539 master-0 kubenswrapper[7620]: [+]process-running ok Mar 18 08:54:09.482539 master-0 kubenswrapper[7620]: healthz check failed Mar 18 08:54:09.483634 master-0 kubenswrapper[7620]: I0318 08:54:09.482544 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:54:10.483166 master-0 kubenswrapper[7620]: I0318 08:54:10.483052 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:54:10.483166 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld Mar 18 08:54:10.483166 master-0 kubenswrapper[7620]: [+]process-running ok Mar 18 08:54:10.483166 master-0 kubenswrapper[7620]: healthz check failed Mar 18 08:54:10.484470 master-0 kubenswrapper[7620]: I0318 08:54:10.483174 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 
18 08:54:11.482032 master-0 kubenswrapper[7620]: I0318 08:54:11.481918 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:54:11.482032 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld Mar 18 08:54:11.482032 master-0 kubenswrapper[7620]: [+]process-running ok Mar 18 08:54:11.482032 master-0 kubenswrapper[7620]: healthz check failed Mar 18 08:54:11.482568 master-0 kubenswrapper[7620]: I0318 08:54:11.482261 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:54:12.482688 master-0 kubenswrapper[7620]: I0318 08:54:12.482582 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:54:12.482688 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld Mar 18 08:54:12.482688 master-0 kubenswrapper[7620]: [+]process-running ok Mar 18 08:54:12.482688 master-0 kubenswrapper[7620]: healthz check failed Mar 18 08:54:12.483725 master-0 kubenswrapper[7620]: I0318 08:54:12.482706 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:54:13.482724 master-0 kubenswrapper[7620]: I0318 08:54:13.482610 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure 
output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:54:13.482724 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld Mar 18 08:54:13.482724 master-0 kubenswrapper[7620]: [+]process-running ok Mar 18 08:54:13.482724 master-0 kubenswrapper[7620]: healthz check failed Mar 18 08:54:13.483726 master-0 kubenswrapper[7620]: I0318 08:54:13.482725 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:54:14.481731 master-0 kubenswrapper[7620]: I0318 08:54:14.481654 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:54:14.481731 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld Mar 18 08:54:14.481731 master-0 kubenswrapper[7620]: [+]process-running ok Mar 18 08:54:14.481731 master-0 kubenswrapper[7620]: healthz check failed Mar 18 08:54:14.482229 master-0 kubenswrapper[7620]: I0318 08:54:14.481736 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:54:15.482691 master-0 kubenswrapper[7620]: I0318 08:54:15.482601 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:54:15.482691 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld Mar 18 08:54:15.482691 
master-0 kubenswrapper[7620]: [+]process-running ok Mar 18 08:54:15.482691 master-0 kubenswrapper[7620]: healthz check failed Mar 18 08:54:15.483791 master-0 kubenswrapper[7620]: I0318 08:54:15.482702 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:54:16.482458 master-0 kubenswrapper[7620]: I0318 08:54:16.482132 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:54:16.482458 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld Mar 18 08:54:16.482458 master-0 kubenswrapper[7620]: [+]process-running ok Mar 18 08:54:16.482458 master-0 kubenswrapper[7620]: healthz check failed Mar 18 08:54:16.483173 master-0 kubenswrapper[7620]: I0318 08:54:16.482504 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:54:17.482809 master-0 kubenswrapper[7620]: I0318 08:54:17.482703 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:54:17.482809 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld Mar 18 08:54:17.482809 master-0 kubenswrapper[7620]: [+]process-running ok Mar 18 08:54:17.482809 master-0 kubenswrapper[7620]: healthz check failed Mar 18 08:54:17.483773 master-0 kubenswrapper[7620]: I0318 08:54:17.482842 7620 
prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 08:54:18.481253 master-0 kubenswrapper[7620]: I0318 08:54:18.481155 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 08:54:18.481253 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld
Mar 18 08:54:18.481253 master-0 kubenswrapper[7620]: [+]process-running ok
Mar 18 08:54:18.481253 master-0 kubenswrapper[7620]: healthz check failed
Mar 18 08:54:18.481609 master-0 kubenswrapper[7620]: I0318 08:54:18.481323 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 08:54:19.480822 master-0 kubenswrapper[7620]: I0318 08:54:19.480757 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 08:54:19.480822 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld
Mar 18 08:54:19.480822 master-0 kubenswrapper[7620]: [+]process-running ok
Mar 18 08:54:19.480822 master-0 kubenswrapper[7620]: healthz check failed
Mar 18 08:54:19.481457 master-0 kubenswrapper[7620]: I0318 08:54:19.480825 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 08:54:20.481437 master-0 kubenswrapper[7620]: I0318 08:54:20.481342 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 08:54:20.481437 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld
Mar 18 08:54:20.481437 master-0 kubenswrapper[7620]: [+]process-running ok
Mar 18 08:54:20.481437 master-0 kubenswrapper[7620]: healthz check failed
Mar 18 08:54:20.481437 master-0 kubenswrapper[7620]: I0318 08:54:20.481406 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 08:54:21.486715 master-0 kubenswrapper[7620]: I0318 08:54:21.486637 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 08:54:21.486715 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld
Mar 18 08:54:21.486715 master-0 kubenswrapper[7620]: [+]process-running ok
Mar 18 08:54:21.486715 master-0 kubenswrapper[7620]: healthz check failed
Mar 18 08:54:21.487579 master-0 kubenswrapper[7620]: I0318 08:54:21.486730 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 08:54:22.481690 master-0 kubenswrapper[7620]: I0318 08:54:22.481584 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 08:54:22.481690 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld
Mar 18 08:54:22.481690 master-0 kubenswrapper[7620]: [+]process-running ok
Mar 18 08:54:22.481690 master-0 kubenswrapper[7620]: healthz check failed
Mar 18 08:54:22.482304 master-0 kubenswrapper[7620]: I0318 08:54:22.481690 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 08:54:23.482921 master-0 kubenswrapper[7620]: I0318 08:54:23.482830 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 08:54:23.482921 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld
Mar 18 08:54:23.482921 master-0 kubenswrapper[7620]: [+]process-running ok
Mar 18 08:54:23.482921 master-0 kubenswrapper[7620]: healthz check failed
Mar 18 08:54:23.483909 master-0 kubenswrapper[7620]: I0318 08:54:23.482944 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 08:54:24.482062 master-0 kubenswrapper[7620]: I0318 08:54:24.481955 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 08:54:24.482062 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld
Mar 18 08:54:24.482062 master-0 kubenswrapper[7620]: [+]process-running ok
Mar 18 08:54:24.482062 master-0 kubenswrapper[7620]: healthz check failed
Mar 18 08:54:24.482549 master-0 kubenswrapper[7620]: I0318 08:54:24.482083 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 08:54:25.482649 master-0 kubenswrapper[7620]: I0318 08:54:25.482535 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 08:54:25.482649 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld
Mar 18 08:54:25.482649 master-0 kubenswrapper[7620]: [+]process-running ok
Mar 18 08:54:25.482649 master-0 kubenswrapper[7620]: healthz check failed
Mar 18 08:54:25.482649 master-0 kubenswrapper[7620]: I0318 08:54:25.482637 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 08:54:26.482077 master-0 kubenswrapper[7620]: I0318 08:54:26.481963 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 08:54:26.482077 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld
Mar 18 08:54:26.482077 master-0 kubenswrapper[7620]: [+]process-running ok
Mar 18 08:54:26.482077 master-0 kubenswrapper[7620]: healthz check failed
Mar 18 08:54:26.482572 master-0 kubenswrapper[7620]: I0318 08:54:26.482084 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 08:54:27.483235 master-0 kubenswrapper[7620]: I0318 08:54:27.483129 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 08:54:27.483235 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld
Mar 18 08:54:27.483235 master-0 kubenswrapper[7620]: [+]process-running ok
Mar 18 08:54:27.483235 master-0 kubenswrapper[7620]: healthz check failed
Mar 18 08:54:27.484208 master-0 kubenswrapper[7620]: I0318 08:54:27.483240 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 08:54:28.482496 master-0 kubenswrapper[7620]: I0318 08:54:28.482375 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 08:54:28.482496 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld
Mar 18 08:54:28.482496 master-0 kubenswrapper[7620]: [+]process-running ok
Mar 18 08:54:28.482496 master-0 kubenswrapper[7620]: healthz check failed
Mar 18 08:54:28.482496 master-0 kubenswrapper[7620]: I0318 08:54:28.482485 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 08:54:29.482767 master-0 kubenswrapper[7620]: I0318 08:54:29.482640 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 08:54:29.482767 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld
Mar 18 08:54:29.482767 master-0 kubenswrapper[7620]: [+]process-running ok
Mar 18 08:54:29.482767 master-0 kubenswrapper[7620]: healthz check failed
Mar 18 08:54:29.482767 master-0 kubenswrapper[7620]: I0318 08:54:29.482754 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 08:54:30.482478 master-0 kubenswrapper[7620]: I0318 08:54:30.482316 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 08:54:30.482478 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld
Mar 18 08:54:30.482478 master-0 kubenswrapper[7620]: [+]process-running ok
Mar 18 08:54:30.482478 master-0 kubenswrapper[7620]: healthz check failed
Mar 18 08:54:30.482478 master-0 kubenswrapper[7620]: I0318 08:54:30.482437 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 08:54:31.482592 master-0 kubenswrapper[7620]: I0318 08:54:31.482474 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 08:54:31.482592 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld
Mar 18 08:54:31.482592 master-0 kubenswrapper[7620]: [+]process-running ok
Mar 18 08:54:31.482592 master-0 kubenswrapper[7620]: healthz check failed
Mar 18 08:54:31.483690 master-0 kubenswrapper[7620]: I0318 08:54:31.482593 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 08:54:32.483022 master-0 kubenswrapper[7620]: I0318 08:54:32.482947 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 08:54:32.483022 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld
Mar 18 08:54:32.483022 master-0 kubenswrapper[7620]: [+]process-running ok
Mar 18 08:54:32.483022 master-0 kubenswrapper[7620]: healthz check failed
Mar 18 08:54:32.484236 master-0 kubenswrapper[7620]: I0318 08:54:32.483069 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 08:54:33.482611 master-0 kubenswrapper[7620]: I0318 08:54:33.482538 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 08:54:33.482611 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld
Mar 18 08:54:33.482611 master-0 kubenswrapper[7620]: [+]process-running ok
Mar 18 08:54:33.482611 master-0 kubenswrapper[7620]: healthz check failed
Mar 18 08:54:33.484279 master-0 kubenswrapper[7620]: I0318 08:54:33.484213 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 08:54:34.481597 master-0 kubenswrapper[7620]: I0318 08:54:34.481516 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 08:54:34.481597 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld
Mar 18 08:54:34.481597 master-0 kubenswrapper[7620]: [+]process-running ok
Mar 18 08:54:34.481597 master-0 kubenswrapper[7620]: healthz check failed
Mar 18 08:54:34.481931 master-0 kubenswrapper[7620]: I0318 08:54:34.481633 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 08:54:35.482246 master-0 kubenswrapper[7620]: I0318 08:54:35.482144 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 08:54:35.482246 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld
Mar 18 08:54:35.482246 master-0 kubenswrapper[7620]: [+]process-running ok
Mar 18 08:54:35.482246 master-0 kubenswrapper[7620]: healthz check failed
Mar 18 08:54:35.483443 master-0 kubenswrapper[7620]: I0318 08:54:35.482267 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 08:54:36.482300 master-0 kubenswrapper[7620]: I0318 08:54:36.482123 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 08:54:36.482300 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld
Mar 18 08:54:36.482300 master-0 kubenswrapper[7620]: [+]process-running ok
Mar 18 08:54:36.482300 master-0 kubenswrapper[7620]: healthz check failed
Mar 18 08:54:36.484220 master-0 kubenswrapper[7620]: I0318 08:54:36.482311 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 08:54:37.483509 master-0 kubenswrapper[7620]: I0318 08:54:37.483344 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 08:54:37.483509 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld
Mar 18 08:54:37.483509 master-0 kubenswrapper[7620]: [+]process-running ok
Mar 18 08:54:37.483509 master-0 kubenswrapper[7620]: healthz check failed
Mar 18 08:54:37.484539 master-0 kubenswrapper[7620]: I0318 08:54:37.483537 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 08:54:38.483291 master-0 kubenswrapper[7620]: I0318 08:54:38.483192 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 08:54:38.483291 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld
Mar 18 08:54:38.483291 master-0 kubenswrapper[7620]: [+]process-running ok
Mar 18 08:54:38.483291 master-0 kubenswrapper[7620]: healthz check failed
Mar 18 08:54:38.484453 master-0 kubenswrapper[7620]: I0318 08:54:38.483320 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 08:54:39.482987 master-0 kubenswrapper[7620]: I0318 08:54:39.482734 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 08:54:39.482987 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld
Mar 18 08:54:39.482987 master-0 kubenswrapper[7620]: [+]process-running ok
Mar 18 08:54:39.482987 master-0 kubenswrapper[7620]: healthz check failed
Mar 18 08:54:39.482987 master-0 kubenswrapper[7620]: I0318 08:54:39.482831 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 08:54:40.482216 master-0 kubenswrapper[7620]: I0318 08:54:40.482108 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 08:54:40.482216 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld
Mar 18 08:54:40.482216 master-0 kubenswrapper[7620]: [+]process-running ok
Mar 18 08:54:40.482216 master-0 kubenswrapper[7620]: healthz check failed
Mar 18 08:54:40.482216 master-0 kubenswrapper[7620]: I0318 08:54:40.482201 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 08:54:41.483088 master-0 kubenswrapper[7620]: I0318 08:54:41.482972 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 08:54:41.483088 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld
Mar 18 08:54:41.483088 master-0 kubenswrapper[7620]: [+]process-running ok
Mar 18 08:54:41.483088 master-0 kubenswrapper[7620]: healthz check failed
Mar 18 08:54:41.484111 master-0 kubenswrapper[7620]: I0318 08:54:41.483101 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 08:54:42.483235 master-0 kubenswrapper[7620]: I0318 08:54:42.483087 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 08:54:42.483235 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld
Mar 18 08:54:42.483235 master-0 kubenswrapper[7620]: [+]process-running ok
Mar 18 08:54:42.483235 master-0 kubenswrapper[7620]: healthz check failed
Mar 18 08:54:42.484138 master-0 kubenswrapper[7620]: I0318 08:54:42.483275 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 08:54:43.482082 master-0 kubenswrapper[7620]: I0318 08:54:43.482002 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 08:54:43.482082 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld
Mar 18 08:54:43.482082 master-0 kubenswrapper[7620]: [+]process-running ok
Mar 18 08:54:43.482082 master-0 kubenswrapper[7620]: healthz check failed
Mar 18 08:54:43.482562 master-0 kubenswrapper[7620]: I0318 08:54:43.482099 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 08:54:44.483164 master-0 kubenswrapper[7620]: I0318 08:54:44.482995 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 08:54:44.483164 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld
Mar 18 08:54:44.483164 master-0 kubenswrapper[7620]: [+]process-running ok
Mar 18 08:54:44.483164 master-0 kubenswrapper[7620]: healthz check failed
Mar 18 08:54:44.484517 master-0 kubenswrapper[7620]: I0318 08:54:44.483180 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 08:54:45.482392 master-0 kubenswrapper[7620]: I0318 08:54:45.482311 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 08:54:45.482392 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld
Mar 18 08:54:45.482392 master-0 kubenswrapper[7620]: [+]process-running ok
Mar 18 08:54:45.482392 master-0 kubenswrapper[7620]: healthz check failed
Mar 18 08:54:45.482727 master-0 kubenswrapper[7620]: I0318 08:54:45.482399 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 08:54:46.482950 master-0 kubenswrapper[7620]: I0318 08:54:46.482700 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 08:54:46.482950 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld
Mar 18 08:54:46.482950 master-0 kubenswrapper[7620]: [+]process-running ok
Mar 18 08:54:46.482950 master-0 kubenswrapper[7620]: healthz check failed
Mar 18 08:54:46.483763 master-0 kubenswrapper[7620]: I0318 08:54:46.482993 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 08:54:47.483487 master-0 kubenswrapper[7620]: I0318 08:54:47.483398 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 08:54:47.483487 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld
Mar 18 08:54:47.483487 master-0 kubenswrapper[7620]: [+]process-running ok
Mar 18 08:54:47.483487 master-0 kubenswrapper[7620]: healthz check failed
Mar 18 08:54:47.484691 master-0 kubenswrapper[7620]: I0318 08:54:47.483531 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 08:54:48.482768 master-0 kubenswrapper[7620]: I0318 08:54:48.482634 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 08:54:48.482768 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld
Mar 18 08:54:48.482768 master-0 kubenswrapper[7620]: [+]process-running ok
Mar 18 08:54:48.482768 master-0 kubenswrapper[7620]: healthz check failed
Mar 18 08:54:48.483504 master-0 kubenswrapper[7620]: I0318 08:54:48.482786 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 08:54:49.481911 master-0 kubenswrapper[7620]: I0318 08:54:49.481821 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 08:54:49.481911 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld
Mar 18 08:54:49.481911 master-0 kubenswrapper[7620]: [+]process-running ok
Mar 18 08:54:49.481911 master-0 kubenswrapper[7620]: healthz check failed
Mar 18 08:54:49.482819 master-0 kubenswrapper[7620]: I0318 08:54:49.481936 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 08:54:49.482819 master-0 kubenswrapper[7620]: I0318 08:54:49.482003 7620 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd"
Mar 18 08:54:49.483176 master-0 kubenswrapper[7620]: I0318 08:54:49.483138 7620 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="router" containerStatusID={"Type":"cri-o","ID":"aebf5a50f9283c726e790a6d4456896088c910f33d1ce0e919e46d41b14e21ad"} pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" containerMessage="Container router failed startup probe, will be restarted"
Mar 18 08:54:49.483234 master-0 kubenswrapper[7620]: I0318 08:54:49.483204 7620 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" containerID="cri-o://aebf5a50f9283c726e790a6d4456896088c910f33d1ce0e919e46d41b14e21ad" gracePeriod=3600
Mar 18 08:55:37.153700 master-0 kubenswrapper[7620]: I0318 08:55:37.153642 7620 generic.go:334] "Generic (PLEG): container finished" podID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerID="aebf5a50f9283c726e790a6d4456896088c910f33d1ce0e919e46d41b14e21ad" exitCode=0
Mar 18 08:55:37.154696 master-0 kubenswrapper[7620]: I0318 08:55:37.153721 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" event={"ID":"ad4cf9b2-4e66-4921-a30c-7b659bff06ab","Type":"ContainerDied","Data":"aebf5a50f9283c726e790a6d4456896088c910f33d1ce0e919e46d41b14e21ad"}
Mar 18 08:55:37.154696 master-0 kubenswrapper[7620]: I0318 08:55:37.153823 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" event={"ID":"ad4cf9b2-4e66-4921-a30c-7b659bff06ab","Type":"ContainerStarted","Data":"504f021a6115c5b248227cad9be5358b605b45e875884611b5163b1993a0ac66"}
Mar 18 08:55:37.479809 master-0 kubenswrapper[7620]: I0318 08:55:37.479647 7620 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd"
Mar 18 08:55:37.483105 master-0 kubenswrapper[7620]: I0318 08:55:37.483026 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 08:55:37.483105 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld
Mar 18 08:55:37.483105 master-0 kubenswrapper[7620]: [+]process-running ok
Mar 18 08:55:37.483105 master-0 kubenswrapper[7620]: healthz check failed
Mar 18 08:55:37.483307 master-0 kubenswrapper[7620]: I0318 08:55:37.483139 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 08:55:38.482222 master-0 kubenswrapper[7620]: I0318 08:55:38.482142 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 08:55:38.482222 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld
Mar 18 08:55:38.482222 master-0 kubenswrapper[7620]: [+]process-running ok
Mar 18 08:55:38.482222 master-0 kubenswrapper[7620]: healthz check failed
Mar 18 08:55:38.482974 master-0 kubenswrapper[7620]: I0318 08:55:38.482230 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 08:55:39.484738 master-0 kubenswrapper[7620]: I0318 08:55:39.484643 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 08:55:39.484738 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld
Mar 18 08:55:39.484738 master-0 kubenswrapper[7620]: [+]process-running ok
Mar 18 08:55:39.484738 master-0 kubenswrapper[7620]: healthz check failed
Mar 18 08:55:39.485747 master-0 kubenswrapper[7620]: I0318 08:55:39.484778 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 08:55:40.481696 master-0 kubenswrapper[7620]: I0318 08:55:40.481610 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 08:55:40.481696 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld
Mar 18 08:55:40.481696 master-0 kubenswrapper[7620]: [+]process-running ok
Mar 18 08:55:40.481696 master-0 kubenswrapper[7620]: healthz check failed
Mar 18 08:55:40.482118 master-0 kubenswrapper[7620]: I0318 08:55:40.481704 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 08:55:41.481119 master-0 kubenswrapper[7620]: I0318 08:55:41.481029 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 08:55:41.481119 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld
Mar 18 08:55:41.481119 master-0 kubenswrapper[7620]: [+]process-running ok
Mar 18 08:55:41.481119 master-0 kubenswrapper[7620]: healthz check failed
Mar 18 08:55:41.481846 master-0 kubenswrapper[7620]: I0318 08:55:41.481127 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 08:55:42.481567 master-0 kubenswrapper[7620]: I0318 08:55:42.481493 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 08:55:42.481567 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld
Mar 18 08:55:42.481567 master-0 kubenswrapper[7620]: [+]process-running ok
Mar 18 08:55:42.481567 master-0 kubenswrapper[7620]: healthz check failed
Mar 18 08:55:42.483004 master-0 kubenswrapper[7620]: I0318 08:55:42.481595 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 08:55:43.481590 master-0 kubenswrapper[7620]: I0318 08:55:43.481490 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 08:55:43.481590 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld
Mar 18 08:55:43.481590 master-0 kubenswrapper[7620]: [+]process-running ok
Mar 18 08:55:43.481590 master-0 kubenswrapper[7620]: healthz check failed
Mar 18 08:55:43.482370 master-0 kubenswrapper[7620]: I0318 08:55:43.481584 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 08:55:44.482322 master-0 kubenswrapper[7620]: I0318 08:55:44.482247 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 08:55:44.482322 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld
Mar 18 08:55:44.482322 master-0 kubenswrapper[7620]: [+]process-running ok
Mar 18 08:55:44.482322 master-0 kubenswrapper[7620]: healthz check failed
Mar 18 08:55:44.482322 master-0 kubenswrapper[7620]: I0318 08:55:44.482321 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 08:55:45.483071 master-0 kubenswrapper[7620]: I0318 08:55:45.482966 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 08:55:45.483071 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld
Mar 18 08:55:45.483071 master-0 kubenswrapper[7620]: [+]process-running ok
Mar 18 08:55:45.483071 master-0 kubenswrapper[7620]: healthz check failed
Mar 18 08:55:45.484083 master-0 kubenswrapper[7620]: I0318 08:55:45.483085 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 08:55:46.479644 master-0 kubenswrapper[7620]: I0318 08:55:46.479541 7620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd"
Mar 18 08:55:46.482593 master-0 kubenswrapper[7620]: I0318 08:55:46.482522 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 08:55:46.482593 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld
Mar 18 08:55:46.482593 master-0 kubenswrapper[7620]: [+]process-running ok
Mar 18 08:55:46.482593 master-0 kubenswrapper[7620]: healthz check failed
Mar 18 08:55:46.483175 master-0 kubenswrapper[7620]: I0318 08:55:46.483139 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 08:55:47.482070 master-0 kubenswrapper[7620]: I0318 08:55:47.481980 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 08:55:47.482070 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld
Mar 18 08:55:47.482070 master-0 kubenswrapper[7620]: [+]process-running ok
Mar 18 08:55:47.482070 master-0 kubenswrapper[7620]: healthz check failed
Mar 18 08:55:47.482531 master-0 kubenswrapper[7620]: I0318 08:55:47.482105 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 08:55:48.482667 master-0 kubenswrapper[7620]: I0318 08:55:48.482572 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 08:55:48.482667 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld
Mar 18 08:55:48.482667 master-0 kubenswrapper[7620]: [+]process-running ok
Mar 18 08:55:48.482667 master-0 kubenswrapper[7620]: healthz check failed
Mar 18 08:55:48.484220 master-0 kubenswrapper[7620]: I0318 08:55:48.484165 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 08:55:49.481954
master-0 kubenswrapper[7620]: I0318 08:55:49.481883 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:55:49.481954 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld Mar 18 08:55:49.481954 master-0 kubenswrapper[7620]: [+]process-running ok Mar 18 08:55:49.481954 master-0 kubenswrapper[7620]: healthz check failed Mar 18 08:55:49.482405 master-0 kubenswrapper[7620]: I0318 08:55:49.482003 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:55:50.482467 master-0 kubenswrapper[7620]: I0318 08:55:50.482363 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:55:50.482467 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld Mar 18 08:55:50.482467 master-0 kubenswrapper[7620]: [+]process-running ok Mar 18 08:55:50.482467 master-0 kubenswrapper[7620]: healthz check failed Mar 18 08:55:50.483500 master-0 kubenswrapper[7620]: I0318 08:55:50.482498 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:55:51.488623 master-0 kubenswrapper[7620]: I0318 08:55:51.488528 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe 
failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:55:51.488623 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld Mar 18 08:55:51.488623 master-0 kubenswrapper[7620]: [+]process-running ok Mar 18 08:55:51.488623 master-0 kubenswrapper[7620]: healthz check failed Mar 18 08:55:51.489296 master-0 kubenswrapper[7620]: I0318 08:55:51.488723 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:55:52.482501 master-0 kubenswrapper[7620]: I0318 08:55:52.482421 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:55:52.482501 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld Mar 18 08:55:52.482501 master-0 kubenswrapper[7620]: [+]process-running ok Mar 18 08:55:52.482501 master-0 kubenswrapper[7620]: healthz check failed Mar 18 08:55:52.482807 master-0 kubenswrapper[7620]: I0318 08:55:52.482531 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:55:53.482074 master-0 kubenswrapper[7620]: I0318 08:55:53.481966 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:55:53.482074 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld Mar 18 08:55:53.482074 master-0 
kubenswrapper[7620]: [+]process-running ok Mar 18 08:55:53.482074 master-0 kubenswrapper[7620]: healthz check failed Mar 18 08:55:53.482074 master-0 kubenswrapper[7620]: I0318 08:55:53.482076 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:55:54.482320 master-0 kubenswrapper[7620]: I0318 08:55:54.482243 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:55:54.482320 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld Mar 18 08:55:54.482320 master-0 kubenswrapper[7620]: [+]process-running ok Mar 18 08:55:54.482320 master-0 kubenswrapper[7620]: healthz check failed Mar 18 08:55:54.483664 master-0 kubenswrapper[7620]: I0318 08:55:54.483615 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:55:55.481756 master-0 kubenswrapper[7620]: I0318 08:55:55.481654 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:55:55.481756 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld Mar 18 08:55:55.481756 master-0 kubenswrapper[7620]: [+]process-running ok Mar 18 08:55:55.481756 master-0 kubenswrapper[7620]: healthz check failed Mar 18 08:55:55.482340 master-0 kubenswrapper[7620]: I0318 08:55:55.481772 7620 prober.go:107] "Probe 
failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:55:56.482650 master-0 kubenswrapper[7620]: I0318 08:55:56.482569 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:55:56.482650 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld Mar 18 08:55:56.482650 master-0 kubenswrapper[7620]: [+]process-running ok Mar 18 08:55:56.482650 master-0 kubenswrapper[7620]: healthz check failed Mar 18 08:55:56.483536 master-0 kubenswrapper[7620]: I0318 08:55:56.482707 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:55:57.167790 master-0 kubenswrapper[7620]: I0318 08:55:57.167667 7620 scope.go:117] "RemoveContainer" containerID="f11de43d97f3eb0705ee274fd9f116f7e697707e7bd79e0504efdd85e51224f7" Mar 18 08:55:57.481170 master-0 kubenswrapper[7620]: I0318 08:55:57.481037 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:55:57.481170 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld Mar 18 08:55:57.481170 master-0 kubenswrapper[7620]: [+]process-running ok Mar 18 08:55:57.481170 master-0 kubenswrapper[7620]: healthz check failed Mar 18 08:55:57.481585 master-0 kubenswrapper[7620]: I0318 08:55:57.481513 7620 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:55:58.483485 master-0 kubenswrapper[7620]: I0318 08:55:58.482493 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:55:58.483485 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld Mar 18 08:55:58.483485 master-0 kubenswrapper[7620]: [+]process-running ok Mar 18 08:55:58.483485 master-0 kubenswrapper[7620]: healthz check failed Mar 18 08:55:58.483485 master-0 kubenswrapper[7620]: I0318 08:55:58.482612 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:55:59.482224 master-0 kubenswrapper[7620]: I0318 08:55:59.482125 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:55:59.482224 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld Mar 18 08:55:59.482224 master-0 kubenswrapper[7620]: [+]process-running ok Mar 18 08:55:59.482224 master-0 kubenswrapper[7620]: healthz check failed Mar 18 08:55:59.482692 master-0 kubenswrapper[7620]: I0318 08:55:59.482234 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:56:00.482229 
master-0 kubenswrapper[7620]: I0318 08:56:00.482127 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:56:00.482229 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld Mar 18 08:56:00.482229 master-0 kubenswrapper[7620]: [+]process-running ok Mar 18 08:56:00.482229 master-0 kubenswrapper[7620]: healthz check failed Mar 18 08:56:00.483471 master-0 kubenswrapper[7620]: I0318 08:56:00.482258 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:56:01.482424 master-0 kubenswrapper[7620]: I0318 08:56:01.482329 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:56:01.482424 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld Mar 18 08:56:01.482424 master-0 kubenswrapper[7620]: [+]process-running ok Mar 18 08:56:01.482424 master-0 kubenswrapper[7620]: healthz check failed Mar 18 08:56:01.483453 master-0 kubenswrapper[7620]: I0318 08:56:01.482451 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:56:02.499551 master-0 kubenswrapper[7620]: I0318 08:56:02.499482 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe 
failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:56:02.499551 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld Mar 18 08:56:02.499551 master-0 kubenswrapper[7620]: [+]process-running ok Mar 18 08:56:02.499551 master-0 kubenswrapper[7620]: healthz check failed Mar 18 08:56:02.500154 master-0 kubenswrapper[7620]: I0318 08:56:02.499565 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:56:03.483833 master-0 kubenswrapper[7620]: I0318 08:56:03.483728 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:56:03.483833 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld Mar 18 08:56:03.483833 master-0 kubenswrapper[7620]: [+]process-running ok Mar 18 08:56:03.483833 master-0 kubenswrapper[7620]: healthz check failed Mar 18 08:56:03.484253 master-0 kubenswrapper[7620]: I0318 08:56:03.483911 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:56:04.482209 master-0 kubenswrapper[7620]: I0318 08:56:04.482147 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:56:04.482209 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld Mar 18 08:56:04.482209 master-0 
kubenswrapper[7620]: [+]process-running ok Mar 18 08:56:04.482209 master-0 kubenswrapper[7620]: healthz check failed Mar 18 08:56:04.482824 master-0 kubenswrapper[7620]: I0318 08:56:04.482216 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:56:05.243538 master-0 kubenswrapper[7620]: I0318 08:56:05.242081 7620 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-mpw9b"] Mar 18 08:56:05.243538 master-0 kubenswrapper[7620]: I0318 08:56:05.243077 7620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-mpw9b" Mar 18 08:56:05.252666 master-0 kubenswrapper[7620]: I0318 08:56:05.252631 7620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Mar 18 08:56:05.253200 master-0 kubenswrapper[7620]: I0318 08:56:05.253182 7620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Mar 18 08:56:05.253364 master-0 kubenswrapper[7620]: I0318 08:56:05.253305 7620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-k5mpr" Mar 18 08:56:05.254295 master-0 kubenswrapper[7620]: I0318 08:56:05.254271 7620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Mar 18 08:56:05.268425 master-0 kubenswrapper[7620]: I0318 08:56:05.268374 7620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-mpw9b"] Mar 18 08:56:05.363056 master-0 kubenswrapper[7620]: I0318 08:56:05.362994 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ttnk9\" (UniqueName: 
\"kubernetes.io/projected/d0272f7c-bedc-44cf-9790-88e10e6dda03-kube-api-access-ttnk9\") pod \"ingress-canary-mpw9b\" (UID: \"d0272f7c-bedc-44cf-9790-88e10e6dda03\") " pod="openshift-ingress-canary/ingress-canary-mpw9b" Mar 18 08:56:05.363601 master-0 kubenswrapper[7620]: I0318 08:56:05.363535 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d0272f7c-bedc-44cf-9790-88e10e6dda03-cert\") pod \"ingress-canary-mpw9b\" (UID: \"d0272f7c-bedc-44cf-9790-88e10e6dda03\") " pod="openshift-ingress-canary/ingress-canary-mpw9b" Mar 18 08:56:05.393630 master-0 kubenswrapper[7620]: I0318 08:56:05.393555 7620 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-66b84d69b-7h94d_94e9e4fc-bfaa-4ba5-ad6d-fe76b91932e9/ingress-operator/2.log" Mar 18 08:56:05.394637 master-0 kubenswrapper[7620]: I0318 08:56:05.394587 7620 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-66b84d69b-7h94d_94e9e4fc-bfaa-4ba5-ad6d-fe76b91932e9/ingress-operator/1.log" Mar 18 08:56:05.395182 master-0 kubenswrapper[7620]: I0318 08:56:05.395144 7620 generic.go:334] "Generic (PLEG): container finished" podID="94e9e4fc-bfaa-4ba5-ad6d-fe76b91932e9" containerID="fad64d39172d17151c921b86e24888209413b262345fa2cee0651c733f8df0a1" exitCode=1 Mar 18 08:56:05.395255 master-0 kubenswrapper[7620]: I0318 08:56:05.395180 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-66b84d69b-7h94d" event={"ID":"94e9e4fc-bfaa-4ba5-ad6d-fe76b91932e9","Type":"ContainerDied","Data":"fad64d39172d17151c921b86e24888209413b262345fa2cee0651c733f8df0a1"} Mar 18 08:56:05.395255 master-0 kubenswrapper[7620]: I0318 08:56:05.395218 7620 scope.go:117] "RemoveContainer" containerID="458f7a943f236b1eac07ca69624114d084866d6f79f7c12e67735ee4e517390d" Mar 18 08:56:05.396024 master-0 kubenswrapper[7620]: I0318 
08:56:05.395974 7620 scope.go:117] "RemoveContainer" containerID="fad64d39172d17151c921b86e24888209413b262345fa2cee0651c733f8df0a1" Mar 18 08:56:05.396526 master-0 kubenswrapper[7620]: E0318 08:56:05.396480 7620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ingress-operator pod=ingress-operator-66b84d69b-7h94d_openshift-ingress-operator(94e9e4fc-bfaa-4ba5-ad6d-fe76b91932e9)\"" pod="openshift-ingress-operator/ingress-operator-66b84d69b-7h94d" podUID="94e9e4fc-bfaa-4ba5-ad6d-fe76b91932e9" Mar 18 08:56:05.465659 master-0 kubenswrapper[7620]: I0318 08:56:05.465598 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d0272f7c-bedc-44cf-9790-88e10e6dda03-cert\") pod \"ingress-canary-mpw9b\" (UID: \"d0272f7c-bedc-44cf-9790-88e10e6dda03\") " pod="openshift-ingress-canary/ingress-canary-mpw9b" Mar 18 08:56:05.466114 master-0 kubenswrapper[7620]: I0318 08:56:05.466068 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ttnk9\" (UniqueName: \"kubernetes.io/projected/d0272f7c-bedc-44cf-9790-88e10e6dda03-kube-api-access-ttnk9\") pod \"ingress-canary-mpw9b\" (UID: \"d0272f7c-bedc-44cf-9790-88e10e6dda03\") " pod="openshift-ingress-canary/ingress-canary-mpw9b" Mar 18 08:56:05.473889 master-0 kubenswrapper[7620]: I0318 08:56:05.473832 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d0272f7c-bedc-44cf-9790-88e10e6dda03-cert\") pod \"ingress-canary-mpw9b\" (UID: \"d0272f7c-bedc-44cf-9790-88e10e6dda03\") " pod="openshift-ingress-canary/ingress-canary-mpw9b" Mar 18 08:56:05.481151 master-0 kubenswrapper[7620]: I0318 08:56:05.481107 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe 
status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:56:05.481151 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld Mar 18 08:56:05.481151 master-0 kubenswrapper[7620]: [+]process-running ok Mar 18 08:56:05.481151 master-0 kubenswrapper[7620]: healthz check failed Mar 18 08:56:05.481312 master-0 kubenswrapper[7620]: I0318 08:56:05.481166 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:56:05.493764 master-0 kubenswrapper[7620]: I0318 08:56:05.493686 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ttnk9\" (UniqueName: \"kubernetes.io/projected/d0272f7c-bedc-44cf-9790-88e10e6dda03-kube-api-access-ttnk9\") pod \"ingress-canary-mpw9b\" (UID: \"d0272f7c-bedc-44cf-9790-88e10e6dda03\") " pod="openshift-ingress-canary/ingress-canary-mpw9b" Mar 18 08:56:05.596706 master-0 kubenswrapper[7620]: I0318 08:56:05.596631 7620 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-mpw9b" Mar 18 08:56:06.190728 master-0 kubenswrapper[7620]: I0318 08:56:06.190563 7620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-mpw9b"] Mar 18 08:56:06.407339 master-0 kubenswrapper[7620]: I0318 08:56:06.407209 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-mpw9b" event={"ID":"d0272f7c-bedc-44cf-9790-88e10e6dda03","Type":"ContainerStarted","Data":"17afa0783f63bbd1436b466f1c9ed291d9ca3a0d5040a017f55d9d9e92c335ac"} Mar 18 08:56:06.407548 master-0 kubenswrapper[7620]: I0318 08:56:06.407352 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-mpw9b" event={"ID":"d0272f7c-bedc-44cf-9790-88e10e6dda03","Type":"ContainerStarted","Data":"818594107c19b8863e506e8d4f0498cc1facb30c01ff790168223f67dc1385ac"} Mar 18 08:56:06.410626 master-0 kubenswrapper[7620]: I0318 08:56:06.410602 7620 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-66b84d69b-7h94d_94e9e4fc-bfaa-4ba5-ad6d-fe76b91932e9/ingress-operator/2.log" Mar 18 08:56:06.446244 master-0 kubenswrapper[7620]: I0318 08:56:06.446087 7620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-mpw9b" podStartSLOduration=1.446054429 podStartE2EDuration="1.446054429s" podCreationTimestamp="2026-03-18 08:56:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 08:56:06.442998055 +0000 UTC m=+430.437779877" watchObservedRunningTime="2026-03-18 08:56:06.446054429 +0000 UTC m=+430.440836201" Mar 18 08:56:06.482343 master-0 kubenswrapper[7620]: I0318 08:56:06.482259 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure 
output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:56:06.482343 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld Mar 18 08:56:06.482343 master-0 kubenswrapper[7620]: [+]process-running ok Mar 18 08:56:06.482343 master-0 kubenswrapper[7620]: healthz check failed Mar 18 08:56:06.482626 master-0 kubenswrapper[7620]: I0318 08:56:06.482413 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:56:07.482159 master-0 kubenswrapper[7620]: I0318 08:56:07.482064 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:56:07.482159 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld Mar 18 08:56:07.482159 master-0 kubenswrapper[7620]: [+]process-running ok Mar 18 08:56:07.482159 master-0 kubenswrapper[7620]: healthz check failed Mar 18 08:56:07.483343 master-0 kubenswrapper[7620]: I0318 08:56:07.482178 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:56:08.481694 master-0 kubenswrapper[7620]: I0318 08:56:08.481586 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:56:08.481694 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld Mar 18 08:56:08.481694 
master-0 kubenswrapper[7620]: [+]process-running ok
Mar 18 08:56:08.481694 master-0 kubenswrapper[7620]: healthz check failed
Mar 18 08:56:08.482157 master-0 kubenswrapper[7620]: I0318 08:56:08.481720 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 08:56:09.482432 master-0 kubenswrapper[7620]: I0318 08:56:09.482212 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 08:56:09.482432 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld
Mar 18 08:56:09.482432 master-0 kubenswrapper[7620]: [+]process-running ok
Mar 18 08:56:09.482432 master-0 kubenswrapper[7620]: healthz check failed
Mar 18 08:56:09.482432 master-0 kubenswrapper[7620]: I0318 08:56:09.482365 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 08:56:10.482528 master-0 kubenswrapper[7620]: I0318 08:56:10.482452 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 08:56:10.482528 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld
Mar 18 08:56:10.482528 master-0 kubenswrapper[7620]: [+]process-running ok
Mar 18 08:56:10.482528 master-0 kubenswrapper[7620]: healthz check failed
Mar 18 08:56:10.487409 master-0 kubenswrapper[7620]: I0318 08:56:10.482573 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 08:56:11.481996 master-0 kubenswrapper[7620]: I0318 08:56:11.481939 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 08:56:11.481996 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld
Mar 18 08:56:11.481996 master-0 kubenswrapper[7620]: [+]process-running ok
Mar 18 08:56:11.481996 master-0 kubenswrapper[7620]: healthz check failed
Mar 18 08:56:11.482433 master-0 kubenswrapper[7620]: I0318 08:56:11.482017 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 08:56:12.482321 master-0 kubenswrapper[7620]: I0318 08:56:12.482215 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 08:56:12.482321 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld
Mar 18 08:56:12.482321 master-0 kubenswrapper[7620]: [+]process-running ok
Mar 18 08:56:12.482321 master-0 kubenswrapper[7620]: healthz check failed
Mar 18 08:56:12.483404 master-0 kubenswrapper[7620]: I0318 08:56:12.482339 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 08:56:13.482007 master-0 kubenswrapper[7620]: I0318 08:56:13.481914 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 08:56:13.482007 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld
Mar 18 08:56:13.482007 master-0 kubenswrapper[7620]: [+]process-running ok
Mar 18 08:56:13.482007 master-0 kubenswrapper[7620]: healthz check failed
Mar 18 08:56:13.482416 master-0 kubenswrapper[7620]: I0318 08:56:13.482030 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 08:56:14.483095 master-0 kubenswrapper[7620]: I0318 08:56:14.482980 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 08:56:14.483095 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld
Mar 18 08:56:14.483095 master-0 kubenswrapper[7620]: [+]process-running ok
Mar 18 08:56:14.483095 master-0 kubenswrapper[7620]: healthz check failed
Mar 18 08:56:14.483095 master-0 kubenswrapper[7620]: I0318 08:56:14.483096 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 08:56:15.483688 master-0 kubenswrapper[7620]: I0318 08:56:15.483591 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 08:56:15.483688 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld
Mar 18 08:56:15.483688 master-0 kubenswrapper[7620]: [+]process-running ok
Mar 18 08:56:15.483688 master-0 kubenswrapper[7620]: healthz check failed
Mar 18 08:56:15.484802 master-0 kubenswrapper[7620]: I0318 08:56:15.483722 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 08:56:16.483217 master-0 kubenswrapper[7620]: I0318 08:56:16.483160 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 08:56:16.483217 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld
Mar 18 08:56:16.483217 master-0 kubenswrapper[7620]: [+]process-running ok
Mar 18 08:56:16.483217 master-0 kubenswrapper[7620]: healthz check failed
Mar 18 08:56:16.483519 master-0 kubenswrapper[7620]: I0318 08:56:16.483234 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 08:56:17.483925 master-0 kubenswrapper[7620]: I0318 08:56:17.483780 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 08:56:17.483925 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld
Mar 18 08:56:17.483925 master-0 kubenswrapper[7620]: [+]process-running ok
Mar 18 08:56:17.483925 master-0 kubenswrapper[7620]: healthz check failed
Mar 18 08:56:17.485178 master-0 kubenswrapper[7620]: I0318 08:56:17.483944 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 08:56:18.224465 master-0 kubenswrapper[7620]: I0318 08:56:18.224422 7620 scope.go:117] "RemoveContainer" containerID="fad64d39172d17151c921b86e24888209413b262345fa2cee0651c733f8df0a1"
Mar 18 08:56:18.225074 master-0 kubenswrapper[7620]: E0318 08:56:18.225042 7620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ingress-operator pod=ingress-operator-66b84d69b-7h94d_openshift-ingress-operator(94e9e4fc-bfaa-4ba5-ad6d-fe76b91932e9)\"" pod="openshift-ingress-operator/ingress-operator-66b84d69b-7h94d" podUID="94e9e4fc-bfaa-4ba5-ad6d-fe76b91932e9"
Mar 18 08:56:18.483473 master-0 kubenswrapper[7620]: I0318 08:56:18.483418 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 08:56:18.483473 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld
Mar 18 08:56:18.483473 master-0 kubenswrapper[7620]: [+]process-running ok
Mar 18 08:56:18.483473 master-0 kubenswrapper[7620]: healthz check failed
Mar 18 08:56:18.484060 master-0 kubenswrapper[7620]: I0318 08:56:18.484009 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 08:56:19.481987 master-0 kubenswrapper[7620]: I0318 08:56:19.481795 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 08:56:19.481987 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld
Mar 18 08:56:19.481987 master-0 kubenswrapper[7620]: [+]process-running ok
Mar 18 08:56:19.481987 master-0 kubenswrapper[7620]: healthz check failed
Mar 18 08:56:19.482649 master-0 kubenswrapper[7620]: I0318 08:56:19.482024 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 08:56:20.482377 master-0 kubenswrapper[7620]: I0318 08:56:20.482199 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 08:56:20.482377 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld
Mar 18 08:56:20.482377 master-0 kubenswrapper[7620]: [+]process-running ok
Mar 18 08:56:20.482377 master-0 kubenswrapper[7620]: healthz check failed
Mar 18 08:56:20.482377 master-0 kubenswrapper[7620]: I0318 08:56:20.482324 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 08:56:21.482429 master-0 kubenswrapper[7620]: I0318 08:56:21.482324 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 08:56:21.482429 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld
Mar 18 08:56:21.482429 master-0 kubenswrapper[7620]: [+]process-running ok
Mar 18 08:56:21.482429 master-0 kubenswrapper[7620]: healthz check failed
Mar 18 08:56:21.483269 master-0 kubenswrapper[7620]: I0318 08:56:21.482464 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 08:56:22.483062 master-0 kubenswrapper[7620]: I0318 08:56:22.482923 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 08:56:22.483062 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld
Mar 18 08:56:22.483062 master-0 kubenswrapper[7620]: [+]process-running ok
Mar 18 08:56:22.483062 master-0 kubenswrapper[7620]: healthz check failed
Mar 18 08:56:22.483062 master-0 kubenswrapper[7620]: I0318 08:56:22.483039 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 08:56:23.482589 master-0 kubenswrapper[7620]: I0318 08:56:23.482475 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 08:56:23.482589 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld
Mar 18 08:56:23.482589 master-0 kubenswrapper[7620]: [+]process-running ok
Mar 18 08:56:23.482589 master-0 kubenswrapper[7620]: healthz check failed
Mar 18 08:56:23.482589 master-0 kubenswrapper[7620]: I0318 08:56:23.482606 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 08:56:24.481395 master-0 kubenswrapper[7620]: I0318 08:56:24.481292 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 08:56:24.481395 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld
Mar 18 08:56:24.481395 master-0 kubenswrapper[7620]: [+]process-running ok
Mar 18 08:56:24.481395 master-0 kubenswrapper[7620]: healthz check failed
Mar 18 08:56:24.481395 master-0 kubenswrapper[7620]: I0318 08:56:24.481380 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 08:56:25.482883 master-0 kubenswrapper[7620]: I0318 08:56:25.482776 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 08:56:25.482883 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld
Mar 18 08:56:25.482883 master-0 kubenswrapper[7620]: [+]process-running ok
Mar 18 08:56:25.482883 master-0 kubenswrapper[7620]: healthz check failed
Mar 18 08:56:25.483454 master-0 kubenswrapper[7620]: I0318 08:56:25.482925 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 08:56:26.481646 master-0 kubenswrapper[7620]: I0318 08:56:26.481549 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 08:56:26.481646 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld
Mar 18 08:56:26.481646 master-0 kubenswrapper[7620]: [+]process-running ok
Mar 18 08:56:26.481646 master-0 kubenswrapper[7620]: healthz check failed
Mar 18 08:56:26.482286 master-0 kubenswrapper[7620]: I0318 08:56:26.481677 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 08:56:27.483397 master-0 kubenswrapper[7620]: I0318 08:56:27.483302 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 08:56:27.483397 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld
Mar 18 08:56:27.483397 master-0 kubenswrapper[7620]: [+]process-running ok
Mar 18 08:56:27.483397 master-0 kubenswrapper[7620]: healthz check failed
Mar 18 08:56:27.484661 master-0 kubenswrapper[7620]: I0318 08:56:27.483420 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 08:56:28.483032 master-0 kubenswrapper[7620]: I0318 08:56:28.482909 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 08:56:28.483032 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld
Mar 18 08:56:28.483032 master-0 kubenswrapper[7620]: [+]process-running ok
Mar 18 08:56:28.483032 master-0 kubenswrapper[7620]: healthz check failed
Mar 18 08:56:28.484888 master-0 kubenswrapper[7620]: I0318 08:56:28.483080 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 08:56:29.482758 master-0 kubenswrapper[7620]: I0318 08:56:29.482644 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 08:56:29.482758 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld
Mar 18 08:56:29.482758 master-0 kubenswrapper[7620]: [+]process-running ok
Mar 18 08:56:29.482758 master-0 kubenswrapper[7620]: healthz check failed
Mar 18 08:56:29.482758 master-0 kubenswrapper[7620]: I0318 08:56:29.482749 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 08:56:30.484744 master-0 kubenswrapper[7620]: I0318 08:56:30.484616 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 08:56:30.484744 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld
Mar 18 08:56:30.484744 master-0 kubenswrapper[7620]: [+]process-running ok
Mar 18 08:56:30.484744 master-0 kubenswrapper[7620]: healthz check failed
Mar 18 08:56:30.486069 master-0 kubenswrapper[7620]: I0318 08:56:30.484754 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 08:56:31.483554 master-0 kubenswrapper[7620]: I0318 08:56:31.483296 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 08:56:31.483554 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld
Mar 18 08:56:31.483554 master-0 kubenswrapper[7620]: [+]process-running ok
Mar 18 08:56:31.483554 master-0 kubenswrapper[7620]: healthz check failed
Mar 18 08:56:31.483554 master-0 kubenswrapper[7620]: I0318 08:56:31.483444 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 08:56:32.482670 master-0 kubenswrapper[7620]: I0318 08:56:32.482585 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 08:56:32.482670 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld
Mar 18 08:56:32.482670 master-0 kubenswrapper[7620]: [+]process-running ok
Mar 18 08:56:32.482670 master-0 kubenswrapper[7620]: healthz check failed
Mar 18 08:56:32.482670 master-0 kubenswrapper[7620]: I0318 08:56:32.482677 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 08:56:33.226863 master-0 kubenswrapper[7620]: I0318 08:56:33.226766 7620 scope.go:117] "RemoveContainer" containerID="fad64d39172d17151c921b86e24888209413b262345fa2cee0651c733f8df0a1"
Mar 18 08:56:33.482033 master-0 kubenswrapper[7620]: I0318 08:56:33.481847 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 08:56:33.482033 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld
Mar 18 08:56:33.482033 master-0 kubenswrapper[7620]: [+]process-running ok
Mar 18 08:56:33.482033 master-0 kubenswrapper[7620]: healthz check failed
Mar 18 08:56:33.482033 master-0 kubenswrapper[7620]: I0318 08:56:33.481962 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 08:56:34.484138 master-0 kubenswrapper[7620]: I0318 08:56:34.484030 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 08:56:34.484138 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld
Mar 18 08:56:34.484138 master-0 kubenswrapper[7620]: [+]process-running ok
Mar 18 08:56:34.484138 master-0 kubenswrapper[7620]: healthz check failed
Mar 18 08:56:34.485237 master-0 kubenswrapper[7620]: I0318 08:56:34.484146 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 08:56:34.647895 master-0 kubenswrapper[7620]: I0318 08:56:34.645506 7620 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-66b84d69b-7h94d_94e9e4fc-bfaa-4ba5-ad6d-fe76b91932e9/ingress-operator/2.log"
Mar 18 08:56:34.651885 master-0 kubenswrapper[7620]: I0318 08:56:34.649267 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-66b84d69b-7h94d" event={"ID":"94e9e4fc-bfaa-4ba5-ad6d-fe76b91932e9","Type":"ContainerStarted","Data":"1e621180058478223aaee3c2dc23f5260e37988416b72d674dfdaa92a6a8ef11"}
Mar 18 08:56:35.482836 master-0 kubenswrapper[7620]: I0318 08:56:35.482761 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 08:56:35.482836 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld
Mar 18 08:56:35.482836 master-0 kubenswrapper[7620]: [+]process-running ok
Mar 18 08:56:35.482836 master-0 kubenswrapper[7620]: healthz check failed
Mar 18 08:56:35.483286 master-0 kubenswrapper[7620]: I0318 08:56:35.482888 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 08:56:36.481879 master-0 kubenswrapper[7620]: I0318 08:56:36.481713 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 08:56:36.481879 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld
Mar 18 08:56:36.481879 master-0 kubenswrapper[7620]: [+]process-running ok
Mar 18 08:56:36.481879 master-0 kubenswrapper[7620]: healthz check failed
Mar 18 08:56:36.481879 master-0 kubenswrapper[7620]: I0318 08:56:36.481824 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 08:56:37.482235 master-0 kubenswrapper[7620]: I0318 08:56:37.482158 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 08:56:37.482235 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld
Mar 18 08:56:37.482235 master-0 kubenswrapper[7620]: [+]process-running ok
Mar 18 08:56:37.482235 master-0 kubenswrapper[7620]: healthz check failed
Mar 18 08:56:37.482959 master-0 kubenswrapper[7620]: I0318 08:56:37.482272 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 08:56:38.483395 master-0 kubenswrapper[7620]: I0318 08:56:38.483276 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 08:56:38.483395 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld
Mar 18 08:56:38.483395 master-0 kubenswrapper[7620]: [+]process-running ok
Mar 18 08:56:38.483395 master-0 kubenswrapper[7620]: healthz check failed
Mar 18 08:56:38.484532 master-0 kubenswrapper[7620]: I0318 08:56:38.483436 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 08:56:39.482451 master-0 kubenswrapper[7620]: I0318 08:56:39.482354 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 08:56:39.482451 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld
Mar 18 08:56:39.482451 master-0 kubenswrapper[7620]: [+]process-running ok
Mar 18 08:56:39.482451 master-0 kubenswrapper[7620]: healthz check failed
Mar 18 08:56:39.483012 master-0 kubenswrapper[7620]: I0318 08:56:39.482455 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 08:56:40.482697 master-0 kubenswrapper[7620]: I0318 08:56:40.482570 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 08:56:40.482697 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld
Mar 18 08:56:40.482697 master-0 kubenswrapper[7620]: [+]process-running ok
Mar 18 08:56:40.482697 master-0 kubenswrapper[7620]: healthz check failed
Mar 18 08:56:40.483688 master-0 kubenswrapper[7620]: I0318 08:56:40.482735 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 08:56:41.481641 master-0 kubenswrapper[7620]: I0318 08:56:41.481578 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 08:56:41.481641 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld
Mar 18 08:56:41.481641 master-0 kubenswrapper[7620]: [+]process-running ok
Mar 18 08:56:41.481641 master-0 kubenswrapper[7620]: healthz check failed
Mar 18 08:56:41.481990 master-0 kubenswrapper[7620]: I0318 08:56:41.481678 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 08:56:42.484119 master-0 kubenswrapper[7620]: I0318 08:56:42.483985 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 08:56:42.484119 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld
Mar 18 08:56:42.484119 master-0 kubenswrapper[7620]: [+]process-running ok
Mar 18 08:56:42.484119 master-0 kubenswrapper[7620]: healthz check failed
Mar 18 08:56:42.485241 master-0 kubenswrapper[7620]: I0318 08:56:42.484134 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 08:56:43.483122 master-0 kubenswrapper[7620]: I0318 08:56:43.483030 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 08:56:43.483122 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld
Mar 18 08:56:43.483122 master-0 kubenswrapper[7620]: [+]process-running ok
Mar 18 08:56:43.483122 master-0 kubenswrapper[7620]: healthz check failed
Mar 18 08:56:43.483572 master-0 kubenswrapper[7620]: I0318 08:56:43.483125 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 08:56:44.480879 master-0 kubenswrapper[7620]: I0318 08:56:44.480810 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 08:56:44.480879 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld
Mar 18 08:56:44.480879 master-0 kubenswrapper[7620]: [+]process-running ok
Mar 18 08:56:44.480879 master-0 kubenswrapper[7620]: healthz check failed
Mar 18 08:56:44.481543 master-0 kubenswrapper[7620]: I0318 08:56:44.480911 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 08:56:45.482492 master-0 kubenswrapper[7620]: I0318 08:56:45.482425 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 08:56:45.482492 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld
Mar 18 08:56:45.482492 master-0 kubenswrapper[7620]: [+]process-running ok
Mar 18 08:56:45.482492 master-0 kubenswrapper[7620]: healthz check failed
Mar 18 08:56:45.483606 master-0 kubenswrapper[7620]: I0318 08:56:45.482527 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 08:56:46.482583 master-0 kubenswrapper[7620]: I0318 08:56:46.482524 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 08:56:46.482583 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld
Mar 18 08:56:46.482583 master-0 kubenswrapper[7620]: [+]process-running ok
Mar 18 08:56:46.482583 master-0 kubenswrapper[7620]: healthz check failed
Mar 18 08:56:46.483513 master-0 kubenswrapper[7620]: I0318 08:56:46.482605 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 08:56:47.483275 master-0 kubenswrapper[7620]: I0318 08:56:47.483177 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 08:56:47.483275 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld
Mar 18 08:56:47.483275 master-0 kubenswrapper[7620]: [+]process-running ok
Mar 18 08:56:47.483275 master-0 kubenswrapper[7620]: healthz check failed
Mar 18 08:56:47.484112 master-0 kubenswrapper[7620]: I0318 08:56:47.483298 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 08:56:48.482786 master-0 kubenswrapper[7620]: I0318 08:56:48.482670 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 08:56:48.482786 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld
Mar 18 08:56:48.482786 master-0 kubenswrapper[7620]: [+]process-running ok
Mar 18 08:56:48.482786 master-0 kubenswrapper[7620]: healthz check failed
Mar 18 08:56:48.483261 master-0 kubenswrapper[7620]: I0318 08:56:48.482834 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 08:56:49.482310 master-0 kubenswrapper[7620]: I0318 08:56:49.482211 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 08:56:49.482310 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld
Mar 18 08:56:49.482310 master-0 kubenswrapper[7620]: [+]process-running ok
Mar 18 08:56:49.482310 master-0 kubenswrapper[7620]: healthz check failed
Mar 18 08:56:49.483280 master-0 kubenswrapper[7620]: I0318 08:56:49.482316 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 08:56:50.483160 master-0 kubenswrapper[7620]: I0318 08:56:50.483040 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 08:56:50.483160 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld
Mar 18 08:56:50.483160 master-0 kubenswrapper[7620]: [+]process-running ok
Mar 18 08:56:50.483160 master-0 kubenswrapper[7620]: healthz check failed
Mar 18 08:56:50.484245 master-0 kubenswrapper[7620]: I0318 08:56:50.483181 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 08:56:51.481807 master-0 kubenswrapper[7620]: I0318 08:56:51.481723 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 08:56:51.481807 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld
Mar 18 08:56:51.481807 master-0 kubenswrapper[7620]: [+]process-running ok
Mar 18 08:56:51.481807 master-0 kubenswrapper[7620]: healthz check failed
Mar 18 08:56:51.482282 master-0 kubenswrapper[7620]: I0318 08:56:51.481834 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 08:56:52.483779 master-0 kubenswrapper[7620]: I0318 08:56:52.483668 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 08:56:52.483779 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld
Mar 18 08:56:52.483779 master-0 kubenswrapper[7620]: [+]process-running ok
Mar 18 08:56:52.483779 master-0 kubenswrapper[7620]: healthz check failed
Mar 18 08:56:52.485127 master-0 kubenswrapper[7620]: I0318 08:56:52.483824 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 08:56:53.482430 master-0 kubenswrapper[7620]: I0318 08:56:53.482336 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 08:56:53.482430 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld
Mar 18 08:56:53.482430 master-0 kubenswrapper[7620]: [+]process-running ok
Mar 18 08:56:53.482430 master-0 kubenswrapper[7620]: healthz check failed
Mar 18 08:56:53.483059 master-0 kubenswrapper[7620]: I0318 08:56:53.482457 7620 prober.go:107] "Probe
failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:56:54.482951 master-0 kubenswrapper[7620]: I0318 08:56:54.482879 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:56:54.482951 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld Mar 18 08:56:54.482951 master-0 kubenswrapper[7620]: [+]process-running ok Mar 18 08:56:54.482951 master-0 kubenswrapper[7620]: healthz check failed Mar 18 08:56:54.484034 master-0 kubenswrapper[7620]: I0318 08:56:54.483976 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:56:55.481313 master-0 kubenswrapper[7620]: I0318 08:56:55.481235 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:56:55.481313 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld Mar 18 08:56:55.481313 master-0 kubenswrapper[7620]: [+]process-running ok Mar 18 08:56:55.481313 master-0 kubenswrapper[7620]: healthz check failed Mar 18 08:56:55.481651 master-0 kubenswrapper[7620]: I0318 08:56:55.481340 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 
500" Mar 18 08:56:56.481683 master-0 kubenswrapper[7620]: I0318 08:56:56.481612 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:56:56.481683 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld Mar 18 08:56:56.481683 master-0 kubenswrapper[7620]: [+]process-running ok Mar 18 08:56:56.481683 master-0 kubenswrapper[7620]: healthz check failed Mar 18 08:56:56.482307 master-0 kubenswrapper[7620]: I0318 08:56:56.481697 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:56:57.218090 master-0 kubenswrapper[7620]: I0318 08:56:57.217998 7620 scope.go:117] "RemoveContainer" containerID="963e77396932fd5dde20fd2229477fc2520d4deed14e4daee66a481b11a60005" Mar 18 08:56:57.250919 master-0 kubenswrapper[7620]: I0318 08:56:57.250841 7620 scope.go:117] "RemoveContainer" containerID="515eb31f006a3681b4b8a4d7b68b6a09e8acc9b88a57a1196829487e2994618c" Mar 18 08:56:57.482691 master-0 kubenswrapper[7620]: I0318 08:56:57.482510 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:56:57.482691 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld Mar 18 08:56:57.482691 master-0 kubenswrapper[7620]: [+]process-running ok Mar 18 08:56:57.482691 master-0 kubenswrapper[7620]: healthz check failed Mar 18 08:56:57.483708 master-0 kubenswrapper[7620]: I0318 08:56:57.483669 7620 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:56:58.483128 master-0 kubenswrapper[7620]: I0318 08:56:58.483013 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:56:58.483128 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld Mar 18 08:56:58.483128 master-0 kubenswrapper[7620]: [+]process-running ok Mar 18 08:56:58.483128 master-0 kubenswrapper[7620]: healthz check failed Mar 18 08:56:58.484002 master-0 kubenswrapper[7620]: I0318 08:56:58.483155 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:56:59.483010 master-0 kubenswrapper[7620]: I0318 08:56:59.482882 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:56:59.483010 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld Mar 18 08:56:59.483010 master-0 kubenswrapper[7620]: [+]process-running ok Mar 18 08:56:59.483010 master-0 kubenswrapper[7620]: healthz check failed Mar 18 08:56:59.484337 master-0 kubenswrapper[7620]: I0318 08:56:59.483048 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:57:00.482317 
master-0 kubenswrapper[7620]: I0318 08:57:00.482218 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:57:00.482317 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld Mar 18 08:57:00.482317 master-0 kubenswrapper[7620]: [+]process-running ok Mar 18 08:57:00.482317 master-0 kubenswrapper[7620]: healthz check failed Mar 18 08:57:00.483034 master-0 kubenswrapper[7620]: I0318 08:57:00.482375 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:57:01.483206 master-0 kubenswrapper[7620]: I0318 08:57:01.483135 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:57:01.483206 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld Mar 18 08:57:01.483206 master-0 kubenswrapper[7620]: [+]process-running ok Mar 18 08:57:01.483206 master-0 kubenswrapper[7620]: healthz check failed Mar 18 08:57:01.484258 master-0 kubenswrapper[7620]: I0318 08:57:01.483222 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:57:02.482850 master-0 kubenswrapper[7620]: I0318 08:57:02.482723 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe 
failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:57:02.482850 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld Mar 18 08:57:02.482850 master-0 kubenswrapper[7620]: [+]process-running ok Mar 18 08:57:02.482850 master-0 kubenswrapper[7620]: healthz check failed Mar 18 08:57:02.483915 master-0 kubenswrapper[7620]: I0318 08:57:02.482896 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:57:03.481758 master-0 kubenswrapper[7620]: I0318 08:57:03.481686 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:57:03.481758 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld Mar 18 08:57:03.481758 master-0 kubenswrapper[7620]: [+]process-running ok Mar 18 08:57:03.481758 master-0 kubenswrapper[7620]: healthz check failed Mar 18 08:57:03.482258 master-0 kubenswrapper[7620]: I0318 08:57:03.481777 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:57:04.328632 master-0 kubenswrapper[7620]: I0318 08:57:04.328575 7620 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-vlc2m"] Mar 18 08:57:04.330686 master-0 kubenswrapper[7620]: I0318 08:57:04.330658 7620 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-vlc2m" Mar 18 08:57:04.333806 master-0 kubenswrapper[7620]: I0318 08:57:04.333745 7620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-sysctl-allowlist" Mar 18 08:57:04.334377 master-0 kubenswrapper[7620]: I0318 08:57:04.334336 7620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-r9zx4" Mar 18 08:57:04.413521 master-0 kubenswrapper[7620]: I0318 08:57:04.413471 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/c0bbb9bd-cd6d-4f0a-9f39-09d5c473c301-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-vlc2m\" (UID: \"c0bbb9bd-cd6d-4f0a-9f39-09d5c473c301\") " pod="openshift-multus/cni-sysctl-allowlist-ds-vlc2m" Mar 18 08:57:04.413793 master-0 kubenswrapper[7620]: I0318 08:57:04.413770 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/c0bbb9bd-cd6d-4f0a-9f39-09d5c473c301-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-vlc2m\" (UID: \"c0bbb9bd-cd6d-4f0a-9f39-09d5c473c301\") " pod="openshift-multus/cni-sysctl-allowlist-ds-vlc2m" Mar 18 08:57:04.413932 master-0 kubenswrapper[7620]: I0318 08:57:04.413917 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lg5lt\" (UniqueName: \"kubernetes.io/projected/c0bbb9bd-cd6d-4f0a-9f39-09d5c473c301-kube-api-access-lg5lt\") pod \"cni-sysctl-allowlist-ds-vlc2m\" (UID: \"c0bbb9bd-cd6d-4f0a-9f39-09d5c473c301\") " pod="openshift-multus/cni-sysctl-allowlist-ds-vlc2m" Mar 18 08:57:04.414091 master-0 kubenswrapper[7620]: I0318 08:57:04.414077 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ready\" (UniqueName: 
\"kubernetes.io/empty-dir/c0bbb9bd-cd6d-4f0a-9f39-09d5c473c301-ready\") pod \"cni-sysctl-allowlist-ds-vlc2m\" (UID: \"c0bbb9bd-cd6d-4f0a-9f39-09d5c473c301\") " pod="openshift-multus/cni-sysctl-allowlist-ds-vlc2m" Mar 18 08:57:04.481533 master-0 kubenswrapper[7620]: I0318 08:57:04.481463 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:57:04.481533 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld Mar 18 08:57:04.481533 master-0 kubenswrapper[7620]: [+]process-running ok Mar 18 08:57:04.481533 master-0 kubenswrapper[7620]: healthz check failed Mar 18 08:57:04.481809 master-0 kubenswrapper[7620]: I0318 08:57:04.481581 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:57:04.515197 master-0 kubenswrapper[7620]: I0318 08:57:04.515127 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/c0bbb9bd-cd6d-4f0a-9f39-09d5c473c301-ready\") pod \"cni-sysctl-allowlist-ds-vlc2m\" (UID: \"c0bbb9bd-cd6d-4f0a-9f39-09d5c473c301\") " pod="openshift-multus/cni-sysctl-allowlist-ds-vlc2m" Mar 18 08:57:04.515197 master-0 kubenswrapper[7620]: I0318 08:57:04.515184 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/c0bbb9bd-cd6d-4f0a-9f39-09d5c473c301-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-vlc2m\" (UID: \"c0bbb9bd-cd6d-4f0a-9f39-09d5c473c301\") " pod="openshift-multus/cni-sysctl-allowlist-ds-vlc2m" Mar 18 08:57:04.515197 master-0 kubenswrapper[7620]: I0318 08:57:04.515214 7620 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/c0bbb9bd-cd6d-4f0a-9f39-09d5c473c301-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-vlc2m\" (UID: \"c0bbb9bd-cd6d-4f0a-9f39-09d5c473c301\") " pod="openshift-multus/cni-sysctl-allowlist-ds-vlc2m" Mar 18 08:57:04.515623 master-0 kubenswrapper[7620]: I0318 08:57:04.515232 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lg5lt\" (UniqueName: \"kubernetes.io/projected/c0bbb9bd-cd6d-4f0a-9f39-09d5c473c301-kube-api-access-lg5lt\") pod \"cni-sysctl-allowlist-ds-vlc2m\" (UID: \"c0bbb9bd-cd6d-4f0a-9f39-09d5c473c301\") " pod="openshift-multus/cni-sysctl-allowlist-ds-vlc2m" Mar 18 08:57:04.515939 master-0 kubenswrapper[7620]: I0318 08:57:04.515907 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/c0bbb9bd-cd6d-4f0a-9f39-09d5c473c301-ready\") pod \"cni-sysctl-allowlist-ds-vlc2m\" (UID: \"c0bbb9bd-cd6d-4f0a-9f39-09d5c473c301\") " pod="openshift-multus/cni-sysctl-allowlist-ds-vlc2m" Mar 18 08:57:04.516081 master-0 kubenswrapper[7620]: I0318 08:57:04.516028 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/c0bbb9bd-cd6d-4f0a-9f39-09d5c473c301-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-vlc2m\" (UID: \"c0bbb9bd-cd6d-4f0a-9f39-09d5c473c301\") " pod="openshift-multus/cni-sysctl-allowlist-ds-vlc2m" Mar 18 08:57:04.516701 master-0 kubenswrapper[7620]: I0318 08:57:04.516658 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/c0bbb9bd-cd6d-4f0a-9f39-09d5c473c301-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-vlc2m\" (UID: \"c0bbb9bd-cd6d-4f0a-9f39-09d5c473c301\") " pod="openshift-multus/cni-sysctl-allowlist-ds-vlc2m" Mar 18 08:57:04.530935 master-0 
kubenswrapper[7620]: I0318 08:57:04.530882 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lg5lt\" (UniqueName: \"kubernetes.io/projected/c0bbb9bd-cd6d-4f0a-9f39-09d5c473c301-kube-api-access-lg5lt\") pod \"cni-sysctl-allowlist-ds-vlc2m\" (UID: \"c0bbb9bd-cd6d-4f0a-9f39-09d5c473c301\") " pod="openshift-multus/cni-sysctl-allowlist-ds-vlc2m" Mar 18 08:57:04.649751 master-0 kubenswrapper[7620]: I0318 08:57:04.649569 7620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-vlc2m" Mar 18 08:57:04.887763 master-0 kubenswrapper[7620]: I0318 08:57:04.887685 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-vlc2m" event={"ID":"c0bbb9bd-cd6d-4f0a-9f39-09d5c473c301","Type":"ContainerStarted","Data":"1d6d3be968381e4a2c751988f41503339fd8e8b9a7db9e854b1829b80d4f3b1a"} Mar 18 08:57:05.482150 master-0 kubenswrapper[7620]: I0318 08:57:05.482075 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:57:05.482150 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld Mar 18 08:57:05.482150 master-0 kubenswrapper[7620]: [+]process-running ok Mar 18 08:57:05.482150 master-0 kubenswrapper[7620]: healthz check failed Mar 18 08:57:05.482889 master-0 kubenswrapper[7620]: I0318 08:57:05.482181 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:57:05.897931 master-0 kubenswrapper[7620]: I0318 08:57:05.897791 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-vlc2m" 
event={"ID":"c0bbb9bd-cd6d-4f0a-9f39-09d5c473c301","Type":"ContainerStarted","Data":"13138588541c4d309184d64b25bfc0c3f2525dc081e756b25b1ef3769ac44e89"} Mar 18 08:57:05.898380 master-0 kubenswrapper[7620]: I0318 08:57:05.898328 7620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-multus/cni-sysctl-allowlist-ds-vlc2m" Mar 18 08:57:05.927142 master-0 kubenswrapper[7620]: I0318 08:57:05.927013 7620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-multus/cni-sysctl-allowlist-ds-vlc2m" Mar 18 08:57:05.932790 master-0 kubenswrapper[7620]: I0318 08:57:05.932688 7620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/cni-sysctl-allowlist-ds-vlc2m" podStartSLOduration=1.932658364 podStartE2EDuration="1.932658364s" podCreationTimestamp="2026-03-18 08:57:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 08:57:05.924699037 +0000 UTC m=+489.919480789" watchObservedRunningTime="2026-03-18 08:57:05.932658364 +0000 UTC m=+489.927440156" Mar 18 08:57:06.320650 master-0 kubenswrapper[7620]: I0318 08:57:06.320569 7620 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-vlc2m"] Mar 18 08:57:06.486167 master-0 kubenswrapper[7620]: I0318 08:57:06.486084 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:57:06.486167 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld Mar 18 08:57:06.486167 master-0 kubenswrapper[7620]: [+]process-running ok Mar 18 08:57:06.486167 master-0 kubenswrapper[7620]: healthz check failed Mar 18 08:57:06.486878 master-0 kubenswrapper[7620]: I0318 08:57:06.486219 7620 prober.go:107] "Probe failed" 
probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:57:07.482002 master-0 kubenswrapper[7620]: I0318 08:57:07.481928 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:57:07.482002 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld Mar 18 08:57:07.482002 master-0 kubenswrapper[7620]: [+]process-running ok Mar 18 08:57:07.482002 master-0 kubenswrapper[7620]: healthz check failed Mar 18 08:57:07.482393 master-0 kubenswrapper[7620]: I0318 08:57:07.482036 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:57:07.912233 master-0 kubenswrapper[7620]: I0318 08:57:07.912150 7620 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-multus/cni-sysctl-allowlist-ds-vlc2m" podUID="c0bbb9bd-cd6d-4f0a-9f39-09d5c473c301" containerName="kube-multus-additional-cni-plugins" containerID="cri-o://13138588541c4d309184d64b25bfc0c3f2525dc081e756b25b1ef3769ac44e89" gracePeriod=30 Mar 18 08:57:08.482479 master-0 kubenswrapper[7620]: I0318 08:57:08.482423 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:57:08.482479 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld Mar 18 08:57:08.482479 master-0 kubenswrapper[7620]: [+]process-running ok 
Mar 18 08:57:08.482479 master-0 kubenswrapper[7620]: healthz check failed Mar 18 08:57:08.482968 master-0 kubenswrapper[7620]: I0318 08:57:08.482928 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:57:09.482087 master-0 kubenswrapper[7620]: I0318 08:57:09.481967 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:57:09.482087 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld Mar 18 08:57:09.482087 master-0 kubenswrapper[7620]: [+]process-running ok Mar 18 08:57:09.482087 master-0 kubenswrapper[7620]: healthz check failed Mar 18 08:57:09.482087 master-0 kubenswrapper[7620]: I0318 08:57:09.482097 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:57:10.481466 master-0 kubenswrapper[7620]: I0318 08:57:10.481400 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:57:10.481466 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld Mar 18 08:57:10.481466 master-0 kubenswrapper[7620]: [+]process-running ok Mar 18 08:57:10.481466 master-0 kubenswrapper[7620]: healthz check failed Mar 18 08:57:10.483215 master-0 kubenswrapper[7620]: I0318 08:57:10.482448 7620 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:57:11.464514 master-0 kubenswrapper[7620]: I0318 08:57:11.464448 7620 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/telemeter-client-5d4d5995f-s5dw8"] Mar 18 08:57:11.465919 master-0 kubenswrapper[7620]: I0318 08:57:11.465890 7620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/telemeter-client-5d4d5995f-s5dw8" Mar 18 08:57:11.468994 master-0 kubenswrapper[7620]: I0318 08:57:11.468944 7620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client" Mar 18 08:57:11.469070 master-0 kubenswrapper[7620]: I0318 08:57:11.469041 7620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client-tls" Mar 18 08:57:11.469211 master-0 kubenswrapper[7620]: I0318 08:57:11.469188 7620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client-kube-rbac-proxy-config" Mar 18 08:57:11.469256 master-0 kubenswrapper[7620]: I0318 08:57:11.469234 7620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"federate-client-certs" Mar 18 08:57:11.469489 master-0 kubenswrapper[7620]: I0318 08:57:11.469455 7620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client-dockercfg-xs8t8" Mar 18 08:57:11.470303 master-0 kubenswrapper[7620]: I0318 08:57:11.470275 7620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemeter-client-serving-certs-ca-bundle" Mar 18 08:57:11.480546 master-0 kubenswrapper[7620]: I0318 08:57:11.480480 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure 
output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:57:11.480546 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld Mar 18 08:57:11.480546 master-0 kubenswrapper[7620]: [+]process-running ok Mar 18 08:57:11.480546 master-0 kubenswrapper[7620]: healthz check failed Mar 18 08:57:11.480836 master-0 kubenswrapper[7620]: I0318 08:57:11.480554 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:57:11.484221 master-0 kubenswrapper[7620]: I0318 08:57:11.484183 7620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemeter-trusted-ca-bundle-8i12ta5c71j38" Mar 18 08:57:11.485836 master-0 kubenswrapper[7620]: I0318 08:57:11.485786 7620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/telemeter-client-5d4d5995f-s5dw8"] Mar 18 08:57:11.536711 master-0 kubenswrapper[7620]: I0318 08:57:11.536631 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"federate-client-tls\" (UniqueName: \"kubernetes.io/secret/e5ae1886-f90c-49f4-bf08-055b55dd785a-federate-client-tls\") pod \"telemeter-client-5d4d5995f-s5dw8\" (UID: \"e5ae1886-f90c-49f4-bf08-055b55dd785a\") " pod="openshift-monitoring/telemeter-client-5d4d5995f-s5dw8" Mar 18 08:57:11.536711 master-0 kubenswrapper[7620]: I0318 08:57:11.536699 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-telemeter-client-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/e5ae1886-f90c-49f4-bf08-055b55dd785a-secret-telemeter-client-kube-rbac-proxy-config\") pod \"telemeter-client-5d4d5995f-s5dw8\" (UID: \"e5ae1886-f90c-49f4-bf08-055b55dd785a\") " 
pod="openshift-monitoring/telemeter-client-5d4d5995f-s5dw8" Mar 18 08:57:11.536711 master-0 kubenswrapper[7620]: I0318 08:57:11.536722 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/e5ae1886-f90c-49f4-bf08-055b55dd785a-metrics-client-ca\") pod \"telemeter-client-5d4d5995f-s5dw8\" (UID: \"e5ae1886-f90c-49f4-bf08-055b55dd785a\") " pod="openshift-monitoring/telemeter-client-5d4d5995f-s5dw8" Mar 18 08:57:11.537140 master-0 kubenswrapper[7620]: I0318 08:57:11.536741 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e5ae1886-f90c-49f4-bf08-055b55dd785a-serving-certs-ca-bundle\") pod \"telemeter-client-5d4d5995f-s5dw8\" (UID: \"e5ae1886-f90c-49f4-bf08-055b55dd785a\") " pod="openshift-monitoring/telemeter-client-5d4d5995f-s5dw8" Mar 18 08:57:11.537140 master-0 kubenswrapper[7620]: I0318 08:57:11.536818 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemeter-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e5ae1886-f90c-49f4-bf08-055b55dd785a-telemeter-trusted-ca-bundle\") pod \"telemeter-client-5d4d5995f-s5dw8\" (UID: \"e5ae1886-f90c-49f4-bf08-055b55dd785a\") " pod="openshift-monitoring/telemeter-client-5d4d5995f-s5dw8" Mar 18 08:57:11.537140 master-0 kubenswrapper[7620]: I0318 08:57:11.536994 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4fql4\" (UniqueName: \"kubernetes.io/projected/e5ae1886-f90c-49f4-bf08-055b55dd785a-kube-api-access-4fql4\") pod \"telemeter-client-5d4d5995f-s5dw8\" (UID: \"e5ae1886-f90c-49f4-bf08-055b55dd785a\") " pod="openshift-monitoring/telemeter-client-5d4d5995f-s5dw8" Mar 18 08:57:11.537140 master-0 kubenswrapper[7620]: I0318 08:57:11.537055 7620 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"telemeter-client-tls\" (UniqueName: \"kubernetes.io/secret/e5ae1886-f90c-49f4-bf08-055b55dd785a-telemeter-client-tls\") pod \"telemeter-client-5d4d5995f-s5dw8\" (UID: \"e5ae1886-f90c-49f4-bf08-055b55dd785a\") " pod="openshift-monitoring/telemeter-client-5d4d5995f-s5dw8" Mar 18 08:57:11.537140 master-0 kubenswrapper[7620]: I0318 08:57:11.537146 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-telemeter-client\" (UniqueName: \"kubernetes.io/secret/e5ae1886-f90c-49f4-bf08-055b55dd785a-secret-telemeter-client\") pod \"telemeter-client-5d4d5995f-s5dw8\" (UID: \"e5ae1886-f90c-49f4-bf08-055b55dd785a\") " pod="openshift-monitoring/telemeter-client-5d4d5995f-s5dw8" Mar 18 08:57:11.639011 master-0 kubenswrapper[7620]: I0318 08:57:11.638935 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4fql4\" (UniqueName: \"kubernetes.io/projected/e5ae1886-f90c-49f4-bf08-055b55dd785a-kube-api-access-4fql4\") pod \"telemeter-client-5d4d5995f-s5dw8\" (UID: \"e5ae1886-f90c-49f4-bf08-055b55dd785a\") " pod="openshift-monitoring/telemeter-client-5d4d5995f-s5dw8" Mar 18 08:57:11.639342 master-0 kubenswrapper[7620]: I0318 08:57:11.639043 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemeter-client-tls\" (UniqueName: \"kubernetes.io/secret/e5ae1886-f90c-49f4-bf08-055b55dd785a-telemeter-client-tls\") pod \"telemeter-client-5d4d5995f-s5dw8\" (UID: \"e5ae1886-f90c-49f4-bf08-055b55dd785a\") " pod="openshift-monitoring/telemeter-client-5d4d5995f-s5dw8" Mar 18 08:57:11.639342 master-0 kubenswrapper[7620]: I0318 08:57:11.639117 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-telemeter-client\" (UniqueName: \"kubernetes.io/secret/e5ae1886-f90c-49f4-bf08-055b55dd785a-secret-telemeter-client\") pod \"telemeter-client-5d4d5995f-s5dw8\" (UID: 
\"e5ae1886-f90c-49f4-bf08-055b55dd785a\") " pod="openshift-monitoring/telemeter-client-5d4d5995f-s5dw8" Mar 18 08:57:11.639342 master-0 kubenswrapper[7620]: I0318 08:57:11.639171 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"federate-client-tls\" (UniqueName: \"kubernetes.io/secret/e5ae1886-f90c-49f4-bf08-055b55dd785a-federate-client-tls\") pod \"telemeter-client-5d4d5995f-s5dw8\" (UID: \"e5ae1886-f90c-49f4-bf08-055b55dd785a\") " pod="openshift-monitoring/telemeter-client-5d4d5995f-s5dw8" Mar 18 08:57:11.639342 master-0 kubenswrapper[7620]: I0318 08:57:11.639222 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-telemeter-client-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/e5ae1886-f90c-49f4-bf08-055b55dd785a-secret-telemeter-client-kube-rbac-proxy-config\") pod \"telemeter-client-5d4d5995f-s5dw8\" (UID: \"e5ae1886-f90c-49f4-bf08-055b55dd785a\") " pod="openshift-monitoring/telemeter-client-5d4d5995f-s5dw8" Mar 18 08:57:11.639342 master-0 kubenswrapper[7620]: I0318 08:57:11.639272 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/e5ae1886-f90c-49f4-bf08-055b55dd785a-metrics-client-ca\") pod \"telemeter-client-5d4d5995f-s5dw8\" (UID: \"e5ae1886-f90c-49f4-bf08-055b55dd785a\") " pod="openshift-monitoring/telemeter-client-5d4d5995f-s5dw8" Mar 18 08:57:11.639342 master-0 kubenswrapper[7620]: I0318 08:57:11.639314 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e5ae1886-f90c-49f4-bf08-055b55dd785a-serving-certs-ca-bundle\") pod \"telemeter-client-5d4d5995f-s5dw8\" (UID: \"e5ae1886-f90c-49f4-bf08-055b55dd785a\") " pod="openshift-monitoring/telemeter-client-5d4d5995f-s5dw8" Mar 18 08:57:11.639558 master-0 kubenswrapper[7620]: I0318 08:57:11.639359 7620 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"telemeter-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e5ae1886-f90c-49f4-bf08-055b55dd785a-telemeter-trusted-ca-bundle\") pod \"telemeter-client-5d4d5995f-s5dw8\" (UID: \"e5ae1886-f90c-49f4-bf08-055b55dd785a\") " pod="openshift-monitoring/telemeter-client-5d4d5995f-s5dw8" Mar 18 08:57:11.641767 master-0 kubenswrapper[7620]: I0318 08:57:11.641718 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemeter-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e5ae1886-f90c-49f4-bf08-055b55dd785a-telemeter-trusted-ca-bundle\") pod \"telemeter-client-5d4d5995f-s5dw8\" (UID: \"e5ae1886-f90c-49f4-bf08-055b55dd785a\") " pod="openshift-monitoring/telemeter-client-5d4d5995f-s5dw8" Mar 18 08:57:11.643087 master-0 kubenswrapper[7620]: I0318 08:57:11.643036 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/e5ae1886-f90c-49f4-bf08-055b55dd785a-metrics-client-ca\") pod \"telemeter-client-5d4d5995f-s5dw8\" (UID: \"e5ae1886-f90c-49f4-bf08-055b55dd785a\") " pod="openshift-monitoring/telemeter-client-5d4d5995f-s5dw8" Mar 18 08:57:11.643426 master-0 kubenswrapper[7620]: I0318 08:57:11.643388 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e5ae1886-f90c-49f4-bf08-055b55dd785a-serving-certs-ca-bundle\") pod \"telemeter-client-5d4d5995f-s5dw8\" (UID: \"e5ae1886-f90c-49f4-bf08-055b55dd785a\") " pod="openshift-monitoring/telemeter-client-5d4d5995f-s5dw8" Mar 18 08:57:11.646524 master-0 kubenswrapper[7620]: I0318 08:57:11.646496 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-telemeter-client-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/e5ae1886-f90c-49f4-bf08-055b55dd785a-secret-telemeter-client-kube-rbac-proxy-config\") pod \"telemeter-client-5d4d5995f-s5dw8\" (UID: 
\"e5ae1886-f90c-49f4-bf08-055b55dd785a\") " pod="openshift-monitoring/telemeter-client-5d4d5995f-s5dw8" Mar 18 08:57:11.647134 master-0 kubenswrapper[7620]: I0318 08:57:11.647096 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"federate-client-tls\" (UniqueName: \"kubernetes.io/secret/e5ae1886-f90c-49f4-bf08-055b55dd785a-federate-client-tls\") pod \"telemeter-client-5d4d5995f-s5dw8\" (UID: \"e5ae1886-f90c-49f4-bf08-055b55dd785a\") " pod="openshift-monitoring/telemeter-client-5d4d5995f-s5dw8" Mar 18 08:57:11.649212 master-0 kubenswrapper[7620]: I0318 08:57:11.649150 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-telemeter-client\" (UniqueName: \"kubernetes.io/secret/e5ae1886-f90c-49f4-bf08-055b55dd785a-secret-telemeter-client\") pod \"telemeter-client-5d4d5995f-s5dw8\" (UID: \"e5ae1886-f90c-49f4-bf08-055b55dd785a\") " pod="openshift-monitoring/telemeter-client-5d4d5995f-s5dw8" Mar 18 08:57:11.650883 master-0 kubenswrapper[7620]: I0318 08:57:11.650446 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemeter-client-tls\" (UniqueName: \"kubernetes.io/secret/e5ae1886-f90c-49f4-bf08-055b55dd785a-telemeter-client-tls\") pod \"telemeter-client-5d4d5995f-s5dw8\" (UID: \"e5ae1886-f90c-49f4-bf08-055b55dd785a\") " pod="openshift-monitoring/telemeter-client-5d4d5995f-s5dw8" Mar 18 08:57:11.662492 master-0 kubenswrapper[7620]: I0318 08:57:11.662433 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4fql4\" (UniqueName: \"kubernetes.io/projected/e5ae1886-f90c-49f4-bf08-055b55dd785a-kube-api-access-4fql4\") pod \"telemeter-client-5d4d5995f-s5dw8\" (UID: \"e5ae1886-f90c-49f4-bf08-055b55dd785a\") " pod="openshift-monitoring/telemeter-client-5d4d5995f-s5dw8" Mar 18 08:57:11.787466 master-0 kubenswrapper[7620]: I0318 08:57:11.787387 7620 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/telemeter-client-5d4d5995f-s5dw8" Mar 18 08:57:12.278659 master-0 kubenswrapper[7620]: I0318 08:57:12.278604 7620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/telemeter-client-5d4d5995f-s5dw8"] Mar 18 08:57:12.288554 master-0 kubenswrapper[7620]: W0318 08:57:12.288491 7620 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode5ae1886_f90c_49f4_bf08_055b55dd785a.slice/crio-f3e26fe3d2ca6df6dc0161bddc1b304ebbc7fa75a6def1dd10d9bdbbd5e6b79d WatchSource:0}: Error finding container f3e26fe3d2ca6df6dc0161bddc1b304ebbc7fa75a6def1dd10d9bdbbd5e6b79d: Status 404 returned error can't find the container with id f3e26fe3d2ca6df6dc0161bddc1b304ebbc7fa75a6def1dd10d9bdbbd5e6b79d Mar 18 08:57:12.291677 master-0 kubenswrapper[7620]: I0318 08:57:12.291634 7620 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 18 08:57:12.482831 master-0 kubenswrapper[7620]: I0318 08:57:12.482702 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:57:12.482831 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld Mar 18 08:57:12.482831 master-0 kubenswrapper[7620]: [+]process-running ok Mar 18 08:57:12.482831 master-0 kubenswrapper[7620]: healthz check failed Mar 18 08:57:12.483133 master-0 kubenswrapper[7620]: I0318 08:57:12.482918 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:57:12.671762 master-0 kubenswrapper[7620]: I0318 08:57:12.671562 7620 kubelet.go:2421] "SyncLoop 
ADD" source="api" pods=["openshift-kube-controller-manager/installer-3-master-0"] Mar 18 08:57:12.672846 master-0 kubenswrapper[7620]: I0318 08:57:12.672792 7620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-3-master-0" Mar 18 08:57:12.676745 master-0 kubenswrapper[7620]: I0318 08:57:12.676649 7620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-v6k2v" Mar 18 08:57:12.678126 master-0 kubenswrapper[7620]: I0318 08:57:12.678086 7620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Mar 18 08:57:12.693355 master-0 kubenswrapper[7620]: I0318 08:57:12.693276 7620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-3-master-0"] Mar 18 08:57:12.755498 master-0 kubenswrapper[7620]: I0318 08:57:12.755424 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/62a1fcda-ce2f-4d14-ab37-10a21e30fc30-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"62a1fcda-ce2f-4d14-ab37-10a21e30fc30\") " pod="openshift-kube-controller-manager/installer-3-master-0" Mar 18 08:57:12.755835 master-0 kubenswrapper[7620]: I0318 08:57:12.755515 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/62a1fcda-ce2f-4d14-ab37-10a21e30fc30-kube-api-access\") pod \"installer-3-master-0\" (UID: \"62a1fcda-ce2f-4d14-ab37-10a21e30fc30\") " pod="openshift-kube-controller-manager/installer-3-master-0" Mar 18 08:57:12.755835 master-0 kubenswrapper[7620]: I0318 08:57:12.755542 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: 
\"kubernetes.io/host-path/62a1fcda-ce2f-4d14-ab37-10a21e30fc30-var-lock\") pod \"installer-3-master-0\" (UID: \"62a1fcda-ce2f-4d14-ab37-10a21e30fc30\") " pod="openshift-kube-controller-manager/installer-3-master-0" Mar 18 08:57:12.857346 master-0 kubenswrapper[7620]: I0318 08:57:12.857247 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/62a1fcda-ce2f-4d14-ab37-10a21e30fc30-kube-api-access\") pod \"installer-3-master-0\" (UID: \"62a1fcda-ce2f-4d14-ab37-10a21e30fc30\") " pod="openshift-kube-controller-manager/installer-3-master-0" Mar 18 08:57:12.857346 master-0 kubenswrapper[7620]: I0318 08:57:12.857356 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/62a1fcda-ce2f-4d14-ab37-10a21e30fc30-var-lock\") pod \"installer-3-master-0\" (UID: \"62a1fcda-ce2f-4d14-ab37-10a21e30fc30\") " pod="openshift-kube-controller-manager/installer-3-master-0" Mar 18 08:57:12.857746 master-0 kubenswrapper[7620]: I0318 08:57:12.857532 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/62a1fcda-ce2f-4d14-ab37-10a21e30fc30-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"62a1fcda-ce2f-4d14-ab37-10a21e30fc30\") " pod="openshift-kube-controller-manager/installer-3-master-0" Mar 18 08:57:12.857746 master-0 kubenswrapper[7620]: I0318 08:57:12.857561 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/62a1fcda-ce2f-4d14-ab37-10a21e30fc30-var-lock\") pod \"installer-3-master-0\" (UID: \"62a1fcda-ce2f-4d14-ab37-10a21e30fc30\") " pod="openshift-kube-controller-manager/installer-3-master-0" Mar 18 08:57:12.857746 master-0 kubenswrapper[7620]: I0318 08:57:12.857668 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: 
\"kubernetes.io/host-path/62a1fcda-ce2f-4d14-ab37-10a21e30fc30-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"62a1fcda-ce2f-4d14-ab37-10a21e30fc30\") " pod="openshift-kube-controller-manager/installer-3-master-0" Mar 18 08:57:12.886217 master-0 kubenswrapper[7620]: I0318 08:57:12.886132 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/62a1fcda-ce2f-4d14-ab37-10a21e30fc30-kube-api-access\") pod \"installer-3-master-0\" (UID: \"62a1fcda-ce2f-4d14-ab37-10a21e30fc30\") " pod="openshift-kube-controller-manager/installer-3-master-0" Mar 18 08:57:12.952901 master-0 kubenswrapper[7620]: I0318 08:57:12.952507 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/telemeter-client-5d4d5995f-s5dw8" event={"ID":"e5ae1886-f90c-49f4-bf08-055b55dd785a","Type":"ContainerStarted","Data":"f3e26fe3d2ca6df6dc0161bddc1b304ebbc7fa75a6def1dd10d9bdbbd5e6b79d"} Mar 18 08:57:13.003333 master-0 kubenswrapper[7620]: I0318 08:57:13.003233 7620 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/installer-3-master-0" Mar 18 08:57:13.469253 master-0 kubenswrapper[7620]: I0318 08:57:13.469168 7620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-3-master-0"] Mar 18 08:57:13.473712 master-0 kubenswrapper[7620]: W0318 08:57:13.473637 7620 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod62a1fcda_ce2f_4d14_ab37_10a21e30fc30.slice/crio-13017e08077deeefc07c7fe44f54a64a8b6b49173dc26b6f0df3026587c8b3ff WatchSource:0}: Error finding container 13017e08077deeefc07c7fe44f54a64a8b6b49173dc26b6f0df3026587c8b3ff: Status 404 returned error can't find the container with id 13017e08077deeefc07c7fe44f54a64a8b6b49173dc26b6f0df3026587c8b3ff Mar 18 08:57:13.482296 master-0 kubenswrapper[7620]: I0318 08:57:13.482219 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:57:13.482296 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld Mar 18 08:57:13.482296 master-0 kubenswrapper[7620]: [+]process-running ok Mar 18 08:57:13.482296 master-0 kubenswrapper[7620]: healthz check failed Mar 18 08:57:13.482615 master-0 kubenswrapper[7620]: I0318 08:57:13.482299 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:57:13.906159 master-0 kubenswrapper[7620]: I0318 08:57:13.906057 7620 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-58c9f8fc64-zgrts"] Mar 18 08:57:13.910914 master-0 kubenswrapper[7620]: I0318 08:57:13.907301 7620 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openshift-multus/multus-admission-controller-58c9f8fc64-zgrts" Mar 18 08:57:13.910914 master-0 kubenswrapper[7620]: I0318 08:57:13.910231 7620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-svhdx" Mar 18 08:57:13.934972 master-0 kubenswrapper[7620]: I0318 08:57:13.934670 7620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-58c9f8fc64-zgrts"] Mar 18 08:57:13.982596 master-0 kubenswrapper[7620]: I0318 08:57:13.982516 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-3-master-0" event={"ID":"62a1fcda-ce2f-4d14-ab37-10a21e30fc30","Type":"ContainerStarted","Data":"08088f866063b071982a4841fdee97faaded7e31cf8cc32d7754eb48aa28135c"} Mar 18 08:57:13.982596 master-0 kubenswrapper[7620]: I0318 08:57:13.982588 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-3-master-0" event={"ID":"62a1fcda-ce2f-4d14-ab37-10a21e30fc30","Type":"ContainerStarted","Data":"13017e08077deeefc07c7fe44f54a64a8b6b49173dc26b6f0df3026587c8b3ff"} Mar 18 08:57:13.985913 master-0 kubenswrapper[7620]: I0318 08:57:13.983759 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/e0bb044f-5a4e-4981-8084-91348ce1a56a-webhook-certs\") pod \"multus-admission-controller-58c9f8fc64-zgrts\" (UID: \"e0bb044f-5a4e-4981-8084-91348ce1a56a\") " pod="openshift-multus/multus-admission-controller-58c9f8fc64-zgrts" Mar 18 08:57:13.985913 master-0 kubenswrapper[7620]: I0318 08:57:13.984159 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ks4jl\" (UniqueName: \"kubernetes.io/projected/e0bb044f-5a4e-4981-8084-91348ce1a56a-kube-api-access-ks4jl\") pod \"multus-admission-controller-58c9f8fc64-zgrts\" (UID: 
\"e0bb044f-5a4e-4981-8084-91348ce1a56a\") " pod="openshift-multus/multus-admission-controller-58c9f8fc64-zgrts" Mar 18 08:57:14.006068 master-0 kubenswrapper[7620]: I0318 08:57:14.005960 7620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/installer-3-master-0" podStartSLOduration=2.005937328 podStartE2EDuration="2.005937328s" podCreationTimestamp="2026-03-18 08:57:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 08:57:14.004394236 +0000 UTC m=+497.999175988" watchObservedRunningTime="2026-03-18 08:57:14.005937328 +0000 UTC m=+498.000719090" Mar 18 08:57:14.086137 master-0 kubenswrapper[7620]: I0318 08:57:14.086034 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/e0bb044f-5a4e-4981-8084-91348ce1a56a-webhook-certs\") pod \"multus-admission-controller-58c9f8fc64-zgrts\" (UID: \"e0bb044f-5a4e-4981-8084-91348ce1a56a\") " pod="openshift-multus/multus-admission-controller-58c9f8fc64-zgrts" Mar 18 08:57:14.086137 master-0 kubenswrapper[7620]: I0318 08:57:14.086115 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ks4jl\" (UniqueName: \"kubernetes.io/projected/e0bb044f-5a4e-4981-8084-91348ce1a56a-kube-api-access-ks4jl\") pod \"multus-admission-controller-58c9f8fc64-zgrts\" (UID: \"e0bb044f-5a4e-4981-8084-91348ce1a56a\") " pod="openshift-multus/multus-admission-controller-58c9f8fc64-zgrts" Mar 18 08:57:14.090678 master-0 kubenswrapper[7620]: I0318 08:57:14.090619 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/e0bb044f-5a4e-4981-8084-91348ce1a56a-webhook-certs\") pod \"multus-admission-controller-58c9f8fc64-zgrts\" (UID: \"e0bb044f-5a4e-4981-8084-91348ce1a56a\") " 
pod="openshift-multus/multus-admission-controller-58c9f8fc64-zgrts" Mar 18 08:57:14.105899 master-0 kubenswrapper[7620]: I0318 08:57:14.105818 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ks4jl\" (UniqueName: \"kubernetes.io/projected/e0bb044f-5a4e-4981-8084-91348ce1a56a-kube-api-access-ks4jl\") pod \"multus-admission-controller-58c9f8fc64-zgrts\" (UID: \"e0bb044f-5a4e-4981-8084-91348ce1a56a\") " pod="openshift-multus/multus-admission-controller-58c9f8fc64-zgrts" Mar 18 08:57:14.233548 master-0 kubenswrapper[7620]: I0318 08:57:14.233483 7620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-58c9f8fc64-zgrts" Mar 18 08:57:14.482526 master-0 kubenswrapper[7620]: I0318 08:57:14.482448 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:57:14.482526 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld Mar 18 08:57:14.482526 master-0 kubenswrapper[7620]: [+]process-running ok Mar 18 08:57:14.482526 master-0 kubenswrapper[7620]: healthz check failed Mar 18 08:57:14.482788 master-0 kubenswrapper[7620]: I0318 08:57:14.482563 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:57:14.653034 master-0 kubenswrapper[7620]: E0318 08:57:14.652850 7620 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="13138588541c4d309184d64b25bfc0c3f2525dc081e756b25b1ef3769ac44e89" 
cmd=["/bin/bash","-c","test -f /ready/ready"] Mar 18 08:57:14.654358 master-0 kubenswrapper[7620]: E0318 08:57:14.654300 7620 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="13138588541c4d309184d64b25bfc0c3f2525dc081e756b25b1ef3769ac44e89" cmd=["/bin/bash","-c","test -f /ready/ready"] Mar 18 08:57:14.655666 master-0 kubenswrapper[7620]: E0318 08:57:14.655596 7620 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="13138588541c4d309184d64b25bfc0c3f2525dc081e756b25b1ef3769ac44e89" cmd=["/bin/bash","-c","test -f /ready/ready"] Mar 18 08:57:14.655726 master-0 kubenswrapper[7620]: E0318 08:57:14.655691 7620 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-vlc2m" podUID="c0bbb9bd-cd6d-4f0a-9f39-09d5c473c301" containerName="kube-multus-additional-cni-plugins" Mar 18 08:57:15.483911 master-0 kubenswrapper[7620]: I0318 08:57:15.483273 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:57:15.483911 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld Mar 18 08:57:15.483911 master-0 kubenswrapper[7620]: [+]process-running ok Mar 18 08:57:15.483911 master-0 kubenswrapper[7620]: healthz check failed Mar 18 08:57:15.483911 master-0 kubenswrapper[7620]: I0318 08:57:15.483360 7620 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:57:15.890001 master-0 kubenswrapper[7620]: I0318 08:57:15.887180 7620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-58c9f8fc64-zgrts"] Mar 18 08:57:15.890001 master-0 kubenswrapper[7620]: W0318 08:57:15.889340 7620 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode0bb044f_5a4e_4981_8084_91348ce1a56a.slice/crio-a9d070f228bb3ad86327355b7631ce9d61aa33df655c8f354c0c3cf73e6bbfbd WatchSource:0}: Error finding container a9d070f228bb3ad86327355b7631ce9d61aa33df655c8f354c0c3cf73e6bbfbd: Status 404 returned error can't find the container with id a9d070f228bb3ad86327355b7631ce9d61aa33df655c8f354c0c3cf73e6bbfbd Mar 18 08:57:16.003572 master-0 kubenswrapper[7620]: I0318 08:57:16.003485 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/telemeter-client-5d4d5995f-s5dw8" event={"ID":"e5ae1886-f90c-49f4-bf08-055b55dd785a","Type":"ContainerStarted","Data":"28199cbad23e4576060b20c16c3fe518bdacde21e15158c769a46aeef210dcdf"} Mar 18 08:57:16.005420 master-0 kubenswrapper[7620]: I0318 08:57:16.005377 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-58c9f8fc64-zgrts" event={"ID":"e0bb044f-5a4e-4981-8084-91348ce1a56a","Type":"ContainerStarted","Data":"a9d070f228bb3ad86327355b7631ce9d61aa33df655c8f354c0c3cf73e6bbfbd"} Mar 18 08:57:16.480967 master-0 kubenswrapper[7620]: I0318 08:57:16.480908 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:57:16.480967 master-0 
kubenswrapper[7620]: [-]has-synced failed: reason withheld Mar 18 08:57:16.480967 master-0 kubenswrapper[7620]: [+]process-running ok Mar 18 08:57:16.480967 master-0 kubenswrapper[7620]: healthz check failed Mar 18 08:57:16.481364 master-0 kubenswrapper[7620]: I0318 08:57:16.480975 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:57:17.018135 master-0 kubenswrapper[7620]: I0318 08:57:17.018076 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/telemeter-client-5d4d5995f-s5dw8" event={"ID":"e5ae1886-f90c-49f4-bf08-055b55dd785a","Type":"ContainerStarted","Data":"726d59a2848ea80181a69e4b302b7614ff7fb96e89c13ea68020a2a3653654d2"} Mar 18 08:57:17.021137 master-0 kubenswrapper[7620]: I0318 08:57:17.021080 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-58c9f8fc64-zgrts" event={"ID":"e0bb044f-5a4e-4981-8084-91348ce1a56a","Type":"ContainerStarted","Data":"aa15b2fed45c129a7b2399706882aeb8bc3dd3b408d7a369269948c9bf1ecc51"} Mar 18 08:57:17.021137 master-0 kubenswrapper[7620]: I0318 08:57:17.021137 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-58c9f8fc64-zgrts" event={"ID":"e0bb044f-5a4e-4981-8084-91348ce1a56a","Type":"ContainerStarted","Data":"5402d6aa584f6478c330df8e99050a06554972617b015b1cdd202b86ba72a59f"} Mar 18 08:57:17.046386 master-0 kubenswrapper[7620]: I0318 08:57:17.046307 7620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-58c9f8fc64-zgrts" podStartSLOduration=4.046281088 podStartE2EDuration="4.046281088s" podCreationTimestamp="2026-03-18 08:57:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 
00:00:00 +0000 UTC" observedRunningTime="2026-03-18 08:57:17.041521918 +0000 UTC m=+501.036303690" watchObservedRunningTime="2026-03-18 08:57:17.046281088 +0000 UTC m=+501.041062840" Mar 18 08:57:17.078593 master-0 kubenswrapper[7620]: I0318 08:57:17.078533 7620 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-multus/multus-admission-controller-5dbbb8b86f-2cf64"] Mar 18 08:57:17.087876 master-0 kubenswrapper[7620]: I0318 08:57:17.078958 7620 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-multus/multus-admission-controller-5dbbb8b86f-2cf64" podUID="159a26f5-3cfc-4db2-88e9-bff5d8a613fc" containerName="multus-admission-controller" containerID="cri-o://5b25a8863a8b00bc7ec87b8ae1e2369b0a538d5870570f98557275e350c88a96" gracePeriod=30 Mar 18 08:57:17.087876 master-0 kubenswrapper[7620]: I0318 08:57:17.079019 7620 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-multus/multus-admission-controller-5dbbb8b86f-2cf64" podUID="159a26f5-3cfc-4db2-88e9-bff5d8a613fc" containerName="kube-rbac-proxy" containerID="cri-o://864587bb9e1c050127a06a72af052047508fc19256a176a3926da44e091eec45" gracePeriod=30 Mar 18 08:57:17.481739 master-0 kubenswrapper[7620]: I0318 08:57:17.481538 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:57:17.481739 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld Mar 18 08:57:17.481739 master-0 kubenswrapper[7620]: [+]process-running ok Mar 18 08:57:17.481739 master-0 kubenswrapper[7620]: healthz check failed Mar 18 08:57:17.481739 master-0 kubenswrapper[7620]: I0318 08:57:17.481608 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" 
containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:57:18.030960 master-0 kubenswrapper[7620]: I0318 08:57:18.030890 7620 generic.go:334] "Generic (PLEG): container finished" podID="159a26f5-3cfc-4db2-88e9-bff5d8a613fc" containerID="864587bb9e1c050127a06a72af052047508fc19256a176a3926da44e091eec45" exitCode=0 Mar 18 08:57:18.030960 master-0 kubenswrapper[7620]: I0318 08:57:18.030966 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-5dbbb8b86f-2cf64" event={"ID":"159a26f5-3cfc-4db2-88e9-bff5d8a613fc","Type":"ContainerDied","Data":"864587bb9e1c050127a06a72af052047508fc19256a176a3926da44e091eec45"} Mar 18 08:57:18.033310 master-0 kubenswrapper[7620]: I0318 08:57:18.033250 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/telemeter-client-5d4d5995f-s5dw8" event={"ID":"e5ae1886-f90c-49f4-bf08-055b55dd785a","Type":"ContainerStarted","Data":"a64951ee68a5b39650cdb73e9281b12b89222c085d318ca11020a6ccc86887f5"} Mar 18 08:57:18.067558 master-0 kubenswrapper[7620]: I0318 08:57:18.067398 7620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/telemeter-client-5d4d5995f-s5dw8" podStartSLOduration=2.512929743 podStartE2EDuration="7.067363067s" podCreationTimestamp="2026-03-18 08:57:11 +0000 UTC" firstStartedPulling="2026-03-18 08:57:12.291552224 +0000 UTC m=+496.286334016" lastFinishedPulling="2026-03-18 08:57:16.845985568 +0000 UTC m=+500.840767340" observedRunningTime="2026-03-18 08:57:18.065949838 +0000 UTC m=+502.060731690" watchObservedRunningTime="2026-03-18 08:57:18.067363067 +0000 UTC m=+502.062144859" Mar 18 08:57:18.481875 master-0 kubenswrapper[7620]: I0318 08:57:18.481668 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http 
failed: reason withheld Mar 18 08:57:18.481875 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld Mar 18 08:57:18.481875 master-0 kubenswrapper[7620]: [+]process-running ok Mar 18 08:57:18.481875 master-0 kubenswrapper[7620]: healthz check failed Mar 18 08:57:18.481875 master-0 kubenswrapper[7620]: I0318 08:57:18.481747 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:57:19.481786 master-0 kubenswrapper[7620]: I0318 08:57:19.481700 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:57:19.481786 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld Mar 18 08:57:19.481786 master-0 kubenswrapper[7620]: [+]process-running ok Mar 18 08:57:19.481786 master-0 kubenswrapper[7620]: healthz check failed Mar 18 08:57:19.481786 master-0 kubenswrapper[7620]: I0318 08:57:19.481789 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:57:20.483626 master-0 kubenswrapper[7620]: I0318 08:57:20.483527 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:57:20.483626 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld Mar 18 08:57:20.483626 master-0 kubenswrapper[7620]: [+]process-running ok Mar 18 08:57:20.483626 
master-0 kubenswrapper[7620]: healthz check failed Mar 18 08:57:20.484371 master-0 kubenswrapper[7620]: I0318 08:57:20.483640 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:57:21.482331 master-0 kubenswrapper[7620]: I0318 08:57:21.482242 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:57:21.482331 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld Mar 18 08:57:21.482331 master-0 kubenswrapper[7620]: [+]process-running ok Mar 18 08:57:21.482331 master-0 kubenswrapper[7620]: healthz check failed Mar 18 08:57:21.482331 master-0 kubenswrapper[7620]: I0318 08:57:21.482325 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:57:22.481625 master-0 kubenswrapper[7620]: I0318 08:57:22.481523 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:57:22.481625 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld Mar 18 08:57:22.481625 master-0 kubenswrapper[7620]: [+]process-running ok Mar 18 08:57:22.481625 master-0 kubenswrapper[7620]: healthz check failed Mar 18 08:57:22.481625 master-0 kubenswrapper[7620]: I0318 08:57:22.481597 7620 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:57:23.483280 master-0 kubenswrapper[7620]: I0318 08:57:23.483184 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:57:23.483280 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld Mar 18 08:57:23.483280 master-0 kubenswrapper[7620]: [+]process-running ok Mar 18 08:57:23.483280 master-0 kubenswrapper[7620]: healthz check failed Mar 18 08:57:23.484609 master-0 kubenswrapper[7620]: I0318 08:57:23.483290 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:57:24.319918 master-0 kubenswrapper[7620]: I0318 08:57:24.319816 7620 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/installer-2-master-0"] Mar 18 08:57:24.321453 master-0 kubenswrapper[7620]: I0318 08:57:24.321409 7620 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/installer-2-master-0" Mar 18 08:57:24.325105 master-0 kubenswrapper[7620]: I0318 08:57:24.325065 7620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd"/"kube-root-ca.crt" Mar 18 08:57:24.325441 master-0 kubenswrapper[7620]: I0318 08:57:24.325413 7620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd"/"installer-sa-dockercfg-r7k8l" Mar 18 08:57:24.334606 master-0 kubenswrapper[7620]: I0318 08:57:24.334550 7620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd/installer-2-master-0"] Mar 18 08:57:24.410885 master-0 kubenswrapper[7620]: I0318 08:57:24.410781 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/005a0b4c-8e2d-4483-98e9-55badf7099c5-var-lock\") pod \"installer-2-master-0\" (UID: \"005a0b4c-8e2d-4483-98e9-55badf7099c5\") " pod="openshift-etcd/installer-2-master-0" Mar 18 08:57:24.410885 master-0 kubenswrapper[7620]: I0318 08:57:24.410881 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/005a0b4c-8e2d-4483-98e9-55badf7099c5-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"005a0b4c-8e2d-4483-98e9-55badf7099c5\") " pod="openshift-etcd/installer-2-master-0" Mar 18 08:57:24.411153 master-0 kubenswrapper[7620]: I0318 08:57:24.410904 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/005a0b4c-8e2d-4483-98e9-55badf7099c5-kube-api-access\") pod \"installer-2-master-0\" (UID: \"005a0b4c-8e2d-4483-98e9-55badf7099c5\") " pod="openshift-etcd/installer-2-master-0" Mar 18 08:57:24.482595 master-0 kubenswrapper[7620]: I0318 08:57:24.482505 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router 
namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:57:24.482595 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld Mar 18 08:57:24.482595 master-0 kubenswrapper[7620]: [+]process-running ok Mar 18 08:57:24.482595 master-0 kubenswrapper[7620]: healthz check failed Mar 18 08:57:24.482959 master-0 kubenswrapper[7620]: I0318 08:57:24.482600 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:57:24.512279 master-0 kubenswrapper[7620]: I0318 08:57:24.512204 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/005a0b4c-8e2d-4483-98e9-55badf7099c5-var-lock\") pod \"installer-2-master-0\" (UID: \"005a0b4c-8e2d-4483-98e9-55badf7099c5\") " pod="openshift-etcd/installer-2-master-0" Mar 18 08:57:24.512279 master-0 kubenswrapper[7620]: I0318 08:57:24.512279 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/005a0b4c-8e2d-4483-98e9-55badf7099c5-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"005a0b4c-8e2d-4483-98e9-55badf7099c5\") " pod="openshift-etcd/installer-2-master-0" Mar 18 08:57:24.512873 master-0 kubenswrapper[7620]: I0318 08:57:24.512305 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/005a0b4c-8e2d-4483-98e9-55badf7099c5-kube-api-access\") pod \"installer-2-master-0\" (UID: \"005a0b4c-8e2d-4483-98e9-55badf7099c5\") " pod="openshift-etcd/installer-2-master-0" Mar 18 08:57:24.512873 master-0 kubenswrapper[7620]: I0318 08:57:24.512387 7620 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/005a0b4c-8e2d-4483-98e9-55badf7099c5-var-lock\") pod \"installer-2-master-0\" (UID: \"005a0b4c-8e2d-4483-98e9-55badf7099c5\") " pod="openshift-etcd/installer-2-master-0" Mar 18 08:57:24.512873 master-0 kubenswrapper[7620]: I0318 08:57:24.512512 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/005a0b4c-8e2d-4483-98e9-55badf7099c5-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"005a0b4c-8e2d-4483-98e9-55badf7099c5\") " pod="openshift-etcd/installer-2-master-0" Mar 18 08:57:24.530717 master-0 kubenswrapper[7620]: I0318 08:57:24.530664 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/005a0b4c-8e2d-4483-98e9-55badf7099c5-kube-api-access\") pod \"installer-2-master-0\" (UID: \"005a0b4c-8e2d-4483-98e9-55badf7099c5\") " pod="openshift-etcd/installer-2-master-0" Mar 18 08:57:24.654786 master-0 kubenswrapper[7620]: E0318 08:57:24.654627 7620 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="13138588541c4d309184d64b25bfc0c3f2525dc081e756b25b1ef3769ac44e89" cmd=["/bin/bash","-c","test -f /ready/ready"] Mar 18 08:57:24.656432 master-0 kubenswrapper[7620]: E0318 08:57:24.656354 7620 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="13138588541c4d309184d64b25bfc0c3f2525dc081e756b25b1ef3769ac44e89" cmd=["/bin/bash","-c","test -f /ready/ready"] Mar 18 08:57:24.657992 master-0 kubenswrapper[7620]: E0318 08:57:24.657946 7620 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command 
error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="13138588541c4d309184d64b25bfc0c3f2525dc081e756b25b1ef3769ac44e89" cmd=["/bin/bash","-c","test -f /ready/ready"] Mar 18 08:57:24.658136 master-0 kubenswrapper[7620]: E0318 08:57:24.658108 7620 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-vlc2m" podUID="c0bbb9bd-cd6d-4f0a-9f39-09d5c473c301" containerName="kube-multus-additional-cni-plugins" Mar 18 08:57:24.669544 master-0 kubenswrapper[7620]: I0318 08:57:24.669494 7620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/installer-2-master-0" Mar 18 08:57:25.080488 master-0 kubenswrapper[7620]: I0318 08:57:25.080423 7620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd/installer-2-master-0"] Mar 18 08:57:25.125047 master-0 kubenswrapper[7620]: I0318 08:57:25.124972 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-2-master-0" event={"ID":"005a0b4c-8e2d-4483-98e9-55badf7099c5","Type":"ContainerStarted","Data":"6ceebd5fc2e20325f9aee4b93a902553c4a60d97de2a44d71188013bb71ab91c"} Mar 18 08:57:25.481531 master-0 kubenswrapper[7620]: I0318 08:57:25.481457 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:57:25.481531 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld Mar 18 08:57:25.481531 master-0 kubenswrapper[7620]: [+]process-running ok Mar 18 08:57:25.481531 master-0 kubenswrapper[7620]: healthz check failed Mar 18 08:57:25.481531 master-0 kubenswrapper[7620]: I0318 08:57:25.481530 7620 prober.go:107] "Probe 
failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:57:26.136273 master-0 kubenswrapper[7620]: I0318 08:57:26.136164 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-2-master-0" event={"ID":"005a0b4c-8e2d-4483-98e9-55badf7099c5","Type":"ContainerStarted","Data":"83c1b5b71c6b991cce706c7d71cc023db485e610df2dae94288a380e76fcfca1"} Mar 18 08:57:26.164089 master-0 kubenswrapper[7620]: I0318 08:57:26.163952 7620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/installer-2-master-0" podStartSLOduration=2.163910576 podStartE2EDuration="2.163910576s" podCreationTimestamp="2026-03-18 08:57:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 08:57:26.160937795 +0000 UTC m=+510.155719577" watchObservedRunningTime="2026-03-18 08:57:26.163910576 +0000 UTC m=+510.158692398" Mar 18 08:57:26.481904 master-0 kubenswrapper[7620]: I0318 08:57:26.481694 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:57:26.481904 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld Mar 18 08:57:26.481904 master-0 kubenswrapper[7620]: [+]process-running ok Mar 18 08:57:26.481904 master-0 kubenswrapper[7620]: healthz check failed Mar 18 08:57:26.481904 master-0 kubenswrapper[7620]: I0318 08:57:26.481780 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with 
statuscode: 500" Mar 18 08:57:27.483218 master-0 kubenswrapper[7620]: I0318 08:57:27.483119 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:57:27.483218 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld Mar 18 08:57:27.483218 master-0 kubenswrapper[7620]: [+]process-running ok Mar 18 08:57:27.483218 master-0 kubenswrapper[7620]: healthz check failed Mar 18 08:57:27.484476 master-0 kubenswrapper[7620]: I0318 08:57:27.483222 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:57:28.482548 master-0 kubenswrapper[7620]: I0318 08:57:28.482463 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:57:28.482548 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld Mar 18 08:57:28.482548 master-0 kubenswrapper[7620]: [+]process-running ok Mar 18 08:57:28.482548 master-0 kubenswrapper[7620]: healthz check failed Mar 18 08:57:28.482968 master-0 kubenswrapper[7620]: I0318 08:57:28.482555 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:57:29.483137 master-0 kubenswrapper[7620]: I0318 08:57:29.483071 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup 
probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:57:29.483137 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld Mar 18 08:57:29.483137 master-0 kubenswrapper[7620]: [+]process-running ok Mar 18 08:57:29.483137 master-0 kubenswrapper[7620]: healthz check failed Mar 18 08:57:29.483962 master-0 kubenswrapper[7620]: I0318 08:57:29.483924 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:57:30.482398 master-0 kubenswrapper[7620]: I0318 08:57:30.482332 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:57:30.482398 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld Mar 18 08:57:30.482398 master-0 kubenswrapper[7620]: [+]process-running ok Mar 18 08:57:30.482398 master-0 kubenswrapper[7620]: healthz check failed Mar 18 08:57:30.482765 master-0 kubenswrapper[7620]: I0318 08:57:30.482429 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:57:31.482406 master-0 kubenswrapper[7620]: I0318 08:57:31.482302 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:57:31.482406 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld Mar 
18 08:57:31.482406 master-0 kubenswrapper[7620]: [+]process-running ok Mar 18 08:57:31.482406 master-0 kubenswrapper[7620]: healthz check failed Mar 18 08:57:31.482406 master-0 kubenswrapper[7620]: I0318 08:57:31.482396 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:57:32.481674 master-0 kubenswrapper[7620]: I0318 08:57:32.481636 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:57:32.481674 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld Mar 18 08:57:32.481674 master-0 kubenswrapper[7620]: [+]process-running ok Mar 18 08:57:32.481674 master-0 kubenswrapper[7620]: healthz check failed Mar 18 08:57:32.482030 master-0 kubenswrapper[7620]: I0318 08:57:32.481998 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:57:33.483035 master-0 kubenswrapper[7620]: I0318 08:57:33.482921 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:57:33.483035 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld Mar 18 08:57:33.483035 master-0 kubenswrapper[7620]: [+]process-running ok Mar 18 08:57:33.483035 master-0 kubenswrapper[7620]: healthz check failed Mar 18 08:57:33.483997 master-0 kubenswrapper[7620]: I0318 08:57:33.483043 
7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:57:34.481055 master-0 kubenswrapper[7620]: I0318 08:57:34.480975 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:57:34.481055 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld Mar 18 08:57:34.481055 master-0 kubenswrapper[7620]: [+]process-running ok Mar 18 08:57:34.481055 master-0 kubenswrapper[7620]: healthz check failed Mar 18 08:57:34.481055 master-0 kubenswrapper[7620]: I0318 08:57:34.481043 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:57:34.655706 master-0 kubenswrapper[7620]: E0318 08:57:34.655626 7620 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="13138588541c4d309184d64b25bfc0c3f2525dc081e756b25b1ef3769ac44e89" cmd=["/bin/bash","-c","test -f /ready/ready"] Mar 18 08:57:34.658284 master-0 kubenswrapper[7620]: E0318 08:57:34.658218 7620 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="13138588541c4d309184d64b25bfc0c3f2525dc081e756b25b1ef3769ac44e89" cmd=["/bin/bash","-c","test -f /ready/ready"] Mar 18 08:57:34.660136 master-0 kubenswrapper[7620]: E0318 
08:57:34.660092 7620 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="13138588541c4d309184d64b25bfc0c3f2525dc081e756b25b1ef3769ac44e89" cmd=["/bin/bash","-c","test -f /ready/ready"] Mar 18 08:57:34.660227 master-0 kubenswrapper[7620]: E0318 08:57:34.660151 7620 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-vlc2m" podUID="c0bbb9bd-cd6d-4f0a-9f39-09d5c473c301" containerName="kube-multus-additional-cni-plugins" Mar 18 08:57:35.482315 master-0 kubenswrapper[7620]: I0318 08:57:35.482200 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:57:35.482315 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld Mar 18 08:57:35.482315 master-0 kubenswrapper[7620]: [+]process-running ok Mar 18 08:57:35.482315 master-0 kubenswrapper[7620]: healthz check failed Mar 18 08:57:35.482898 master-0 kubenswrapper[7620]: I0318 08:57:35.482351 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:57:36.482272 master-0 kubenswrapper[7620]: I0318 08:57:36.482175 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:57:36.482272 
master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld Mar 18 08:57:36.482272 master-0 kubenswrapper[7620]: [+]process-running ok Mar 18 08:57:36.482272 master-0 kubenswrapper[7620]: healthz check failed Mar 18 08:57:36.483113 master-0 kubenswrapper[7620]: I0318 08:57:36.482319 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:57:36.483113 master-0 kubenswrapper[7620]: I0318 08:57:36.482419 7620 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" Mar 18 08:57:36.483667 master-0 kubenswrapper[7620]: I0318 08:57:36.483616 7620 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="router" containerStatusID={"Type":"cri-o","ID":"504f021a6115c5b248227cad9be5358b605b45e875884611b5163b1993a0ac66"} pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" containerMessage="Container router failed startup probe, will be restarted" Mar 18 08:57:36.483759 master-0 kubenswrapper[7620]: I0318 08:57:36.483714 7620 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" containerID="cri-o://504f021a6115c5b248227cad9be5358b605b45e875884611b5163b1993a0ac66" gracePeriod=3600 Mar 18 08:57:38.043110 master-0 kubenswrapper[7620]: I0318 08:57:38.043045 7620 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-vlc2m_c0bbb9bd-cd6d-4f0a-9f39-09d5c473c301/kube-multus-additional-cni-plugins/0.log" Mar 18 08:57:38.044048 master-0 kubenswrapper[7620]: I0318 08:57:38.043133 7620 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-vlc2m" Mar 18 08:57:38.149959 master-0 kubenswrapper[7620]: I0318 08:57:38.149846 7620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/c0bbb9bd-cd6d-4f0a-9f39-09d5c473c301-ready\") pod \"c0bbb9bd-cd6d-4f0a-9f39-09d5c473c301\" (UID: \"c0bbb9bd-cd6d-4f0a-9f39-09d5c473c301\") " Mar 18 08:57:38.150293 master-0 kubenswrapper[7620]: I0318 08:57:38.149997 7620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lg5lt\" (UniqueName: \"kubernetes.io/projected/c0bbb9bd-cd6d-4f0a-9f39-09d5c473c301-kube-api-access-lg5lt\") pod \"c0bbb9bd-cd6d-4f0a-9f39-09d5c473c301\" (UID: \"c0bbb9bd-cd6d-4f0a-9f39-09d5c473c301\") " Mar 18 08:57:38.150293 master-0 kubenswrapper[7620]: I0318 08:57:38.150102 7620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/c0bbb9bd-cd6d-4f0a-9f39-09d5c473c301-tuning-conf-dir\") pod \"c0bbb9bd-cd6d-4f0a-9f39-09d5c473c301\" (UID: \"c0bbb9bd-cd6d-4f0a-9f39-09d5c473c301\") " Mar 18 08:57:38.150293 master-0 kubenswrapper[7620]: I0318 08:57:38.150195 7620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/c0bbb9bd-cd6d-4f0a-9f39-09d5c473c301-cni-sysctl-allowlist\") pod \"c0bbb9bd-cd6d-4f0a-9f39-09d5c473c301\" (UID: \"c0bbb9bd-cd6d-4f0a-9f39-09d5c473c301\") " Mar 18 08:57:38.150551 master-0 kubenswrapper[7620]: I0318 08:57:38.150280 7620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c0bbb9bd-cd6d-4f0a-9f39-09d5c473c301-tuning-conf-dir" (OuterVolumeSpecName: "tuning-conf-dir") pod "c0bbb9bd-cd6d-4f0a-9f39-09d5c473c301" (UID: "c0bbb9bd-cd6d-4f0a-9f39-09d5c473c301"). InnerVolumeSpecName "tuning-conf-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 18 08:57:38.150551 master-0 kubenswrapper[7620]: I0318 08:57:38.150463 7620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c0bbb9bd-cd6d-4f0a-9f39-09d5c473c301-ready" (OuterVolumeSpecName: "ready") pod "c0bbb9bd-cd6d-4f0a-9f39-09d5c473c301" (UID: "c0bbb9bd-cd6d-4f0a-9f39-09d5c473c301"). InnerVolumeSpecName "ready". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 18 08:57:38.150749 master-0 kubenswrapper[7620]: I0318 08:57:38.150714 7620 reconciler_common.go:293] "Volume detached for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/c0bbb9bd-cd6d-4f0a-9f39-09d5c473c301-tuning-conf-dir\") on node \"master-0\" DevicePath \"\""
Mar 18 08:57:38.150749 master-0 kubenswrapper[7620]: I0318 08:57:38.150740 7620 reconciler_common.go:293] "Volume detached for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/c0bbb9bd-cd6d-4f0a-9f39-09d5c473c301-ready\") on node \"master-0\" DevicePath \"\""
Mar 18 08:57:38.151042 master-0 kubenswrapper[7620]: I0318 08:57:38.150968 7620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c0bbb9bd-cd6d-4f0a-9f39-09d5c473c301-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "c0bbb9bd-cd6d-4f0a-9f39-09d5c473c301" (UID: "c0bbb9bd-cd6d-4f0a-9f39-09d5c473c301"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 18 08:57:38.153776 master-0 kubenswrapper[7620]: I0318 08:57:38.153673 7620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c0bbb9bd-cd6d-4f0a-9f39-09d5c473c301-kube-api-access-lg5lt" (OuterVolumeSpecName: "kube-api-access-lg5lt") pod "c0bbb9bd-cd6d-4f0a-9f39-09d5c473c301" (UID: "c0bbb9bd-cd6d-4f0a-9f39-09d5c473c301"). InnerVolumeSpecName "kube-api-access-lg5lt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 18 08:57:38.252142 master-0 kubenswrapper[7620]: I0318 08:57:38.252099 7620 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/c0bbb9bd-cd6d-4f0a-9f39-09d5c473c301-cni-sysctl-allowlist\") on node \"master-0\" DevicePath \"\""
Mar 18 08:57:38.252142 master-0 kubenswrapper[7620]: I0318 08:57:38.252133 7620 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lg5lt\" (UniqueName: \"kubernetes.io/projected/c0bbb9bd-cd6d-4f0a-9f39-09d5c473c301-kube-api-access-lg5lt\") on node \"master-0\" DevicePath \"\""
Mar 18 08:57:38.259935 master-0 kubenswrapper[7620]: I0318 08:57:38.259870 7620 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-vlc2m_c0bbb9bd-cd6d-4f0a-9f39-09d5c473c301/kube-multus-additional-cni-plugins/0.log"
Mar 18 08:57:38.259935 master-0 kubenswrapper[7620]: I0318 08:57:38.259926 7620 generic.go:334] "Generic (PLEG): container finished" podID="c0bbb9bd-cd6d-4f0a-9f39-09d5c473c301" containerID="13138588541c4d309184d64b25bfc0c3f2525dc081e756b25b1ef3769ac44e89" exitCode=137
Mar 18 08:57:38.260215 master-0 kubenswrapper[7620]: I0318 08:57:38.259956 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-vlc2m" event={"ID":"c0bbb9bd-cd6d-4f0a-9f39-09d5c473c301","Type":"ContainerDied","Data":"13138588541c4d309184d64b25bfc0c3f2525dc081e756b25b1ef3769ac44e89"}
Mar 18 08:57:38.260215 master-0 kubenswrapper[7620]: I0318 08:57:38.259985 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-vlc2m" event={"ID":"c0bbb9bd-cd6d-4f0a-9f39-09d5c473c301","Type":"ContainerDied","Data":"1d6d3be968381e4a2c751988f41503339fd8e8b9a7db9e854b1829b80d4f3b1a"}
Mar 18 08:57:38.260215 master-0 kubenswrapper[7620]: I0318 08:57:38.260006 7620 scope.go:117] "RemoveContainer" containerID="13138588541c4d309184d64b25bfc0c3f2525dc081e756b25b1ef3769ac44e89"
Mar 18 08:57:38.260215 master-0 kubenswrapper[7620]: I0318 08:57:38.260006 7620 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-vlc2m"
Mar 18 08:57:38.283974 master-0 kubenswrapper[7620]: I0318 08:57:38.283924 7620 scope.go:117] "RemoveContainer" containerID="13138588541c4d309184d64b25bfc0c3f2525dc081e756b25b1ef3769ac44e89"
Mar 18 08:57:38.284629 master-0 kubenswrapper[7620]: E0318 08:57:38.284580 7620 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"13138588541c4d309184d64b25bfc0c3f2525dc081e756b25b1ef3769ac44e89\": container with ID starting with 13138588541c4d309184d64b25bfc0c3f2525dc081e756b25b1ef3769ac44e89 not found: ID does not exist" containerID="13138588541c4d309184d64b25bfc0c3f2525dc081e756b25b1ef3769ac44e89"
Mar 18 08:57:38.284706 master-0 kubenswrapper[7620]: I0318 08:57:38.284667 7620 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"13138588541c4d309184d64b25bfc0c3f2525dc081e756b25b1ef3769ac44e89"} err="failed to get container status \"13138588541c4d309184d64b25bfc0c3f2525dc081e756b25b1ef3769ac44e89\": rpc error: code = NotFound desc = could not find container \"13138588541c4d309184d64b25bfc0c3f2525dc081e756b25b1ef3769ac44e89\": container with ID starting with 13138588541c4d309184d64b25bfc0c3f2525dc081e756b25b1ef3769ac44e89 not found: ID does not exist"
Mar 18 08:57:38.287813 master-0 kubenswrapper[7620]: I0318 08:57:38.287752 7620 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-vlc2m"]
Mar 18 08:57:38.292199 master-0 kubenswrapper[7620]: I0318 08:57:38.292140 7620 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-vlc2m"]
Mar 18 08:57:40.237349 master-0 kubenswrapper[7620]: I0318 08:57:40.237172 7620 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c0bbb9bd-cd6d-4f0a-9f39-09d5c473c301" path="/var/lib/kubelet/pods/c0bbb9bd-cd6d-4f0a-9f39-09d5c473c301/volumes"
Mar 18 08:57:46.499722 master-0 kubenswrapper[7620]: I0318 08:57:46.499661 7620 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"]
Mar 18 08:57:46.500616 master-0 kubenswrapper[7620]: I0318 08:57:46.500057 7620 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="f18b861b5b8c9ec3c738abc65d93de21" containerName="kube-controller-manager" containerID="cri-o://bbabe017e89f6ea54b729f4482f01a624a5bb89f74c49b1b8e5588070c02358c" gracePeriod=30
Mar 18 08:57:46.500616 master-0 kubenswrapper[7620]: I0318 08:57:46.500248 7620 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="f18b861b5b8c9ec3c738abc65d93de21" containerName="cluster-policy-controller" containerID="cri-o://69596b626529595f36c9ff264c03689b43e4c44d0adc36ba6d7b5f545138ce9f" gracePeriod=30
Mar 18 08:57:46.500616 master-0 kubenswrapper[7620]: I0318 08:57:46.500203 7620 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="f18b861b5b8c9ec3c738abc65d93de21" containerName="kube-controller-manager-cert-syncer" containerID="cri-o://c06f0e093df7004eb449f4d313d5c8483347978fe6cb23024b5393882adf8f4a" gracePeriod=30
Mar 18 08:57:46.500616 master-0 kubenswrapper[7620]: I0318 08:57:46.500433 7620 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="f18b861b5b8c9ec3c738abc65d93de21" containerName="kube-controller-manager-recovery-controller" containerID="cri-o://fd10dceb0449c26d02e61b6f927511258c3ac41149782386de78284480c8fc4d" gracePeriod=30
Mar 18 08:57:46.506265 master-0 kubenswrapper[7620]: I0318 08:57:46.504087 7620 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"]
Mar 18 08:57:46.506265 master-0 kubenswrapper[7620]: E0318 08:57:46.504409 7620 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f18b861b5b8c9ec3c738abc65d93de21" containerName="kube-controller-manager-cert-syncer"
Mar 18 08:57:46.506265 master-0 kubenswrapper[7620]: I0318 08:57:46.504425 7620 state_mem.go:107] "Deleted CPUSet assignment" podUID="f18b861b5b8c9ec3c738abc65d93de21" containerName="kube-controller-manager-cert-syncer"
Mar 18 08:57:46.506265 master-0 kubenswrapper[7620]: E0318 08:57:46.504450 7620 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f18b861b5b8c9ec3c738abc65d93de21" containerName="cluster-policy-controller"
Mar 18 08:57:46.506265 master-0 kubenswrapper[7620]: I0318 08:57:46.504458 7620 state_mem.go:107] "Deleted CPUSet assignment" podUID="f18b861b5b8c9ec3c738abc65d93de21" containerName="cluster-policy-controller"
Mar 18 08:57:46.506265 master-0 kubenswrapper[7620]: E0318 08:57:46.504487 7620 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f18b861b5b8c9ec3c738abc65d93de21" containerName="kube-controller-manager-recovery-controller"
Mar 18 08:57:46.506265 master-0 kubenswrapper[7620]: I0318 08:57:46.504496 7620 state_mem.go:107] "Deleted CPUSet assignment" podUID="f18b861b5b8c9ec3c738abc65d93de21" containerName="kube-controller-manager-recovery-controller"
Mar 18 08:57:46.506265 master-0 kubenswrapper[7620]: E0318 08:57:46.504509 7620 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f18b861b5b8c9ec3c738abc65d93de21" containerName="kube-controller-manager"
Mar 18 08:57:46.506265 master-0 kubenswrapper[7620]: I0318 08:57:46.504517 7620 state_mem.go:107] "Deleted CPUSet assignment" podUID="f18b861b5b8c9ec3c738abc65d93de21" containerName="kube-controller-manager"
Mar 18 08:57:46.506265 master-0 kubenswrapper[7620]: E0318 08:57:46.504528 7620 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0bbb9bd-cd6d-4f0a-9f39-09d5c473c301" containerName="kube-multus-additional-cni-plugins"
Mar 18 08:57:46.506265 master-0 kubenswrapper[7620]: I0318 08:57:46.504536 7620 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0bbb9bd-cd6d-4f0a-9f39-09d5c473c301" containerName="kube-multus-additional-cni-plugins"
Mar 18 08:57:46.506265 master-0 kubenswrapper[7620]: I0318 08:57:46.504665 7620 memory_manager.go:354] "RemoveStaleState removing state" podUID="f18b861b5b8c9ec3c738abc65d93de21" containerName="cluster-policy-controller"
Mar 18 08:57:46.506265 master-0 kubenswrapper[7620]: I0318 08:57:46.504687 7620 memory_manager.go:354] "RemoveStaleState removing state" podUID="c0bbb9bd-cd6d-4f0a-9f39-09d5c473c301" containerName="kube-multus-additional-cni-plugins"
Mar 18 08:57:46.506265 master-0 kubenswrapper[7620]: I0318 08:57:46.504703 7620 memory_manager.go:354] "RemoveStaleState removing state" podUID="f18b861b5b8c9ec3c738abc65d93de21" containerName="kube-controller-manager"
Mar 18 08:57:46.506265 master-0 kubenswrapper[7620]: I0318 08:57:46.504714 7620 memory_manager.go:354] "RemoveStaleState removing state" podUID="f18b861b5b8c9ec3c738abc65d93de21" containerName="kube-controller-manager-cert-syncer"
Mar 18 08:57:46.506265 master-0 kubenswrapper[7620]: I0318 08:57:46.504729 7620 memory_manager.go:354] "RemoveStaleState removing state" podUID="f18b861b5b8c9ec3c738abc65d93de21" containerName="kube-controller-manager-recovery-controller"
Mar 18 08:57:46.594817 master-0 kubenswrapper[7620]: I0318 08:57:46.594763 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/c229b92d307e46237f6273edcc98d387-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"c229b92d307e46237f6273edcc98d387\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 18 08:57:46.594817 master-0 kubenswrapper[7620]: I0318 08:57:46.594815 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/c229b92d307e46237f6273edcc98d387-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"c229b92d307e46237f6273edcc98d387\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 18 08:57:46.695896 master-0 kubenswrapper[7620]: I0318 08:57:46.695770 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/c229b92d307e46237f6273edcc98d387-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"c229b92d307e46237f6273edcc98d387\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 18 08:57:46.695896 master-0 kubenswrapper[7620]: I0318 08:57:46.695890 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/c229b92d307e46237f6273edcc98d387-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"c229b92d307e46237f6273edcc98d387\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 18 08:57:46.695896 master-0 kubenswrapper[7620]: I0318 08:57:46.695902 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/c229b92d307e46237f6273edcc98d387-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"c229b92d307e46237f6273edcc98d387\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 18 08:57:46.696386 master-0 kubenswrapper[7620]: I0318 08:57:46.695993 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/c229b92d307e46237f6273edcc98d387-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"c229b92d307e46237f6273edcc98d387\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 18 08:57:46.903511 master-0 kubenswrapper[7620]: I0318 08:57:46.903436 7620 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_f18b861b5b8c9ec3c738abc65d93de21/kube-controller-manager-cert-syncer/0.log"
Mar 18 08:57:46.904353 master-0 kubenswrapper[7620]: I0318 08:57:46.904310 7620 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 18 08:57:46.908604 master-0 kubenswrapper[7620]: I0318 08:57:46.908538 7620 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" oldPodUID="f18b861b5b8c9ec3c738abc65d93de21" podUID="c229b92d307e46237f6273edcc98d387"
Mar 18 08:57:46.999624 master-0 kubenswrapper[7620]: I0318 08:57:46.999535 7620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f18b861b5b8c9ec3c738abc65d93de21-resource-dir\") pod \"f18b861b5b8c9ec3c738abc65d93de21\" (UID: \"f18b861b5b8c9ec3c738abc65d93de21\") "
Mar 18 08:57:47.000000 master-0 kubenswrapper[7620]: I0318 08:57:46.999769 7620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f18b861b5b8c9ec3c738abc65d93de21-cert-dir\") pod \"f18b861b5b8c9ec3c738abc65d93de21\" (UID: \"f18b861b5b8c9ec3c738abc65d93de21\") "
Mar 18 08:57:47.000000 master-0 kubenswrapper[7620]: I0318 08:57:46.999756 7620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f18b861b5b8c9ec3c738abc65d93de21-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f18b861b5b8c9ec3c738abc65d93de21" (UID: "f18b861b5b8c9ec3c738abc65d93de21"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 18 08:57:47.000000 master-0 kubenswrapper[7620]: I0318 08:57:46.999865 7620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f18b861b5b8c9ec3c738abc65d93de21-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f18b861b5b8c9ec3c738abc65d93de21" (UID: "f18b861b5b8c9ec3c738abc65d93de21"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 18 08:57:47.000135 master-0 kubenswrapper[7620]: I0318 08:57:47.000090 7620 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f18b861b5b8c9ec3c738abc65d93de21-cert-dir\") on node \"master-0\" DevicePath \"\""
Mar 18 08:57:47.000135 master-0 kubenswrapper[7620]: I0318 08:57:47.000108 7620 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f18b861b5b8c9ec3c738abc65d93de21-resource-dir\") on node \"master-0\" DevicePath \"\""
Mar 18 08:57:47.329101 master-0 kubenswrapper[7620]: I0318 08:57:47.329007 7620 generic.go:334] "Generic (PLEG): container finished" podID="62a1fcda-ce2f-4d14-ab37-10a21e30fc30" containerID="08088f866063b071982a4841fdee97faaded7e31cf8cc32d7754eb48aa28135c" exitCode=0
Mar 18 08:57:47.329511 master-0 kubenswrapper[7620]: I0318 08:57:47.329124 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-3-master-0" event={"ID":"62a1fcda-ce2f-4d14-ab37-10a21e30fc30","Type":"ContainerDied","Data":"08088f866063b071982a4841fdee97faaded7e31cf8cc32d7754eb48aa28135c"}
Mar 18 08:57:47.332503 master-0 kubenswrapper[7620]: I0318 08:57:47.332418 7620 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-admission-controller-5dbbb8b86f-2cf64_159a26f5-3cfc-4db2-88e9-bff5d8a613fc/multus-admission-controller/0.log"
Mar 18 08:57:47.332503 master-0 kubenswrapper[7620]: I0318 08:57:47.332477 7620 generic.go:334] "Generic (PLEG): container finished" podID="159a26f5-3cfc-4db2-88e9-bff5d8a613fc" containerID="5b25a8863a8b00bc7ec87b8ae1e2369b0a538d5870570f98557275e350c88a96" exitCode=137
Mar 18 08:57:47.332707 master-0 kubenswrapper[7620]: I0318 08:57:47.332549 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-5dbbb8b86f-2cf64" event={"ID":"159a26f5-3cfc-4db2-88e9-bff5d8a613fc","Type":"ContainerDied","Data":"5b25a8863a8b00bc7ec87b8ae1e2369b0a538d5870570f98557275e350c88a96"}
Mar 18 08:57:47.335650 master-0 kubenswrapper[7620]: I0318 08:57:47.335604 7620 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_f18b861b5b8c9ec3c738abc65d93de21/kube-controller-manager-cert-syncer/0.log"
Mar 18 08:57:47.337047 master-0 kubenswrapper[7620]: I0318 08:57:47.337002 7620 generic.go:334] "Generic (PLEG): container finished" podID="f18b861b5b8c9ec3c738abc65d93de21" containerID="fd10dceb0449c26d02e61b6f927511258c3ac41149782386de78284480c8fc4d" exitCode=0
Mar 18 08:57:47.337132 master-0 kubenswrapper[7620]: I0318 08:57:47.337058 7620 generic.go:334] "Generic (PLEG): container finished" podID="f18b861b5b8c9ec3c738abc65d93de21" containerID="c06f0e093df7004eb449f4d313d5c8483347978fe6cb23024b5393882adf8f4a" exitCode=2
Mar 18 08:57:47.337132 master-0 kubenswrapper[7620]: I0318 08:57:47.337093 7620 generic.go:334] "Generic (PLEG): container finished" podID="f18b861b5b8c9ec3c738abc65d93de21" containerID="69596b626529595f36c9ff264c03689b43e4c44d0adc36ba6d7b5f545138ce9f" exitCode=0
Mar 18 08:57:47.337132 master-0 kubenswrapper[7620]: I0318 08:57:47.337115 7620 generic.go:334] "Generic (PLEG): container finished" podID="f18b861b5b8c9ec3c738abc65d93de21" containerID="bbabe017e89f6ea54b729f4482f01a624a5bb89f74c49b1b8e5588070c02358c" exitCode=0
Mar 18 08:57:47.337262 master-0 kubenswrapper[7620]: I0318 08:57:47.337169 7620 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cfd69af88774e22c3d70940f7a0ea66641ee8b20b79b65a1fbb3869389de22e6"
Mar 18 08:57:47.338100 master-0 kubenswrapper[7620]: I0318 08:57:47.337188 7620 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 18 08:57:47.368189 master-0 kubenswrapper[7620]: I0318 08:57:47.368118 7620 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" oldPodUID="f18b861b5b8c9ec3c738abc65d93de21" podUID="c229b92d307e46237f6273edcc98d387"
Mar 18 08:57:47.375389 master-0 kubenswrapper[7620]: I0318 08:57:47.375303 7620 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" oldPodUID="f18b861b5b8c9ec3c738abc65d93de21" podUID="c229b92d307e46237f6273edcc98d387"
Mar 18 08:57:48.032130 master-0 kubenswrapper[7620]: I0318 08:57:48.032069 7620 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-admission-controller-5dbbb8b86f-2cf64_159a26f5-3cfc-4db2-88e9-bff5d8a613fc/multus-admission-controller/0.log"
Mar 18 08:57:48.033089 master-0 kubenswrapper[7620]: I0318 08:57:48.032166 7620 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-5dbbb8b86f-2cf64"
Mar 18 08:57:48.120339 master-0 kubenswrapper[7620]: I0318 08:57:48.120262 7620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/159a26f5-3cfc-4db2-88e9-bff5d8a613fc-webhook-certs\") pod \"159a26f5-3cfc-4db2-88e9-bff5d8a613fc\" (UID: \"159a26f5-3cfc-4db2-88e9-bff5d8a613fc\") "
Mar 18 08:57:48.120589 master-0 kubenswrapper[7620]: I0318 08:57:48.120446 7620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9hxtz\" (UniqueName: \"kubernetes.io/projected/159a26f5-3cfc-4db2-88e9-bff5d8a613fc-kube-api-access-9hxtz\") pod \"159a26f5-3cfc-4db2-88e9-bff5d8a613fc\" (UID: \"159a26f5-3cfc-4db2-88e9-bff5d8a613fc\") "
Mar 18 08:57:48.123972 master-0 kubenswrapper[7620]: I0318 08:57:48.123903 7620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/159a26f5-3cfc-4db2-88e9-bff5d8a613fc-kube-api-access-9hxtz" (OuterVolumeSpecName: "kube-api-access-9hxtz") pod "159a26f5-3cfc-4db2-88e9-bff5d8a613fc" (UID: "159a26f5-3cfc-4db2-88e9-bff5d8a613fc"). InnerVolumeSpecName "kube-api-access-9hxtz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 18 08:57:48.124660 master-0 kubenswrapper[7620]: I0318 08:57:48.124578 7620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/159a26f5-3cfc-4db2-88e9-bff5d8a613fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "159a26f5-3cfc-4db2-88e9-bff5d8a613fc" (UID: "159a26f5-3cfc-4db2-88e9-bff5d8a613fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 18 08:57:48.215145 master-0 kubenswrapper[7620]: I0318 08:57:48.212963 7620 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/installer-4-master-0"]
Mar 18 08:57:48.215145 master-0 kubenswrapper[7620]: E0318 08:57:48.213266 7620 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="159a26f5-3cfc-4db2-88e9-bff5d8a613fc" containerName="multus-admission-controller"
Mar 18 08:57:48.215145 master-0 kubenswrapper[7620]: I0318 08:57:48.213281 7620 state_mem.go:107] "Deleted CPUSet assignment" podUID="159a26f5-3cfc-4db2-88e9-bff5d8a613fc" containerName="multus-admission-controller"
Mar 18 08:57:48.215145 master-0 kubenswrapper[7620]: E0318 08:57:48.213299 7620 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="159a26f5-3cfc-4db2-88e9-bff5d8a613fc" containerName="kube-rbac-proxy"
Mar 18 08:57:48.215145 master-0 kubenswrapper[7620]: I0318 08:57:48.213305 7620 state_mem.go:107] "Deleted CPUSet assignment" podUID="159a26f5-3cfc-4db2-88e9-bff5d8a613fc" containerName="kube-rbac-proxy"
Mar 18 08:57:48.215145 master-0 kubenswrapper[7620]: I0318 08:57:48.213444 7620 memory_manager.go:354] "RemoveStaleState removing state" podUID="159a26f5-3cfc-4db2-88e9-bff5d8a613fc" containerName="multus-admission-controller"
Mar 18 08:57:48.215145 master-0 kubenswrapper[7620]: I0318 08:57:48.213465 7620 memory_manager.go:354] "RemoveStaleState removing state" podUID="159a26f5-3cfc-4db2-88e9-bff5d8a613fc" containerName="kube-rbac-proxy"
Mar 18 08:57:48.215145 master-0 kubenswrapper[7620]: I0318 08:57:48.213906 7620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-4-master-0"
Mar 18 08:57:48.217821 master-0 kubenswrapper[7620]: I0318 08:57:48.217757 7620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler"/"installer-sa-dockercfg-j24rr"
Mar 18 08:57:48.218723 master-0 kubenswrapper[7620]: I0318 08:57:48.218001 7620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler"/"kube-root-ca.crt"
Mar 18 08:57:48.222677 master-0 kubenswrapper[7620]: I0318 08:57:48.222607 7620 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9hxtz\" (UniqueName: \"kubernetes.io/projected/159a26f5-3cfc-4db2-88e9-bff5d8a613fc-kube-api-access-9hxtz\") on node \"master-0\" DevicePath \"\""
Mar 18 08:57:48.222677 master-0 kubenswrapper[7620]: I0318 08:57:48.222668 7620 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/159a26f5-3cfc-4db2-88e9-bff5d8a613fc-webhook-certs\") on node \"master-0\" DevicePath \"\""
Mar 18 08:57:48.246174 master-0 kubenswrapper[7620]: I0318 08:57:48.246112 7620 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f18b861b5b8c9ec3c738abc65d93de21" path="/var/lib/kubelet/pods/f18b861b5b8c9ec3c738abc65d93de21/volumes"
Mar 18 08:57:48.246918 master-0 kubenswrapper[7620]: I0318 08:57:48.246877 7620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-4-master-0"]
Mar 18 08:57:48.324292 master-0 kubenswrapper[7620]: I0318 08:57:48.324197 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/68e6caf3-d855-483c-a37d-1010e522580e-var-lock\") pod \"installer-4-master-0\" (UID: \"68e6caf3-d855-483c-a37d-1010e522580e\") " pod="openshift-kube-scheduler/installer-4-master-0"
Mar 18 08:57:48.324292 master-0 kubenswrapper[7620]: I0318 08:57:48.324291 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/68e6caf3-d855-483c-a37d-1010e522580e-kubelet-dir\") pod \"installer-4-master-0\" (UID: \"68e6caf3-d855-483c-a37d-1010e522580e\") " pod="openshift-kube-scheduler/installer-4-master-0"
Mar 18 08:57:48.324663 master-0 kubenswrapper[7620]: I0318 08:57:48.324594 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/68e6caf3-d855-483c-a37d-1010e522580e-kube-api-access\") pod \"installer-4-master-0\" (UID: \"68e6caf3-d855-483c-a37d-1010e522580e\") " pod="openshift-kube-scheduler/installer-4-master-0"
Mar 18 08:57:48.348328 master-0 kubenswrapper[7620]: I0318 08:57:48.348262 7620 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-admission-controller-5dbbb8b86f-2cf64_159a26f5-3cfc-4db2-88e9-bff5d8a613fc/multus-admission-controller/0.log"
Mar 18 08:57:48.348563 master-0 kubenswrapper[7620]: I0318 08:57:48.348370 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-5dbbb8b86f-2cf64" event={"ID":"159a26f5-3cfc-4db2-88e9-bff5d8a613fc","Type":"ContainerDied","Data":"c7ad11be2f6e88d66c43f7a470d644f901fa421f8c0602a3500be8ddd4c38ee6"}
Mar 18 08:57:48.348563 master-0 kubenswrapper[7620]: I0318 08:57:48.348452 7620 scope.go:117] "RemoveContainer" containerID="864587bb9e1c050127a06a72af052047508fc19256a176a3926da44e091eec45"
Mar 18 08:57:48.348563 master-0 kubenswrapper[7620]: I0318 08:57:48.348389 7620 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-5dbbb8b86f-2cf64"
Mar 18 08:57:48.377151 master-0 kubenswrapper[7620]: I0318 08:57:48.367876 7620 scope.go:117] "RemoveContainer" containerID="5b25a8863a8b00bc7ec87b8ae1e2369b0a538d5870570f98557275e350c88a96"
Mar 18 08:57:48.378995 master-0 kubenswrapper[7620]: I0318 08:57:48.378893 7620 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-multus/multus-admission-controller-5dbbb8b86f-2cf64"]
Mar 18 08:57:48.389546 master-0 kubenswrapper[7620]: I0318 08:57:48.389434 7620 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-multus/multus-admission-controller-5dbbb8b86f-2cf64"]
Mar 18 08:57:48.426073 master-0 kubenswrapper[7620]: I0318 08:57:48.425991 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/68e6caf3-d855-483c-a37d-1010e522580e-kubelet-dir\") pod \"installer-4-master-0\" (UID: \"68e6caf3-d855-483c-a37d-1010e522580e\") " pod="openshift-kube-scheduler/installer-4-master-0"
Mar 18 08:57:48.426073 master-0 kubenswrapper[7620]: I0318 08:57:48.426083 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/68e6caf3-d855-483c-a37d-1010e522580e-kube-api-access\") pod \"installer-4-master-0\" (UID: \"68e6caf3-d855-483c-a37d-1010e522580e\") " pod="openshift-kube-scheduler/installer-4-master-0"
Mar 18 08:57:48.426381 master-0 kubenswrapper[7620]: I0318 08:57:48.426131 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/68e6caf3-d855-483c-a37d-1010e522580e-var-lock\") pod \"installer-4-master-0\" (UID: \"68e6caf3-d855-483c-a37d-1010e522580e\") " pod="openshift-kube-scheduler/installer-4-master-0"
Mar 18 08:57:48.426381 master-0 kubenswrapper[7620]: I0318 08:57:48.426203 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/68e6caf3-d855-483c-a37d-1010e522580e-var-lock\") pod \"installer-4-master-0\" (UID: \"68e6caf3-d855-483c-a37d-1010e522580e\") " pod="openshift-kube-scheduler/installer-4-master-0"
Mar 18 08:57:48.426381 master-0 kubenswrapper[7620]: I0318 08:57:48.426241 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/68e6caf3-d855-483c-a37d-1010e522580e-kubelet-dir\") pod \"installer-4-master-0\" (UID: \"68e6caf3-d855-483c-a37d-1010e522580e\") " pod="openshift-kube-scheduler/installer-4-master-0"
Mar 18 08:57:48.454954 master-0 kubenswrapper[7620]: I0318 08:57:48.453803 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/68e6caf3-d855-483c-a37d-1010e522580e-kube-api-access\") pod \"installer-4-master-0\" (UID: \"68e6caf3-d855-483c-a37d-1010e522580e\") " pod="openshift-kube-scheduler/installer-4-master-0"
Mar 18 08:57:48.555121 master-0 kubenswrapper[7620]: I0318 08:57:48.554965 7620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-4-master-0"
Mar 18 08:57:48.696895 master-0 kubenswrapper[7620]: I0318 08:57:48.695378 7620 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-3-master-0"
Mar 18 08:57:48.831534 master-0 kubenswrapper[7620]: I0318 08:57:48.831358 7620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/62a1fcda-ce2f-4d14-ab37-10a21e30fc30-kubelet-dir\") pod \"62a1fcda-ce2f-4d14-ab37-10a21e30fc30\" (UID: \"62a1fcda-ce2f-4d14-ab37-10a21e30fc30\") "
Mar 18 08:57:48.831534 master-0 kubenswrapper[7620]: I0318 08:57:48.831446 7620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/62a1fcda-ce2f-4d14-ab37-10a21e30fc30-var-lock\") pod \"62a1fcda-ce2f-4d14-ab37-10a21e30fc30\" (UID: \"62a1fcda-ce2f-4d14-ab37-10a21e30fc30\") "
Mar 18 08:57:48.831534 master-0 kubenswrapper[7620]: I0318 08:57:48.831440 7620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/62a1fcda-ce2f-4d14-ab37-10a21e30fc30-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "62a1fcda-ce2f-4d14-ab37-10a21e30fc30" (UID: "62a1fcda-ce2f-4d14-ab37-10a21e30fc30"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 18 08:57:48.831534 master-0 kubenswrapper[7620]: I0318 08:57:48.831475 7620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/62a1fcda-ce2f-4d14-ab37-10a21e30fc30-kube-api-access\") pod \"62a1fcda-ce2f-4d14-ab37-10a21e30fc30\" (UID: \"62a1fcda-ce2f-4d14-ab37-10a21e30fc30\") "
Mar 18 08:57:48.832034 master-0 kubenswrapper[7620]: I0318 08:57:48.831547 7620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/62a1fcda-ce2f-4d14-ab37-10a21e30fc30-var-lock" (OuterVolumeSpecName: "var-lock") pod "62a1fcda-ce2f-4d14-ab37-10a21e30fc30" (UID: "62a1fcda-ce2f-4d14-ab37-10a21e30fc30"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 18 08:57:48.832034 master-0 kubenswrapper[7620]: I0318 08:57:48.831710 7620 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/62a1fcda-ce2f-4d14-ab37-10a21e30fc30-var-lock\") on node \"master-0\" DevicePath \"\""
Mar 18 08:57:48.832034 master-0 kubenswrapper[7620]: I0318 08:57:48.831723 7620 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/62a1fcda-ce2f-4d14-ab37-10a21e30fc30-kubelet-dir\") on node \"master-0\" DevicePath \"\""
Mar 18 08:57:48.835930 master-0 kubenswrapper[7620]: I0318 08:57:48.835879 7620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/62a1fcda-ce2f-4d14-ab37-10a21e30fc30-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "62a1fcda-ce2f-4d14-ab37-10a21e30fc30" (UID: "62a1fcda-ce2f-4d14-ab37-10a21e30fc30"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 18 08:57:48.933625 master-0 kubenswrapper[7620]: I0318 08:57:48.933524 7620 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/62a1fcda-ce2f-4d14-ab37-10a21e30fc30-kube-api-access\") on node \"master-0\" DevicePath \"\""
Mar 18 08:57:48.989836 master-0 kubenswrapper[7620]: I0318 08:57:48.989781 7620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-4-master-0"]
Mar 18 08:57:49.362360 master-0 kubenswrapper[7620]: I0318 08:57:49.362283 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-3-master-0" event={"ID":"62a1fcda-ce2f-4d14-ab37-10a21e30fc30","Type":"ContainerDied","Data":"13017e08077deeefc07c7fe44f54a64a8b6b49173dc26b6f0df3026587c8b3ff"}
Mar 18 08:57:49.362360 master-0 kubenswrapper[7620]: I0318 08:57:49.362346 7620 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="13017e08077deeefc07c7fe44f54a64a8b6b49173dc26b6f0df3026587c8b3ff"
Mar 18 08:57:49.363086 master-0 kubenswrapper[7620]: I0318 08:57:49.362403 7620 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-3-master-0"
Mar 18 08:57:49.363729 master-0 kubenswrapper[7620]: I0318 08:57:49.363684 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-4-master-0" event={"ID":"68e6caf3-d855-483c-a37d-1010e522580e","Type":"ContainerStarted","Data":"7b806cd2a676889da85133d43a0662ba6109f449c0721d651ad0d9e9c85f3a3b"}
Mar 18 08:57:50.239014 master-0 kubenswrapper[7620]: I0318 08:57:50.238953 7620 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="159a26f5-3cfc-4db2-88e9-bff5d8a613fc" path="/var/lib/kubelet/pods/159a26f5-3cfc-4db2-88e9-bff5d8a613fc/volumes"
Mar 18 08:57:50.373485 master-0 kubenswrapper[7620]: I0318 08:57:50.373380 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-4-master-0" event={"ID":"68e6caf3-d855-483c-a37d-1010e522580e","Type":"ContainerStarted","Data":"f16aa514802c2b1e949ae0cfb51e228ea684c95d020ba4b520a18da905fe2dcf"}
Mar 18 08:57:50.411426 master-0 kubenswrapper[7620]: I0318 08:57:50.411220 7620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/installer-4-master-0" podStartSLOduration=2.411189098 podStartE2EDuration="2.411189098s" podCreationTimestamp="2026-03-18 08:57:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 08:57:50.402539932 +0000 UTC m=+534.397321754" watchObservedRunningTime="2026-03-18 08:57:50.411189098 +0000 UTC m=+534.405970890"
Mar 18 08:57:56.417284 master-0 kubenswrapper[7620]: I0318 08:57:56.417220 7620 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-scheduler/installer-4-master-0"]
Mar 18 08:57:56.419213 master-0 kubenswrapper[7620]: I0318 08:57:56.419087 7620 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-scheduler/installer-4-master-0" podUID="68e6caf3-d855-483c-a37d-1010e522580e" containerName="installer" containerID="cri-o://f16aa514802c2b1e949ae0cfb51e228ea684c95d020ba4b520a18da905fe2dcf" gracePeriod=30
Mar 18 08:57:56.770127 master-0 kubenswrapper[7620]: I0318 08:57:56.770055 7620 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-etcd/etcd-master-0"]
Mar 18 08:57:56.770571 master-0 kubenswrapper[7620]: I0318 08:57:56.770501 7620 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-etcd/etcd-master-0" podUID="24b4ed170d527099878cb5fdd508a2fb" containerName="etcdctl" containerID="cri-o://31e89bf6ae59ee7805717c8450d63270c0f1e3491a3c420217df22187017f458" gracePeriod=30
Mar 18 08:57:56.770710 master-0 kubenswrapper[7620]: I0318 08:57:56.770588 7620 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-etcd/etcd-master-0" podUID="24b4ed170d527099878cb5fdd508a2fb" containerName="etcd-rev" containerID="cri-o://eaf5314d4daedb04b0810419a85a92fa1d11aaa49f4468aef088b7bf78ab09b0" gracePeriod=30
Mar 18 08:57:56.770710 master-0 kubenswrapper[7620]: I0318 08:57:56.770669 7620 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-etcd/etcd-master-0" podUID="24b4ed170d527099878cb5fdd508a2fb" containerName="etcd" containerID="cri-o://0a3f0d54aecb3ed557f31b2d8cbb3a5d2841e1a3c7dd74488f821bea7649c2ba" gracePeriod=30
Mar 18 08:57:56.770889 master-0 kubenswrapper[7620]: I0318 08:57:56.770640 7620 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-etcd/etcd-master-0" podUID="24b4ed170d527099878cb5fdd508a2fb" containerName="etcd-metrics" containerID="cri-o://d55f32628d36fef2091dd025587240b8ca743b0ba115f45a8672152f872db9f7" gracePeriod=30
Mar 18 08:57:56.770889 master-0 kubenswrapper[7620]: I0318 08:57:56.770712 7620 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-etcd/etcd-master-0" podUID="24b4ed170d527099878cb5fdd508a2fb"
containerName="etcd-readyz" containerID="cri-o://d6e4d0848336920c4c2367e35c0f8a2ff7a531835a43cba2e2e819f3599cb82a" gracePeriod=30 Mar 18 08:57:56.779333 master-0 kubenswrapper[7620]: I0318 08:57:56.779282 7620 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-etcd/etcd-master-0"] Mar 18 08:57:56.779723 master-0 kubenswrapper[7620]: E0318 08:57:56.779693 7620 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24b4ed170d527099878cb5fdd508a2fb" containerName="etcd-ensure-env-vars" Mar 18 08:57:56.779723 master-0 kubenswrapper[7620]: I0318 08:57:56.779722 7620 state_mem.go:107] "Deleted CPUSet assignment" podUID="24b4ed170d527099878cb5fdd508a2fb" containerName="etcd-ensure-env-vars" Mar 18 08:57:56.779833 master-0 kubenswrapper[7620]: E0318 08:57:56.779748 7620 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24b4ed170d527099878cb5fdd508a2fb" containerName="etcdctl" Mar 18 08:57:56.779833 master-0 kubenswrapper[7620]: I0318 08:57:56.779761 7620 state_mem.go:107] "Deleted CPUSet assignment" podUID="24b4ed170d527099878cb5fdd508a2fb" containerName="etcdctl" Mar 18 08:57:56.779833 master-0 kubenswrapper[7620]: E0318 08:57:56.779776 7620 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24b4ed170d527099878cb5fdd508a2fb" containerName="etcd-readyz" Mar 18 08:57:56.779833 master-0 kubenswrapper[7620]: I0318 08:57:56.779789 7620 state_mem.go:107] "Deleted CPUSet assignment" podUID="24b4ed170d527099878cb5fdd508a2fb" containerName="etcd-readyz" Mar 18 08:57:56.779833 master-0 kubenswrapper[7620]: E0318 08:57:56.779808 7620 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62a1fcda-ce2f-4d14-ab37-10a21e30fc30" containerName="installer" Mar 18 08:57:56.779833 master-0 kubenswrapper[7620]: I0318 08:57:56.779820 7620 state_mem.go:107] "Deleted CPUSet assignment" podUID="62a1fcda-ce2f-4d14-ab37-10a21e30fc30" containerName="installer" Mar 18 08:57:56.780254 master-0 kubenswrapper[7620]: E0318 
08:57:56.779842 7620 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24b4ed170d527099878cb5fdd508a2fb" containerName="etcd" Mar 18 08:57:56.780254 master-0 kubenswrapper[7620]: I0318 08:57:56.779882 7620 state_mem.go:107] "Deleted CPUSet assignment" podUID="24b4ed170d527099878cb5fdd508a2fb" containerName="etcd" Mar 18 08:57:56.780254 master-0 kubenswrapper[7620]: E0318 08:57:56.779898 7620 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24b4ed170d527099878cb5fdd508a2fb" containerName="etcd-metrics" Mar 18 08:57:56.780254 master-0 kubenswrapper[7620]: I0318 08:57:56.779911 7620 state_mem.go:107] "Deleted CPUSet assignment" podUID="24b4ed170d527099878cb5fdd508a2fb" containerName="etcd-metrics" Mar 18 08:57:56.780254 master-0 kubenswrapper[7620]: E0318 08:57:56.779937 7620 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24b4ed170d527099878cb5fdd508a2fb" containerName="etcd-rev" Mar 18 08:57:56.780254 master-0 kubenswrapper[7620]: I0318 08:57:56.779948 7620 state_mem.go:107] "Deleted CPUSet assignment" podUID="24b4ed170d527099878cb5fdd508a2fb" containerName="etcd-rev" Mar 18 08:57:56.780254 master-0 kubenswrapper[7620]: E0318 08:57:56.779972 7620 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24b4ed170d527099878cb5fdd508a2fb" containerName="etcd-resources-copy" Mar 18 08:57:56.780254 master-0 kubenswrapper[7620]: I0318 08:57:56.779986 7620 state_mem.go:107] "Deleted CPUSet assignment" podUID="24b4ed170d527099878cb5fdd508a2fb" containerName="etcd-resources-copy" Mar 18 08:57:56.780254 master-0 kubenswrapper[7620]: E0318 08:57:56.780008 7620 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24b4ed170d527099878cb5fdd508a2fb" containerName="setup" Mar 18 08:57:56.780254 master-0 kubenswrapper[7620]: I0318 08:57:56.780019 7620 state_mem.go:107] "Deleted CPUSet assignment" podUID="24b4ed170d527099878cb5fdd508a2fb" containerName="setup" Mar 18 08:57:56.780254 master-0 
kubenswrapper[7620]: I0318 08:57:56.780210 7620 memory_manager.go:354] "RemoveStaleState removing state" podUID="24b4ed170d527099878cb5fdd508a2fb" containerName="etcdctl" Mar 18 08:57:56.780254 master-0 kubenswrapper[7620]: I0318 08:57:56.780236 7620 memory_manager.go:354] "RemoveStaleState removing state" podUID="24b4ed170d527099878cb5fdd508a2fb" containerName="etcd" Mar 18 08:57:56.780254 master-0 kubenswrapper[7620]: I0318 08:57:56.780255 7620 memory_manager.go:354] "RemoveStaleState removing state" podUID="24b4ed170d527099878cb5fdd508a2fb" containerName="etcd-readyz" Mar 18 08:57:56.795983 master-0 kubenswrapper[7620]: I0318 08:57:56.780277 7620 memory_manager.go:354] "RemoveStaleState removing state" podUID="62a1fcda-ce2f-4d14-ab37-10a21e30fc30" containerName="installer" Mar 18 08:57:56.795983 master-0 kubenswrapper[7620]: I0318 08:57:56.780294 7620 memory_manager.go:354] "RemoveStaleState removing state" podUID="24b4ed170d527099878cb5fdd508a2fb" containerName="etcd-metrics" Mar 18 08:57:56.795983 master-0 kubenswrapper[7620]: I0318 08:57:56.780311 7620 memory_manager.go:354] "RemoveStaleState removing state" podUID="24b4ed170d527099878cb5fdd508a2fb" containerName="etcd-rev" Mar 18 08:57:56.882243 master-0 kubenswrapper[7620]: I0318 08:57:56.882124 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-resource-dir\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0" Mar 18 08:57:56.882480 master-0 kubenswrapper[7620]: I0318 08:57:56.882287 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-usr-local-bin\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0" Mar 18 08:57:56.882480 master-0 
kubenswrapper[7620]: I0318 08:57:56.882393 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-cert-dir\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0" Mar 18 08:57:56.882480 master-0 kubenswrapper[7620]: I0318 08:57:56.882446 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-data-dir\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0" Mar 18 08:57:56.882632 master-0 kubenswrapper[7620]: I0318 08:57:56.882481 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-log-dir\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0" Mar 18 08:57:56.882840 master-0 kubenswrapper[7620]: I0318 08:57:56.882796 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-static-pod-dir\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0" Mar 18 08:57:56.984338 master-0 kubenswrapper[7620]: I0318 08:57:56.984220 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-usr-local-bin\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0" Mar 18 08:57:56.984338 master-0 kubenswrapper[7620]: I0318 08:57:56.984328 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"cert-dir\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-cert-dir\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0" Mar 18 08:57:56.985162 master-0 kubenswrapper[7620]: I0318 08:57:56.984367 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-data-dir\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0" Mar 18 08:57:56.985162 master-0 kubenswrapper[7620]: I0318 08:57:56.984397 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-log-dir\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0" Mar 18 08:57:56.985162 master-0 kubenswrapper[7620]: I0318 08:57:56.984511 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-static-pod-dir\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0" Mar 18 08:57:56.985162 master-0 kubenswrapper[7620]: I0318 08:57:56.984567 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-resource-dir\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0" Mar 18 08:57:56.985162 master-0 kubenswrapper[7620]: I0318 08:57:56.984692 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-resource-dir\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0" 
Mar 18 08:57:56.985162 master-0 kubenswrapper[7620]: I0318 08:57:56.984756 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-usr-local-bin\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0" Mar 18 08:57:56.985162 master-0 kubenswrapper[7620]: I0318 08:57:56.984798 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-cert-dir\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0" Mar 18 08:57:56.985162 master-0 kubenswrapper[7620]: I0318 08:57:56.984839 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-data-dir\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0" Mar 18 08:57:56.985162 master-0 kubenswrapper[7620]: I0318 08:57:56.984918 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-log-dir\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0" Mar 18 08:57:56.985162 master-0 kubenswrapper[7620]: I0318 08:57:56.984960 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-static-pod-dir\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0" Mar 18 08:57:57.233380 master-0 kubenswrapper[7620]: I0318 08:57:57.233276 7620 patch_prober.go:28] interesting pod/etcd-master-0 container/etcd namespace/openshift-etcd: Readiness probe status=failure output="Get 
\"https://192.168.32.10:9980/readyz\": dial tcp 192.168.32.10:9980: connect: connection refused" start-of-body= Mar 18 08:57:57.233633 master-0 kubenswrapper[7620]: I0318 08:57:57.233385 7620 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-etcd/etcd-master-0" podUID="24b4ed170d527099878cb5fdd508a2fb" containerName="etcd" probeResult="failure" output="Get \"https://192.168.32.10:9980/readyz\": dial tcp 192.168.32.10:9980: connect: connection refused" Mar 18 08:57:57.442751 master-0 kubenswrapper[7620]: I0318 08:57:57.442666 7620 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_24b4ed170d527099878cb5fdd508a2fb/etcd-rev/0.log" Mar 18 08:57:57.445096 master-0 kubenswrapper[7620]: I0318 08:57:57.445043 7620 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_24b4ed170d527099878cb5fdd508a2fb/etcd-metrics/0.log" Mar 18 08:57:57.448252 master-0 kubenswrapper[7620]: I0318 08:57:57.448184 7620 generic.go:334] "Generic (PLEG): container finished" podID="24b4ed170d527099878cb5fdd508a2fb" containerID="eaf5314d4daedb04b0810419a85a92fa1d11aaa49f4468aef088b7bf78ab09b0" exitCode=2 Mar 18 08:57:57.448252 master-0 kubenswrapper[7620]: I0318 08:57:57.448239 7620 generic.go:334] "Generic (PLEG): container finished" podID="24b4ed170d527099878cb5fdd508a2fb" containerID="d6e4d0848336920c4c2367e35c0f8a2ff7a531835a43cba2e2e819f3599cb82a" exitCode=0 Mar 18 08:57:57.448407 master-0 kubenswrapper[7620]: I0318 08:57:57.448272 7620 generic.go:334] "Generic (PLEG): container finished" podID="24b4ed170d527099878cb5fdd508a2fb" containerID="d55f32628d36fef2091dd025587240b8ca743b0ba115f45a8672152f872db9f7" exitCode=2 Mar 18 08:58:02.223278 master-0 kubenswrapper[7620]: I0318 08:58:02.223229 7620 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 08:58:02.248322 master-0 kubenswrapper[7620]: I0318 08:58:02.248265 7620 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="c0c3df70-7d0a-4a46-a625-20553ab284d0" Mar 18 08:58:02.248322 master-0 kubenswrapper[7620]: I0318 08:58:02.248312 7620 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="c0c3df70-7d0a-4a46-a625-20553ab284d0" Mar 18 08:58:11.574792 master-0 kubenswrapper[7620]: I0318 08:58:11.574702 7620 generic.go:334] "Generic (PLEG): container finished" podID="005a0b4c-8e2d-4483-98e9-55badf7099c5" containerID="83c1b5b71c6b991cce706c7d71cc023db485e610df2dae94288a380e76fcfca1" exitCode=0 Mar 18 08:58:11.575450 master-0 kubenswrapper[7620]: I0318 08:58:11.574800 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-2-master-0" event={"ID":"005a0b4c-8e2d-4483-98e9-55badf7099c5","Type":"ContainerDied","Data":"83c1b5b71c6b991cce706c7d71cc023db485e610df2dae94288a380e76fcfca1"} Mar 18 08:58:12.975733 master-0 kubenswrapper[7620]: I0318 08:58:12.975653 7620 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/installer-2-master-0" Mar 18 08:58:13.073171 master-0 kubenswrapper[7620]: I0318 08:58:13.073059 7620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/005a0b4c-8e2d-4483-98e9-55badf7099c5-kubelet-dir\") pod \"005a0b4c-8e2d-4483-98e9-55badf7099c5\" (UID: \"005a0b4c-8e2d-4483-98e9-55badf7099c5\") " Mar 18 08:58:13.073406 master-0 kubenswrapper[7620]: I0318 08:58:13.073222 7620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/005a0b4c-8e2d-4483-98e9-55badf7099c5-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "005a0b4c-8e2d-4483-98e9-55badf7099c5" (UID: "005a0b4c-8e2d-4483-98e9-55badf7099c5"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 08:58:13.073559 master-0 kubenswrapper[7620]: I0318 08:58:13.073458 7620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/005a0b4c-8e2d-4483-98e9-55badf7099c5-kube-api-access\") pod \"005a0b4c-8e2d-4483-98e9-55badf7099c5\" (UID: \"005a0b4c-8e2d-4483-98e9-55badf7099c5\") " Mar 18 08:58:13.073834 master-0 kubenswrapper[7620]: I0318 08:58:13.073793 7620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/005a0b4c-8e2d-4483-98e9-55badf7099c5-var-lock\") pod \"005a0b4c-8e2d-4483-98e9-55badf7099c5\" (UID: \"005a0b4c-8e2d-4483-98e9-55badf7099c5\") " Mar 18 08:58:13.073989 master-0 kubenswrapper[7620]: I0318 08:58:13.073916 7620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/005a0b4c-8e2d-4483-98e9-55badf7099c5-var-lock" (OuterVolumeSpecName: "var-lock") pod "005a0b4c-8e2d-4483-98e9-55badf7099c5" (UID: "005a0b4c-8e2d-4483-98e9-55badf7099c5"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 08:58:13.074394 master-0 kubenswrapper[7620]: I0318 08:58:13.074339 7620 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/005a0b4c-8e2d-4483-98e9-55badf7099c5-var-lock\") on node \"master-0\" DevicePath \"\"" Mar 18 08:58:13.074394 master-0 kubenswrapper[7620]: I0318 08:58:13.074384 7620 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/005a0b4c-8e2d-4483-98e9-55badf7099c5-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Mar 18 08:58:13.077481 master-0 kubenswrapper[7620]: I0318 08:58:13.077420 7620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/005a0b4c-8e2d-4483-98e9-55badf7099c5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "005a0b4c-8e2d-4483-98e9-55badf7099c5" (UID: "005a0b4c-8e2d-4483-98e9-55badf7099c5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 08:58:13.176501 master-0 kubenswrapper[7620]: I0318 08:58:13.176326 7620 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/005a0b4c-8e2d-4483-98e9-55badf7099c5-kube-api-access\") on node \"master-0\" DevicePath \"\"" Mar 18 08:58:13.595507 master-0 kubenswrapper[7620]: I0318 08:58:13.595408 7620 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/installer-2-master-0" Mar 18 08:58:13.598232 master-0 kubenswrapper[7620]: I0318 08:58:13.598046 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-2-master-0" event={"ID":"005a0b4c-8e2d-4483-98e9-55badf7099c5","Type":"ContainerDied","Data":"6ceebd5fc2e20325f9aee4b93a902553c4a60d97de2a44d71188013bb71ab91c"} Mar 18 08:58:13.598232 master-0 kubenswrapper[7620]: I0318 08:58:13.598177 7620 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6ceebd5fc2e20325f9aee4b93a902553c4a60d97de2a44d71188013bb71ab91c" Mar 18 08:58:13.601731 master-0 kubenswrapper[7620]: I0318 08:58:13.601671 7620 generic.go:334] "Generic (PLEG): container finished" podID="c83737980b9ee109184b1d78e942cf36" containerID="db516bae26a48292c2104c2ecfafa39292fbbc58aaf43ed786161ac8d6801cb8" exitCode=1 Mar 18 08:58:13.601731 master-0 kubenswrapper[7620]: I0318 08:58:13.601721 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-scheduler-master-0" event={"ID":"c83737980b9ee109184b1d78e942cf36","Type":"ContainerDied","Data":"db516bae26a48292c2104c2ecfafa39292fbbc58aaf43ed786161ac8d6801cb8"} Mar 18 08:58:13.602026 master-0 kubenswrapper[7620]: I0318 08:58:13.601775 7620 scope.go:117] "RemoveContainer" containerID="56c1813fc6a99c6be68188fda55c9aa95683f9493caa43861ba04693d0ba89d2" Mar 18 08:58:13.603097 master-0 kubenswrapper[7620]: I0318 08:58:13.603022 7620 scope.go:117] "RemoveContainer" containerID="db516bae26a48292c2104c2ecfafa39292fbbc58aaf43ed786161ac8d6801cb8" Mar 18 08:58:13.604015 master-0 kubenswrapper[7620]: E0318 08:58:13.603925 7620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-scheduler pod=bootstrap-kube-scheduler-master-0_kube-system(c83737980b9ee109184b1d78e942cf36)\"" pod="kube-system/bootstrap-kube-scheduler-master-0" 
podUID="c83737980b9ee109184b1d78e942cf36" Mar 18 08:58:14.326814 master-0 kubenswrapper[7620]: E0318 08:58:14.326590 7620 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 18 08:58:20.661107 master-0 kubenswrapper[7620]: I0318 08:58:20.661022 7620 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-4-master-0_68e6caf3-d855-483c-a37d-1010e522580e/installer/0.log" Mar 18 08:58:20.662021 master-0 kubenswrapper[7620]: I0318 08:58:20.661113 7620 generic.go:334] "Generic (PLEG): container finished" podID="68e6caf3-d855-483c-a37d-1010e522580e" containerID="f16aa514802c2b1e949ae0cfb51e228ea684c95d020ba4b520a18da905fe2dcf" exitCode=1 Mar 18 08:58:20.662021 master-0 kubenswrapper[7620]: I0318 08:58:20.661179 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-4-master-0" event={"ID":"68e6caf3-d855-483c-a37d-1010e522580e","Type":"ContainerDied","Data":"f16aa514802c2b1e949ae0cfb51e228ea684c95d020ba4b520a18da905fe2dcf"} Mar 18 08:58:20.662021 master-0 kubenswrapper[7620]: I0318 08:58:20.661256 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-4-master-0" event={"ID":"68e6caf3-d855-483c-a37d-1010e522580e","Type":"ContainerDied","Data":"7b806cd2a676889da85133d43a0662ba6109f449c0721d651ad0d9e9c85f3a3b"} Mar 18 08:58:20.662021 master-0 kubenswrapper[7620]: I0318 08:58:20.661279 7620 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7b806cd2a676889da85133d43a0662ba6109f449c0721d651ad0d9e9c85f3a3b" Mar 18 08:58:20.693756 master-0 kubenswrapper[7620]: I0318 08:58:20.693676 7620 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-scheduler_installer-4-master-0_68e6caf3-d855-483c-a37d-1010e522580e/installer/0.log" Mar 18 08:58:20.694014 master-0 kubenswrapper[7620]: I0318 08:58:20.693787 7620 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-4-master-0" Mar 18 08:58:20.811778 master-0 kubenswrapper[7620]: I0318 08:58:20.811696 7620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/68e6caf3-d855-483c-a37d-1010e522580e-kube-api-access\") pod \"68e6caf3-d855-483c-a37d-1010e522580e\" (UID: \"68e6caf3-d855-483c-a37d-1010e522580e\") " Mar 18 08:58:20.812182 master-0 kubenswrapper[7620]: I0318 08:58:20.811963 7620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/68e6caf3-d855-483c-a37d-1010e522580e-kubelet-dir\") pod \"68e6caf3-d855-483c-a37d-1010e522580e\" (UID: \"68e6caf3-d855-483c-a37d-1010e522580e\") " Mar 18 08:58:20.812182 master-0 kubenswrapper[7620]: I0318 08:58:20.812069 7620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/68e6caf3-d855-483c-a37d-1010e522580e-var-lock\") pod \"68e6caf3-d855-483c-a37d-1010e522580e\" (UID: \"68e6caf3-d855-483c-a37d-1010e522580e\") " Mar 18 08:58:20.812182 master-0 kubenswrapper[7620]: I0318 08:58:20.812154 7620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/68e6caf3-d855-483c-a37d-1010e522580e-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "68e6caf3-d855-483c-a37d-1010e522580e" (UID: "68e6caf3-d855-483c-a37d-1010e522580e"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 18 08:58:20.812403 master-0 kubenswrapper[7620]: I0318 08:58:20.812317 7620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/68e6caf3-d855-483c-a37d-1010e522580e-var-lock" (OuterVolumeSpecName: "var-lock") pod "68e6caf3-d855-483c-a37d-1010e522580e" (UID: "68e6caf3-d855-483c-a37d-1010e522580e"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 18 08:58:20.812617 master-0 kubenswrapper[7620]: I0318 08:58:20.812560 7620 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/68e6caf3-d855-483c-a37d-1010e522580e-kubelet-dir\") on node \"master-0\" DevicePath \"\""
Mar 18 08:58:20.812617 master-0 kubenswrapper[7620]: I0318 08:58:20.812601 7620 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/68e6caf3-d855-483c-a37d-1010e522580e-var-lock\") on node \"master-0\" DevicePath \"\""
Mar 18 08:58:20.816600 master-0 kubenswrapper[7620]: I0318 08:58:20.816523 7620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/68e6caf3-d855-483c-a37d-1010e522580e-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "68e6caf3-d855-483c-a37d-1010e522580e" (UID: "68e6caf3-d855-483c-a37d-1010e522580e"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 18 08:58:20.914064 master-0 kubenswrapper[7620]: I0318 08:58:20.913921 7620 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/68e6caf3-d855-483c-a37d-1010e522580e-kube-api-access\") on node \"master-0\" DevicePath \"\""
Mar 18 08:58:21.671595 master-0 kubenswrapper[7620]: I0318 08:58:21.671507 7620 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-4-master-0"
Mar 18 08:58:22.681762 master-0 kubenswrapper[7620]: I0318 08:58:22.681707 7620 generic.go:334] "Generic (PLEG): container finished" podID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerID="504f021a6115c5b248227cad9be5358b605b45e875884611b5163b1993a0ac66" exitCode=0
Mar 18 08:58:22.682220 master-0 kubenswrapper[7620]: I0318 08:58:22.681762 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" event={"ID":"ad4cf9b2-4e66-4921-a30c-7b659bff06ab","Type":"ContainerDied","Data":"504f021a6115c5b248227cad9be5358b605b45e875884611b5163b1993a0ac66"}
Mar 18 08:58:22.682220 master-0 kubenswrapper[7620]: I0318 08:58:22.681813 7620 scope.go:117] "RemoveContainer" containerID="aebf5a50f9283c726e790a6d4456896088c910f33d1ce0e919e46d41b14e21ad"
Mar 18 08:58:23.692876 master-0 kubenswrapper[7620]: I0318 08:58:23.692769 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" event={"ID":"ad4cf9b2-4e66-4921-a30c-7b659bff06ab","Type":"ContainerStarted","Data":"4a7dbd9949adb4dd8d63e9de3470c7186002c65ba78caccdd813c4fb43556282"}
Mar 18 08:58:24.327930 master-0 kubenswrapper[7620]: E0318 08:58:24.327776 7620 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Mar 18 08:58:24.479796 master-0 kubenswrapper[7620]: I0318 08:58:24.479672 7620 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd"
Mar 18 08:58:24.484059 master-0 kubenswrapper[7620]: I0318 08:58:24.483979 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 08:58:24.484059 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld
Mar 18 08:58:24.484059 master-0 kubenswrapper[7620]: [+]process-running ok
Mar 18 08:58:24.484059 master-0 kubenswrapper[7620]: healthz check failed
Mar 18 08:58:24.484059 master-0 kubenswrapper[7620]: I0318 08:58:24.484055 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 08:58:25.482608 master-0 kubenswrapper[7620]: I0318 08:58:25.482452 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 08:58:25.482608 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld
Mar 18 08:58:25.482608 master-0 kubenswrapper[7620]: [+]process-running ok
Mar 18 08:58:25.482608 master-0 kubenswrapper[7620]: healthz check failed
Mar 18 08:58:25.483971 master-0 kubenswrapper[7620]: I0318 08:58:25.482609 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 08:58:26.479131 master-0 kubenswrapper[7620]: I0318 08:58:26.479038 7620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd"
Mar 18 08:58:26.483439 master-0 kubenswrapper[7620]: I0318 08:58:26.483381 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 08:58:26.483439 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld
Mar 18 08:58:26.483439 master-0 kubenswrapper[7620]: [+]process-running ok
Mar 18 08:58:26.483439 master-0 kubenswrapper[7620]: healthz check failed
Mar 18 08:58:26.484168 master-0 kubenswrapper[7620]: I0318 08:58:26.483457 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 08:58:27.225642 master-0 kubenswrapper[7620]: I0318 08:58:27.225420 7620 scope.go:117] "RemoveContainer" containerID="db516bae26a48292c2104c2ecfafa39292fbbc58aaf43ed786161ac8d6801cb8"
Mar 18 08:58:27.387129 master-0 kubenswrapper[7620]: I0318 08:58:27.387054 7620 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_24b4ed170d527099878cb5fdd508a2fb/etcd-rev/0.log"
Mar 18 08:58:27.388555 master-0 kubenswrapper[7620]: I0318 08:58:27.388493 7620 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_24b4ed170d527099878cb5fdd508a2fb/etcd-metrics/0.log"
Mar 18 08:58:27.389776 master-0 kubenswrapper[7620]: I0318 08:58:27.389710 7620 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_24b4ed170d527099878cb5fdd508a2fb/etcd/0.log"
Mar 18 08:58:27.390340 master-0 kubenswrapper[7620]: I0318 08:58:27.390297 7620 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_24b4ed170d527099878cb5fdd508a2fb/etcdctl/0.log"
Mar 18 08:58:27.392278 master-0 kubenswrapper[7620]: I0318 08:58:27.392224 7620 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-master-0"
Mar 18 08:58:27.434436 master-0 kubenswrapper[7620]: I0318 08:58:27.434317 7620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-data-dir\") pod \"24b4ed170d527099878cb5fdd508a2fb\" (UID: \"24b4ed170d527099878cb5fdd508a2fb\") "
Mar 18 08:58:27.434436 master-0 kubenswrapper[7620]: I0318 08:58:27.434393 7620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-log-dir\") pod \"24b4ed170d527099878cb5fdd508a2fb\" (UID: \"24b4ed170d527099878cb5fdd508a2fb\") "
Mar 18 08:58:27.434436 master-0 kubenswrapper[7620]: I0318 08:58:27.434412 7620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-data-dir" (OuterVolumeSpecName: "data-dir") pod "24b4ed170d527099878cb5fdd508a2fb" (UID: "24b4ed170d527099878cb5fdd508a2fb"). InnerVolumeSpecName "data-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 18 08:58:27.434436 master-0 kubenswrapper[7620]: I0318 08:58:27.434481 7620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-resource-dir\") pod \"24b4ed170d527099878cb5fdd508a2fb\" (UID: \"24b4ed170d527099878cb5fdd508a2fb\") "
Mar 18 08:58:27.435202 master-0 kubenswrapper[7620]: I0318 08:58:27.434507 7620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-log-dir" (OuterVolumeSpecName: "log-dir") pod "24b4ed170d527099878cb5fdd508a2fb" (UID: "24b4ed170d527099878cb5fdd508a2fb"). InnerVolumeSpecName "log-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 18 08:58:27.435202 master-0 kubenswrapper[7620]: I0318 08:58:27.434517 7620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-static-pod-dir\") pod \"24b4ed170d527099878cb5fdd508a2fb\" (UID: \"24b4ed170d527099878cb5fdd508a2fb\") "
Mar 18 08:58:27.435202 master-0 kubenswrapper[7620]: I0318 08:58:27.434554 7620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-static-pod-dir" (OuterVolumeSpecName: "static-pod-dir") pod "24b4ed170d527099878cb5fdd508a2fb" (UID: "24b4ed170d527099878cb5fdd508a2fb"). InnerVolumeSpecName "static-pod-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 18 08:58:27.435202 master-0 kubenswrapper[7620]: I0318 08:58:27.434587 7620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-cert-dir\") pod \"24b4ed170d527099878cb5fdd508a2fb\" (UID: \"24b4ed170d527099878cb5fdd508a2fb\") "
Mar 18 08:58:27.435202 master-0 kubenswrapper[7620]: I0318 08:58:27.434677 7620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-usr-local-bin\") pod \"24b4ed170d527099878cb5fdd508a2fb\" (UID: \"24b4ed170d527099878cb5fdd508a2fb\") "
Mar 18 08:58:27.435202 master-0 kubenswrapper[7620]: I0318 08:58:27.434592 7620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "24b4ed170d527099878cb5fdd508a2fb" (UID: "24b4ed170d527099878cb5fdd508a2fb"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 18 08:58:27.435202 master-0 kubenswrapper[7620]: I0318 08:58:27.434609 7620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "24b4ed170d527099878cb5fdd508a2fb" (UID: "24b4ed170d527099878cb5fdd508a2fb"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 18 08:58:27.435202 master-0 kubenswrapper[7620]: I0318 08:58:27.434790 7620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-usr-local-bin" (OuterVolumeSpecName: "usr-local-bin") pod "24b4ed170d527099878cb5fdd508a2fb" (UID: "24b4ed170d527099878cb5fdd508a2fb"). InnerVolumeSpecName "usr-local-bin". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 18 08:58:27.435761 master-0 kubenswrapper[7620]: I0318 08:58:27.435300 7620 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-resource-dir\") on node \"master-0\" DevicePath \"\""
Mar 18 08:58:27.435761 master-0 kubenswrapper[7620]: I0318 08:58:27.435363 7620 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-cert-dir\") on node \"master-0\" DevicePath \"\""
Mar 18 08:58:27.435761 master-0 kubenswrapper[7620]: I0318 08:58:27.435388 7620 reconciler_common.go:293] "Volume detached for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-static-pod-dir\") on node \"master-0\" DevicePath \"\""
Mar 18 08:58:27.435761 master-0 kubenswrapper[7620]: I0318 08:58:27.435407 7620 reconciler_common.go:293] "Volume detached for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-usr-local-bin\") on node \"master-0\" DevicePath \"\""
Mar 18 08:58:27.435761 master-0 kubenswrapper[7620]: I0318 08:58:27.435426 7620 reconciler_common.go:293] "Volume detached for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-data-dir\") on node \"master-0\" DevicePath \"\""
Mar 18 08:58:27.435761 master-0 kubenswrapper[7620]: I0318 08:58:27.435444 7620 reconciler_common.go:293] "Volume detached for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-log-dir\") on node \"master-0\" DevicePath \"\""
Mar 18 08:58:27.482415 master-0 kubenswrapper[7620]: I0318 08:58:27.482241 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 08:58:27.482415 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld
Mar 18 08:58:27.482415 master-0 kubenswrapper[7620]: [+]process-running ok
Mar 18 08:58:27.482415 master-0 kubenswrapper[7620]: healthz check failed
Mar 18 08:58:27.482415 master-0 kubenswrapper[7620]: I0318 08:58:27.482333 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 08:58:27.728797 master-0 kubenswrapper[7620]: I0318 08:58:27.728677 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-scheduler-master-0" event={"ID":"c83737980b9ee109184b1d78e942cf36","Type":"ContainerStarted","Data":"965c96bceffdf0d2dfe6811ad54d4d08d2afc86948c8800b709c2385cc93d84e"}
Mar 18 08:58:27.732171 master-0 kubenswrapper[7620]: I0318 08:58:27.732115 7620 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_24b4ed170d527099878cb5fdd508a2fb/etcd-rev/0.log"
Mar 18 08:58:27.733937 master-0 kubenswrapper[7620]: I0318 08:58:27.733844 7620 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_24b4ed170d527099878cb5fdd508a2fb/etcd-metrics/0.log"
Mar 18 08:58:27.735092 master-0 kubenswrapper[7620]: I0318 08:58:27.735025 7620 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_24b4ed170d527099878cb5fdd508a2fb/etcd/0.log"
Mar 18 08:58:27.735718 master-0 kubenswrapper[7620]: I0318 08:58:27.735688 7620 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_24b4ed170d527099878cb5fdd508a2fb/etcdctl/0.log"
Mar 18 08:58:27.737908 master-0 kubenswrapper[7620]: I0318 08:58:27.737569 7620 generic.go:334] "Generic (PLEG): container finished" podID="24b4ed170d527099878cb5fdd508a2fb" containerID="0a3f0d54aecb3ed557f31b2d8cbb3a5d2841e1a3c7dd74488f821bea7649c2ba" exitCode=137
Mar 18 08:58:27.737908 master-0 kubenswrapper[7620]: I0318 08:58:27.737627 7620 generic.go:334] "Generic (PLEG): container finished" podID="24b4ed170d527099878cb5fdd508a2fb" containerID="31e89bf6ae59ee7805717c8450d63270c0f1e3491a3c420217df22187017f458" exitCode=137
Mar 18 08:58:27.737908 master-0 kubenswrapper[7620]: I0318 08:58:27.737682 7620 scope.go:117] "RemoveContainer" containerID="eaf5314d4daedb04b0810419a85a92fa1d11aaa49f4468aef088b7bf78ab09b0"
Mar 18 08:58:27.737908 master-0 kubenswrapper[7620]: I0318 08:58:27.737753 7620 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-master-0"
Mar 18 08:58:27.761973 master-0 kubenswrapper[7620]: I0318 08:58:27.761903 7620 scope.go:117] "RemoveContainer" containerID="d6e4d0848336920c4c2367e35c0f8a2ff7a531835a43cba2e2e819f3599cb82a"
Mar 18 08:58:27.789891 master-0 kubenswrapper[7620]: I0318 08:58:27.789207 7620 scope.go:117] "RemoveContainer" containerID="d55f32628d36fef2091dd025587240b8ca743b0ba115f45a8672152f872db9f7"
Mar 18 08:58:27.813884 master-0 kubenswrapper[7620]: I0318 08:58:27.813782 7620 scope.go:117] "RemoveContainer" containerID="0a3f0d54aecb3ed557f31b2d8cbb3a5d2841e1a3c7dd74488f821bea7649c2ba"
Mar 18 08:58:27.833289 master-0 kubenswrapper[7620]: I0318 08:58:27.833220 7620 scope.go:117] "RemoveContainer" containerID="31e89bf6ae59ee7805717c8450d63270c0f1e3491a3c420217df22187017f458"
Mar 18 08:58:27.854283 master-0 kubenswrapper[7620]: I0318 08:58:27.854213 7620 scope.go:117] "RemoveContainer" containerID="8fcf2dc21bde9860c2fe58020881a99530b56c8c984671257fbc4e8d33dd7119"
Mar 18 08:58:27.880784 master-0 kubenswrapper[7620]: I0318 08:58:27.880714 7620 scope.go:117] "RemoveContainer" containerID="037e687423ee2fc5069c12833ee3a78d87a572548a03166d976a62f7a2c74f3d"
Mar 18 08:58:27.904995 master-0 kubenswrapper[7620]: I0318 08:58:27.904273 7620 scope.go:117] "RemoveContainer" containerID="a763fdcf6c7d6b7c7725834ac9d564c543461cafe37f0ea82574ad101ee4eb85"
Mar 18 08:58:27.944076 master-0 kubenswrapper[7620]: I0318 08:58:27.943976 7620 scope.go:117] "RemoveContainer" containerID="eaf5314d4daedb04b0810419a85a92fa1d11aaa49f4468aef088b7bf78ab09b0"
Mar 18 08:58:27.946010 master-0 kubenswrapper[7620]: E0318 08:58:27.945962 7620 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eaf5314d4daedb04b0810419a85a92fa1d11aaa49f4468aef088b7bf78ab09b0\": container with ID starting with eaf5314d4daedb04b0810419a85a92fa1d11aaa49f4468aef088b7bf78ab09b0 not found: ID does not exist" containerID="eaf5314d4daedb04b0810419a85a92fa1d11aaa49f4468aef088b7bf78ab09b0"
Mar 18 08:58:27.946312 master-0 kubenswrapper[7620]: I0318 08:58:27.946241 7620 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eaf5314d4daedb04b0810419a85a92fa1d11aaa49f4468aef088b7bf78ab09b0"} err="failed to get container status \"eaf5314d4daedb04b0810419a85a92fa1d11aaa49f4468aef088b7bf78ab09b0\": rpc error: code = NotFound desc = could not find container \"eaf5314d4daedb04b0810419a85a92fa1d11aaa49f4468aef088b7bf78ab09b0\": container with ID starting with eaf5314d4daedb04b0810419a85a92fa1d11aaa49f4468aef088b7bf78ab09b0 not found: ID does not exist"
Mar 18 08:58:27.946523 master-0 kubenswrapper[7620]: I0318 08:58:27.946483 7620 scope.go:117] "RemoveContainer" containerID="d6e4d0848336920c4c2367e35c0f8a2ff7a531835a43cba2e2e819f3599cb82a"
Mar 18 08:58:27.947590 master-0 kubenswrapper[7620]: E0318 08:58:27.947481 7620 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d6e4d0848336920c4c2367e35c0f8a2ff7a531835a43cba2e2e819f3599cb82a\": container with ID starting with d6e4d0848336920c4c2367e35c0f8a2ff7a531835a43cba2e2e819f3599cb82a not found: ID does not exist" containerID="d6e4d0848336920c4c2367e35c0f8a2ff7a531835a43cba2e2e819f3599cb82a"
Mar 18 08:58:27.947760 master-0 kubenswrapper[7620]: I0318 08:58:27.947606 7620 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d6e4d0848336920c4c2367e35c0f8a2ff7a531835a43cba2e2e819f3599cb82a"} err="failed to get container status \"d6e4d0848336920c4c2367e35c0f8a2ff7a531835a43cba2e2e819f3599cb82a\": rpc error: code = NotFound desc = could not find container \"d6e4d0848336920c4c2367e35c0f8a2ff7a531835a43cba2e2e819f3599cb82a\": container with ID starting with d6e4d0848336920c4c2367e35c0f8a2ff7a531835a43cba2e2e819f3599cb82a not found: ID does not exist"
Mar 18 08:58:27.947760 master-0 kubenswrapper[7620]: I0318 08:58:27.947665 7620 scope.go:117] "RemoveContainer" containerID="d55f32628d36fef2091dd025587240b8ca743b0ba115f45a8672152f872db9f7"
Mar 18 08:58:27.948387 master-0 kubenswrapper[7620]: E0318 08:58:27.948322 7620 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d55f32628d36fef2091dd025587240b8ca743b0ba115f45a8672152f872db9f7\": container with ID starting with d55f32628d36fef2091dd025587240b8ca743b0ba115f45a8672152f872db9f7 not found: ID does not exist" containerID="d55f32628d36fef2091dd025587240b8ca743b0ba115f45a8672152f872db9f7"
Mar 18 08:58:27.948606 master-0 kubenswrapper[7620]: I0318 08:58:27.948564 7620 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d55f32628d36fef2091dd025587240b8ca743b0ba115f45a8672152f872db9f7"} err="failed to get container status \"d55f32628d36fef2091dd025587240b8ca743b0ba115f45a8672152f872db9f7\": rpc error: code = NotFound desc = could not find container \"d55f32628d36fef2091dd025587240b8ca743b0ba115f45a8672152f872db9f7\": container with ID starting with d55f32628d36fef2091dd025587240b8ca743b0ba115f45a8672152f872db9f7 not found: ID does not exist"
Mar 18 08:58:27.948776 master-0 kubenswrapper[7620]: I0318 08:58:27.948749 7620 scope.go:117] "RemoveContainer" containerID="0a3f0d54aecb3ed557f31b2d8cbb3a5d2841e1a3c7dd74488f821bea7649c2ba"
Mar 18 08:58:27.949667 master-0 kubenswrapper[7620]: E0318 08:58:27.949618 7620 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0a3f0d54aecb3ed557f31b2d8cbb3a5d2841e1a3c7dd74488f821bea7649c2ba\": container with ID starting with 0a3f0d54aecb3ed557f31b2d8cbb3a5d2841e1a3c7dd74488f821bea7649c2ba not found: ID does not exist" containerID="0a3f0d54aecb3ed557f31b2d8cbb3a5d2841e1a3c7dd74488f821bea7649c2ba"
Mar 18 08:58:27.949884 master-0 kubenswrapper[7620]: I0318 08:58:27.949671 7620 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0a3f0d54aecb3ed557f31b2d8cbb3a5d2841e1a3c7dd74488f821bea7649c2ba"} err="failed to get container status \"0a3f0d54aecb3ed557f31b2d8cbb3a5d2841e1a3c7dd74488f821bea7649c2ba\": rpc error: code = NotFound desc = could not find container \"0a3f0d54aecb3ed557f31b2d8cbb3a5d2841e1a3c7dd74488f821bea7649c2ba\": container with ID starting with 0a3f0d54aecb3ed557f31b2d8cbb3a5d2841e1a3c7dd74488f821bea7649c2ba not found: ID does not exist"
Mar 18 08:58:27.949884 master-0 kubenswrapper[7620]: I0318 08:58:27.949705 7620 scope.go:117] "RemoveContainer" containerID="31e89bf6ae59ee7805717c8450d63270c0f1e3491a3c420217df22187017f458"
Mar 18 08:58:27.950254 master-0 kubenswrapper[7620]: E0318 08:58:27.950211 7620 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"31e89bf6ae59ee7805717c8450d63270c0f1e3491a3c420217df22187017f458\": container with ID starting with 31e89bf6ae59ee7805717c8450d63270c0f1e3491a3c420217df22187017f458 not found: ID does not exist" containerID="31e89bf6ae59ee7805717c8450d63270c0f1e3491a3c420217df22187017f458"
Mar 18 08:58:27.950430 master-0 kubenswrapper[7620]: I0318 08:58:27.950389 7620 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"31e89bf6ae59ee7805717c8450d63270c0f1e3491a3c420217df22187017f458"} err="failed to get container status \"31e89bf6ae59ee7805717c8450d63270c0f1e3491a3c420217df22187017f458\": rpc error: code = NotFound desc = could not find container \"31e89bf6ae59ee7805717c8450d63270c0f1e3491a3c420217df22187017f458\": container with ID starting with 31e89bf6ae59ee7805717c8450d63270c0f1e3491a3c420217df22187017f458 not found: ID does not exist"
Mar 18 08:58:27.950573 master-0 kubenswrapper[7620]: I0318 08:58:27.950548 7620 scope.go:117] "RemoveContainer" containerID="8fcf2dc21bde9860c2fe58020881a99530b56c8c984671257fbc4e8d33dd7119"
Mar 18 08:58:27.951412 master-0 kubenswrapper[7620]: E0318 08:58:27.951369 7620 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8fcf2dc21bde9860c2fe58020881a99530b56c8c984671257fbc4e8d33dd7119\": container with ID starting with 8fcf2dc21bde9860c2fe58020881a99530b56c8c984671257fbc4e8d33dd7119 not found: ID does not exist" containerID="8fcf2dc21bde9860c2fe58020881a99530b56c8c984671257fbc4e8d33dd7119"
Mar 18 08:58:27.951622 master-0 kubenswrapper[7620]: I0318 08:58:27.951408 7620 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8fcf2dc21bde9860c2fe58020881a99530b56c8c984671257fbc4e8d33dd7119"} err="failed to get container status \"8fcf2dc21bde9860c2fe58020881a99530b56c8c984671257fbc4e8d33dd7119\": rpc error: code = NotFound desc = could not find container \"8fcf2dc21bde9860c2fe58020881a99530b56c8c984671257fbc4e8d33dd7119\": container with ID starting with 8fcf2dc21bde9860c2fe58020881a99530b56c8c984671257fbc4e8d33dd7119 not found: ID does not exist"
Mar 18 08:58:27.951622 master-0 kubenswrapper[7620]: I0318 08:58:27.951433 7620 scope.go:117] "RemoveContainer" containerID="037e687423ee2fc5069c12833ee3a78d87a572548a03166d976a62f7a2c74f3d"
Mar 18 08:58:27.952121 master-0 kubenswrapper[7620]: E0318 08:58:27.952085 7620 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"037e687423ee2fc5069c12833ee3a78d87a572548a03166d976a62f7a2c74f3d\": container with ID starting with 037e687423ee2fc5069c12833ee3a78d87a572548a03166d976a62f7a2c74f3d not found: ID does not exist" containerID="037e687423ee2fc5069c12833ee3a78d87a572548a03166d976a62f7a2c74f3d"
Mar 18 08:58:27.952295 master-0 kubenswrapper[7620]: I0318 08:58:27.952124 7620 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"037e687423ee2fc5069c12833ee3a78d87a572548a03166d976a62f7a2c74f3d"} err="failed to get container status \"037e687423ee2fc5069c12833ee3a78d87a572548a03166d976a62f7a2c74f3d\": rpc error: code = NotFound desc = could not find container \"037e687423ee2fc5069c12833ee3a78d87a572548a03166d976a62f7a2c74f3d\": container with ID starting with 037e687423ee2fc5069c12833ee3a78d87a572548a03166d976a62f7a2c74f3d not found: ID does not exist"
Mar 18 08:58:27.952295 master-0 kubenswrapper[7620]: I0318 08:58:27.952153 7620 scope.go:117] "RemoveContainer" containerID="a763fdcf6c7d6b7c7725834ac9d564c543461cafe37f0ea82574ad101ee4eb85"
Mar 18 08:58:27.952724 master-0 kubenswrapper[7620]: E0318 08:58:27.952678 7620 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a763fdcf6c7d6b7c7725834ac9d564c543461cafe37f0ea82574ad101ee4eb85\": container with ID starting with a763fdcf6c7d6b7c7725834ac9d564c543461cafe37f0ea82574ad101ee4eb85 not found: ID does not exist" containerID="a763fdcf6c7d6b7c7725834ac9d564c543461cafe37f0ea82574ad101ee4eb85"
Mar 18 08:58:27.953087 master-0 kubenswrapper[7620]: I0318 08:58:27.953040 7620 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a763fdcf6c7d6b7c7725834ac9d564c543461cafe37f0ea82574ad101ee4eb85"} err="failed to get container status \"a763fdcf6c7d6b7c7725834ac9d564c543461cafe37f0ea82574ad101ee4eb85\": rpc error: code = NotFound desc = could not find container \"a763fdcf6c7d6b7c7725834ac9d564c543461cafe37f0ea82574ad101ee4eb85\": container with ID starting with a763fdcf6c7d6b7c7725834ac9d564c543461cafe37f0ea82574ad101ee4eb85 not found: ID does not exist"
Mar 18 08:58:27.953270 master-0 kubenswrapper[7620]: I0318 08:58:27.953243 7620 scope.go:117] "RemoveContainer" containerID="eaf5314d4daedb04b0810419a85a92fa1d11aaa49f4468aef088b7bf78ab09b0"
Mar 18 08:58:27.954906 master-0 kubenswrapper[7620]: I0318 08:58:27.953766 7620 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eaf5314d4daedb04b0810419a85a92fa1d11aaa49f4468aef088b7bf78ab09b0"} err="failed to get container status \"eaf5314d4daedb04b0810419a85a92fa1d11aaa49f4468aef088b7bf78ab09b0\": rpc error: code = NotFound desc = could not find container \"eaf5314d4daedb04b0810419a85a92fa1d11aaa49f4468aef088b7bf78ab09b0\": container with ID starting with eaf5314d4daedb04b0810419a85a92fa1d11aaa49f4468aef088b7bf78ab09b0 not found: ID does not exist"
Mar 18 08:58:27.954906 master-0 kubenswrapper[7620]: I0318 08:58:27.953805 7620 scope.go:117] "RemoveContainer" containerID="d6e4d0848336920c4c2367e35c0f8a2ff7a531835a43cba2e2e819f3599cb82a"
Mar 18 08:58:27.954906 master-0 kubenswrapper[7620]: I0318 08:58:27.954831 7620 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d6e4d0848336920c4c2367e35c0f8a2ff7a531835a43cba2e2e819f3599cb82a"} err="failed to get container status \"d6e4d0848336920c4c2367e35c0f8a2ff7a531835a43cba2e2e819f3599cb82a\": rpc error: code = NotFound desc = could not find container \"d6e4d0848336920c4c2367e35c0f8a2ff7a531835a43cba2e2e819f3599cb82a\": container with ID starting with d6e4d0848336920c4c2367e35c0f8a2ff7a531835a43cba2e2e819f3599cb82a not found: ID does not exist"
Mar 18 08:58:27.954906 master-0 kubenswrapper[7620]: I0318 08:58:27.954910 7620 scope.go:117] "RemoveContainer" containerID="d55f32628d36fef2091dd025587240b8ca743b0ba115f45a8672152f872db9f7"
Mar 18 08:58:27.955999 master-0 kubenswrapper[7620]: I0318 08:58:27.955936 7620 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d55f32628d36fef2091dd025587240b8ca743b0ba115f45a8672152f872db9f7"} err="failed to get container status \"d55f32628d36fef2091dd025587240b8ca743b0ba115f45a8672152f872db9f7\": rpc error: code = NotFound desc = could not find container \"d55f32628d36fef2091dd025587240b8ca743b0ba115f45a8672152f872db9f7\": container with ID starting with d55f32628d36fef2091dd025587240b8ca743b0ba115f45a8672152f872db9f7 not found: ID does not exist"
Mar 18 08:58:27.956071 master-0 kubenswrapper[7620]: I0318 08:58:27.956004 7620 scope.go:117] "RemoveContainer" containerID="0a3f0d54aecb3ed557f31b2d8cbb3a5d2841e1a3c7dd74488f821bea7649c2ba"
Mar 18 08:58:27.956693 master-0 kubenswrapper[7620]: I0318 08:58:27.956516 7620 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0a3f0d54aecb3ed557f31b2d8cbb3a5d2841e1a3c7dd74488f821bea7649c2ba"} err="failed to get container status \"0a3f0d54aecb3ed557f31b2d8cbb3a5d2841e1a3c7dd74488f821bea7649c2ba\": rpc error: code = NotFound desc = could not find container \"0a3f0d54aecb3ed557f31b2d8cbb3a5d2841e1a3c7dd74488f821bea7649c2ba\": container with ID starting with 0a3f0d54aecb3ed557f31b2d8cbb3a5d2841e1a3c7dd74488f821bea7649c2ba not found: ID does not exist"
Mar 18 08:58:27.956693 master-0 kubenswrapper[7620]: I0318 08:58:27.956566 7620 scope.go:117] "RemoveContainer" containerID="31e89bf6ae59ee7805717c8450d63270c0f1e3491a3c420217df22187017f458"
Mar 18 08:58:27.957242 master-0 kubenswrapper[7620]: I0318 08:58:27.957161 7620 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"31e89bf6ae59ee7805717c8450d63270c0f1e3491a3c420217df22187017f458"} err="failed to get container status \"31e89bf6ae59ee7805717c8450d63270c0f1e3491a3c420217df22187017f458\": rpc error: code = NotFound desc = could not find container \"31e89bf6ae59ee7805717c8450d63270c0f1e3491a3c420217df22187017f458\": container with ID starting with 31e89bf6ae59ee7805717c8450d63270c0f1e3491a3c420217df22187017f458 not found: ID does not exist"
Mar 18 08:58:27.957242 master-0 kubenswrapper[7620]: I0318 08:58:27.957213 7620 scope.go:117] "RemoveContainer" containerID="8fcf2dc21bde9860c2fe58020881a99530b56c8c984671257fbc4e8d33dd7119"
Mar 18 08:58:27.957645 master-0 kubenswrapper[7620]: I0318 08:58:27.957587 7620 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8fcf2dc21bde9860c2fe58020881a99530b56c8c984671257fbc4e8d33dd7119"} err="failed to get container status \"8fcf2dc21bde9860c2fe58020881a99530b56c8c984671257fbc4e8d33dd7119\": rpc error: code = NotFound desc = could not find container \"8fcf2dc21bde9860c2fe58020881a99530b56c8c984671257fbc4e8d33dd7119\": container with ID starting with 8fcf2dc21bde9860c2fe58020881a99530b56c8c984671257fbc4e8d33dd7119 not found: ID does not exist"
Mar 18 08:58:27.957754 master-0 kubenswrapper[7620]: I0318 08:58:27.957645 7620 scope.go:117] "RemoveContainer" containerID="037e687423ee2fc5069c12833ee3a78d87a572548a03166d976a62f7a2c74f3d"
Mar 18 08:58:27.958142 master-0 kubenswrapper[7620]: I0318 08:58:27.958075 7620 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"037e687423ee2fc5069c12833ee3a78d87a572548a03166d976a62f7a2c74f3d"} err="failed to get container status \"037e687423ee2fc5069c12833ee3a78d87a572548a03166d976a62f7a2c74f3d\": rpc error: code = NotFound desc = could not find container \"037e687423ee2fc5069c12833ee3a78d87a572548a03166d976a62f7a2c74f3d\": container with ID starting with 037e687423ee2fc5069c12833ee3a78d87a572548a03166d976a62f7a2c74f3d not found: ID does not exist"
Mar 18 08:58:27.958142 master-0 kubenswrapper[7620]: I0318 08:58:27.958121 7620 scope.go:117] "RemoveContainer" containerID="a763fdcf6c7d6b7c7725834ac9d564c543461cafe37f0ea82574ad101ee4eb85"
Mar 18 08:58:27.958719 master-0 kubenswrapper[7620]: I0318 08:58:27.958674 7620 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a763fdcf6c7d6b7c7725834ac9d564c543461cafe37f0ea82574ad101ee4eb85"} err="failed to get container status \"a763fdcf6c7d6b7c7725834ac9d564c543461cafe37f0ea82574ad101ee4eb85\": rpc error: code = NotFound desc = could not find container \"a763fdcf6c7d6b7c7725834ac9d564c543461cafe37f0ea82574ad101ee4eb85\": container with ID starting with a763fdcf6c7d6b7c7725834ac9d564c543461cafe37f0ea82574ad101ee4eb85 not found: ID does not exist"
Mar 18 08:58:28.235720 master-0 kubenswrapper[7620]: I0318 08:58:28.235619 7620 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="24b4ed170d527099878cb5fdd508a2fb" path="/var/lib/kubelet/pods/24b4ed170d527099878cb5fdd508a2fb/volumes"
Mar 18 08:58:28.483016 master-0 kubenswrapper[7620]: I0318 08:58:28.482889 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 08:58:28.483016 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld
Mar 18 08:58:28.483016 master-0 kubenswrapper[7620]: [+]process-running ok
Mar 18 08:58:28.483016 master-0 kubenswrapper[7620]: healthz check failed
Mar 18 08:58:28.483016 master-0 kubenswrapper[7620]: I0318 08:58:28.482990 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 08:58:29.484041 master-0 kubenswrapper[7620]: I0318 08:58:29.483924 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 08:58:29.484041 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld
Mar 18 08:58:29.484041 master-0 kubenswrapper[7620]: [+]process-running ok
Mar 18 08:58:29.484041 master-0 kubenswrapper[7620]: healthz check failed
Mar 18 08:58:29.484041 master-0 kubenswrapper[7620]: I0318 08:58:29.484023 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 08:58:30.482692 master-0 kubenswrapper[7620]: I0318 08:58:30.482547 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 08:58:30.482692 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld
Mar 18 08:58:30.482692 master-0 kubenswrapper[7620]: [+]process-running ok
Mar 18 08:58:30.482692 master-0 kubenswrapper[7620]: healthz check failed
Mar 18 08:58:30.482692 master-0 kubenswrapper[7620]: I0318 08:58:30.482666 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 08:58:30.781848 master-0 kubenswrapper[7620]: E0318 08:58:30.781675 7620 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{etcd-master-0.189de3c816df6007 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-master-0,UID:24b4ed170d527099878cb5fdd508a2fb,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Killing,Message:Stopping container etcd-rev,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 08:57:56.770545671 +0000 UTC m=+540.765327463,LastTimestamp:2026-03-18 08:57:56.770545671 +0000 UTC m=+540.765327463,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 18 08:58:31.481939 master-0 kubenswrapper[7620]: I0318 08:58:31.481801 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 08:58:31.481939 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld
Mar 18 08:58:31.481939 master-0 kubenswrapper[7620]: [+]process-running ok
Mar 18 08:58:31.481939 master-0 kubenswrapper[7620]: healthz check failed
Mar 18 08:58:31.482468 master-0 kubenswrapper[7620]: I0318 08:58:31.482054 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 08:58:32.483476 master-0 kubenswrapper[7620]: I0318 08:58:32.483159 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 08:58:32.483476 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld
Mar 18 08:58:32.483476 master-0 kubenswrapper[7620]: [+]process-running ok
Mar 18 08:58:32.483476 master-0 kubenswrapper[7620]: healthz check failed
Mar 18 08:58:32.484556 master-0 kubenswrapper[7620]: I0318 08:58:32.483474 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 08:58:33.483072 master-0 kubenswrapper[7620]: I0318 08:58:33.482931 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe
failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:58:33.483072 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld Mar 18 08:58:33.483072 master-0 kubenswrapper[7620]: [+]process-running ok Mar 18 08:58:33.483072 master-0 kubenswrapper[7620]: healthz check failed Mar 18 08:58:33.483072 master-0 kubenswrapper[7620]: I0318 08:58:33.483034 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:58:34.328421 master-0 kubenswrapper[7620]: E0318 08:58:34.328288 7620 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 18 08:58:34.482614 master-0 kubenswrapper[7620]: I0318 08:58:34.482492 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:58:34.482614 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld Mar 18 08:58:34.482614 master-0 kubenswrapper[7620]: [+]process-running ok Mar 18 08:58:34.482614 master-0 kubenswrapper[7620]: healthz check failed Mar 18 08:58:34.482614 master-0 kubenswrapper[7620]: I0318 08:58:34.482573 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:58:35.483165 master-0 kubenswrapper[7620]: I0318 08:58:35.482945 7620 patch_prober.go:28] interesting 
pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:58:35.483165 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld Mar 18 08:58:35.483165 master-0 kubenswrapper[7620]: [+]process-running ok Mar 18 08:58:35.483165 master-0 kubenswrapper[7620]: healthz check failed Mar 18 08:58:35.483165 master-0 kubenswrapper[7620]: I0318 08:58:35.483079 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:58:35.814850 master-0 kubenswrapper[7620]: I0318 08:58:35.814761 7620 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-66b84d69b-7h94d_94e9e4fc-bfaa-4ba5-ad6d-fe76b91932e9/ingress-operator/3.log" Mar 18 08:58:35.815667 master-0 kubenswrapper[7620]: I0318 08:58:35.815619 7620 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-66b84d69b-7h94d_94e9e4fc-bfaa-4ba5-ad6d-fe76b91932e9/ingress-operator/2.log" Mar 18 08:58:35.816450 master-0 kubenswrapper[7620]: I0318 08:58:35.816394 7620 generic.go:334] "Generic (PLEG): container finished" podID="94e9e4fc-bfaa-4ba5-ad6d-fe76b91932e9" containerID="1e621180058478223aaee3c2dc23f5260e37988416b72d674dfdaa92a6a8ef11" exitCode=1 Mar 18 08:58:35.816555 master-0 kubenswrapper[7620]: I0318 08:58:35.816451 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-66b84d69b-7h94d" event={"ID":"94e9e4fc-bfaa-4ba5-ad6d-fe76b91932e9","Type":"ContainerDied","Data":"1e621180058478223aaee3c2dc23f5260e37988416b72d674dfdaa92a6a8ef11"} Mar 18 08:58:35.816555 master-0 kubenswrapper[7620]: I0318 08:58:35.816515 7620 
scope.go:117] "RemoveContainer" containerID="fad64d39172d17151c921b86e24888209413b262345fa2cee0651c733f8df0a1" Mar 18 08:58:35.817591 master-0 kubenswrapper[7620]: I0318 08:58:35.817543 7620 scope.go:117] "RemoveContainer" containerID="1e621180058478223aaee3c2dc23f5260e37988416b72d674dfdaa92a6a8ef11" Mar 18 08:58:35.818192 master-0 kubenswrapper[7620]: E0318 08:58:35.818134 7620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ingress-operator pod=ingress-operator-66b84d69b-7h94d_openshift-ingress-operator(94e9e4fc-bfaa-4ba5-ad6d-fe76b91932e9)\"" pod="openshift-ingress-operator/ingress-operator-66b84d69b-7h94d" podUID="94e9e4fc-bfaa-4ba5-ad6d-fe76b91932e9" Mar 18 08:58:36.251106 master-0 kubenswrapper[7620]: E0318 08:58:36.251025 7620 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 08:58:36.252024 master-0 kubenswrapper[7620]: I0318 08:58:36.251988 7620 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 08:58:36.284663 master-0 kubenswrapper[7620]: W0318 08:58:36.284585 7620 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc229b92d307e46237f6273edcc98d387.slice/crio-86fa4125270c3c49a4a19e870a994342691ddd1c81df5fef0113e7b2940e9561 WatchSource:0}: Error finding container 86fa4125270c3c49a4a19e870a994342691ddd1c81df5fef0113e7b2940e9561: Status 404 returned error can't find the container with id 86fa4125270c3c49a4a19e870a994342691ddd1c81df5fef0113e7b2940e9561 Mar 18 08:58:36.482870 master-0 kubenswrapper[7620]: I0318 08:58:36.482799 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:58:36.482870 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld Mar 18 08:58:36.482870 master-0 kubenswrapper[7620]: [+]process-running ok Mar 18 08:58:36.482870 master-0 kubenswrapper[7620]: healthz check failed Mar 18 08:58:36.483206 master-0 kubenswrapper[7620]: I0318 08:58:36.482938 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:58:36.828258 master-0 kubenswrapper[7620]: I0318 08:58:36.828199 7620 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-66b84d69b-7h94d_94e9e4fc-bfaa-4ba5-ad6d-fe76b91932e9/ingress-operator/3.log" Mar 18 08:58:36.829665 master-0 kubenswrapper[7620]: I0318 08:58:36.829608 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" 
event={"ID":"c229b92d307e46237f6273edcc98d387","Type":"ContainerStarted","Data":"25aa8e7a5fe1cd4cb308d45095cfc8ec891476603ff1037e70498c15fb355808"} Mar 18 08:58:36.829665 master-0 kubenswrapper[7620]: I0318 08:58:36.829656 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"c229b92d307e46237f6273edcc98d387","Type":"ContainerStarted","Data":"86fa4125270c3c49a4a19e870a994342691ddd1c81df5fef0113e7b2940e9561"} Mar 18 08:58:37.224657 master-0 kubenswrapper[7620]: I0318 08:58:37.224568 7620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-master-0" Mar 18 08:58:37.253741 master-0 kubenswrapper[7620]: I0318 08:58:37.253683 7620 kubelet.go:1909] "Trying to delete pod" pod="openshift-etcd/etcd-master-0" podUID="ffe89c95-d4e9-4b8d-ae76-37d7bef448df" Mar 18 08:58:37.253949 master-0 kubenswrapper[7620]: I0318 08:58:37.253751 7620 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0" podUID="ffe89c95-d4e9-4b8d-ae76-37d7bef448df" Mar 18 08:58:37.481698 master-0 kubenswrapper[7620]: I0318 08:58:37.481647 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:58:37.481698 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld Mar 18 08:58:37.481698 master-0 kubenswrapper[7620]: [+]process-running ok Mar 18 08:58:37.481698 master-0 kubenswrapper[7620]: healthz check failed Mar 18 08:58:37.481972 master-0 kubenswrapper[7620]: I0318 08:58:37.481739 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 
08:58:37.844926 master-0 kubenswrapper[7620]: I0318 08:58:37.844838 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"c229b92d307e46237f6273edcc98d387","Type":"ContainerStarted","Data":"9e36a51bcf12ae7db2a94f2fd56063ee6085dd854239e6802000e5e8cda9a85b"} Mar 18 08:58:37.844926 master-0 kubenswrapper[7620]: I0318 08:58:37.844916 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"c229b92d307e46237f6273edcc98d387","Type":"ContainerStarted","Data":"5c751dbb03b0e78f3ed7a9a2441228c32321443d29de48b1bf17ef0e83072bd3"} Mar 18 08:58:37.844926 master-0 kubenswrapper[7620]: I0318 08:58:37.844932 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"c229b92d307e46237f6273edcc98d387","Type":"ContainerStarted","Data":"c7747c2abe864b22fe548817bbd5d5507f3440eb5ca9988572c184f2a9991de4"} Mar 18 08:58:37.845576 master-0 kubenswrapper[7620]: I0318 08:58:37.845232 7620 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="c0c3df70-7d0a-4a46-a625-20553ab284d0" Mar 18 08:58:37.845576 master-0 kubenswrapper[7620]: I0318 08:58:37.845252 7620 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="c0c3df70-7d0a-4a46-a625-20553ab284d0" Mar 18 08:58:38.482126 master-0 kubenswrapper[7620]: I0318 08:58:38.482073 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:58:38.482126 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld Mar 18 08:58:38.482126 master-0 kubenswrapper[7620]: 
[+]process-running ok Mar 18 08:58:38.482126 master-0 kubenswrapper[7620]: healthz check failed Mar 18 08:58:38.482126 master-0 kubenswrapper[7620]: I0318 08:58:38.482145 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:58:39.481990 master-0 kubenswrapper[7620]: I0318 08:58:39.481806 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:58:39.481990 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld Mar 18 08:58:39.481990 master-0 kubenswrapper[7620]: [+]process-running ok Mar 18 08:58:39.481990 master-0 kubenswrapper[7620]: healthz check failed Mar 18 08:58:39.483246 master-0 kubenswrapper[7620]: I0318 08:58:39.481999 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:58:40.482467 master-0 kubenswrapper[7620]: I0318 08:58:40.482380 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:58:40.482467 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld Mar 18 08:58:40.482467 master-0 kubenswrapper[7620]: [+]process-running ok Mar 18 08:58:40.482467 master-0 kubenswrapper[7620]: healthz check failed Mar 18 08:58:40.483218 master-0 kubenswrapper[7620]: I0318 08:58:40.482470 7620 prober.go:107] "Probe failed" 
probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:58:41.482507 master-0 kubenswrapper[7620]: I0318 08:58:41.482402 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:58:41.482507 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld Mar 18 08:58:41.482507 master-0 kubenswrapper[7620]: [+]process-running ok Mar 18 08:58:41.482507 master-0 kubenswrapper[7620]: healthz check failed Mar 18 08:58:41.483059 master-0 kubenswrapper[7620]: I0318 08:58:41.482503 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:58:42.482149 master-0 kubenswrapper[7620]: I0318 08:58:42.482008 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:58:42.482149 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld Mar 18 08:58:42.482149 master-0 kubenswrapper[7620]: [+]process-running ok Mar 18 08:58:42.482149 master-0 kubenswrapper[7620]: healthz check failed Mar 18 08:58:42.483272 master-0 kubenswrapper[7620]: I0318 08:58:42.482169 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 
18 08:58:43.481816 master-0 kubenswrapper[7620]: I0318 08:58:43.481734 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:58:43.481816 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld Mar 18 08:58:43.481816 master-0 kubenswrapper[7620]: [+]process-running ok Mar 18 08:58:43.481816 master-0 kubenswrapper[7620]: healthz check failed Mar 18 08:58:43.482252 master-0 kubenswrapper[7620]: I0318 08:58:43.481892 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:58:43.909992 master-0 kubenswrapper[7620]: I0318 08:58:43.909920 7620 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-node-identity_network-node-identity-n5vqx_16d633c5-e0aa-4fb6-83e0-a2e976334406/approver/1.log" Mar 18 08:58:43.911288 master-0 kubenswrapper[7620]: I0318 08:58:43.911146 7620 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-node-identity_network-node-identity-n5vqx_16d633c5-e0aa-4fb6-83e0-a2e976334406/approver/0.log" Mar 18 08:58:43.912228 master-0 kubenswrapper[7620]: I0318 08:58:43.911673 7620 generic.go:334] "Generic (PLEG): container finished" podID="16d633c5-e0aa-4fb6-83e0-a2e976334406" containerID="fc1e7d5ba53f64b05a03f60a1cf7fc1f9339f4be3d65c717cb0541eb9f2e16d3" exitCode=1 Mar 18 08:58:43.912228 master-0 kubenswrapper[7620]: I0318 08:58:43.911723 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-n5vqx" 
event={"ID":"16d633c5-e0aa-4fb6-83e0-a2e976334406","Type":"ContainerDied","Data":"fc1e7d5ba53f64b05a03f60a1cf7fc1f9339f4be3d65c717cb0541eb9f2e16d3"} Mar 18 08:58:43.912228 master-0 kubenswrapper[7620]: I0318 08:58:43.911774 7620 scope.go:117] "RemoveContainer" containerID="9d4723f8591cc64ff0653aec9e9efb152a03ef27364e5787d1d3d8ff7d6020e4" Mar 18 08:58:43.912986 master-0 kubenswrapper[7620]: I0318 08:58:43.912914 7620 scope.go:117] "RemoveContainer" containerID="fc1e7d5ba53f64b05a03f60a1cf7fc1f9339f4be3d65c717cb0541eb9f2e16d3" Mar 18 08:58:43.913344 master-0 kubenswrapper[7620]: E0318 08:58:43.913288 7620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"approver\" with CrashLoopBackOff: \"back-off 10s restarting failed container=approver pod=network-node-identity-n5vqx_openshift-network-node-identity(16d633c5-e0aa-4fb6-83e0-a2e976334406)\"" pod="openshift-network-node-identity/network-node-identity-n5vqx" podUID="16d633c5-e0aa-4fb6-83e0-a2e976334406" Mar 18 08:58:44.329612 master-0 kubenswrapper[7620]: E0318 08:58:44.329443 7620 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 18 08:58:44.483569 master-0 kubenswrapper[7620]: I0318 08:58:44.483477 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:58:44.483569 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld Mar 18 08:58:44.483569 master-0 kubenswrapper[7620]: [+]process-running ok Mar 18 08:58:44.483569 master-0 kubenswrapper[7620]: healthz check failed Mar 18 08:58:44.484123 master-0 kubenswrapper[7620]: I0318 08:58:44.483605 7620 
prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:58:44.925212 master-0 kubenswrapper[7620]: I0318 08:58:44.925117 7620 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-node-identity_network-node-identity-n5vqx_16d633c5-e0aa-4fb6-83e0-a2e976334406/approver/1.log" Mar 18 08:58:45.482283 master-0 kubenswrapper[7620]: I0318 08:58:45.482187 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:58:45.482283 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld Mar 18 08:58:45.482283 master-0 kubenswrapper[7620]: [+]process-running ok Mar 18 08:58:45.482283 master-0 kubenswrapper[7620]: healthz check failed Mar 18 08:58:45.483040 master-0 kubenswrapper[7620]: I0318 08:58:45.482305 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:58:46.225241 master-0 kubenswrapper[7620]: I0318 08:58:46.225131 7620 scope.go:117] "RemoveContainer" containerID="1e621180058478223aaee3c2dc23f5260e37988416b72d674dfdaa92a6a8ef11" Mar 18 08:58:46.226273 master-0 kubenswrapper[7620]: E0318 08:58:46.225561 7620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ingress-operator pod=ingress-operator-66b84d69b-7h94d_openshift-ingress-operator(94e9e4fc-bfaa-4ba5-ad6d-fe76b91932e9)\"" 
pod="openshift-ingress-operator/ingress-operator-66b84d69b-7h94d" podUID="94e9e4fc-bfaa-4ba5-ad6d-fe76b91932e9" Mar 18 08:58:46.252415 master-0 kubenswrapper[7620]: I0318 08:58:46.252323 7620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 08:58:46.252415 master-0 kubenswrapper[7620]: I0318 08:58:46.252414 7620 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 08:58:46.252708 master-0 kubenswrapper[7620]: I0318 08:58:46.252575 7620 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 08:58:46.252708 master-0 kubenswrapper[7620]: I0318 08:58:46.252618 7620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 08:58:46.261168 master-0 kubenswrapper[7620]: I0318 08:58:46.261084 7620 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 08:58:46.483173 master-0 kubenswrapper[7620]: I0318 08:58:46.483011 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:58:46.483173 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld Mar 18 08:58:46.483173 master-0 kubenswrapper[7620]: [+]process-running ok Mar 18 08:58:46.483173 master-0 kubenswrapper[7620]: healthz check failed Mar 18 08:58:46.483173 master-0 kubenswrapper[7620]: I0318 08:58:46.483103 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" 
podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:58:47.482515 master-0 kubenswrapper[7620]: I0318 08:58:47.482390 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:58:47.482515 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld Mar 18 08:58:47.482515 master-0 kubenswrapper[7620]: [+]process-running ok Mar 18 08:58:47.482515 master-0 kubenswrapper[7620]: healthz check failed Mar 18 08:58:47.483508 master-0 kubenswrapper[7620]: I0318 08:58:47.482530 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:58:48.482015 master-0 kubenswrapper[7620]: I0318 08:58:48.481908 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:58:48.482015 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld Mar 18 08:58:48.482015 master-0 kubenswrapper[7620]: [+]process-running ok Mar 18 08:58:48.482015 master-0 kubenswrapper[7620]: healthz check failed Mar 18 08:58:48.482015 master-0 kubenswrapper[7620]: I0318 08:58:48.481992 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:58:49.253070 master-0 kubenswrapper[7620]: I0318 08:58:49.252813 7620 
patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 18 08:58:49.253070 master-0 kubenswrapper[7620]: I0318 08:58:49.253035 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="c229b92d307e46237f6273edcc98d387" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 18 08:58:49.483706 master-0 kubenswrapper[7620]: I0318 08:58:49.483536 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:58:49.483706 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld Mar 18 08:58:49.483706 master-0 kubenswrapper[7620]: [+]process-running ok Mar 18 08:58:49.483706 master-0 kubenswrapper[7620]: healthz check failed Mar 18 08:58:49.484163 master-0 kubenswrapper[7620]: I0318 08:58:49.483734 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:58:50.481760 master-0 kubenswrapper[7620]: I0318 08:58:50.481617 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" 
start-of-body=[-]backend-http failed: reason withheld
Mar 18 08:58:50.481760 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld
Mar 18 08:58:50.481760 master-0 kubenswrapper[7620]: [+]process-running ok
Mar 18 08:58:50.481760 master-0 kubenswrapper[7620]: healthz check failed
Mar 18 08:58:50.482806 master-0 kubenswrapper[7620]: I0318 08:58:50.481765 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 08:58:51.483931 master-0 kubenswrapper[7620]: I0318 08:58:51.483789 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 08:58:51.483931 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld
Mar 18 08:58:51.483931 master-0 kubenswrapper[7620]: [+]process-running ok
Mar 18 08:58:51.483931 master-0 kubenswrapper[7620]: healthz check failed
Mar 18 08:58:51.485435 master-0 kubenswrapper[7620]: I0318 08:58:51.483953 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 08:58:52.482262 master-0 kubenswrapper[7620]: I0318 08:58:52.482186 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 08:58:52.482262 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld
Mar 18 08:58:52.482262 master-0 kubenswrapper[7620]: [+]process-running ok
Mar 18 08:58:52.482262 master-0 kubenswrapper[7620]: healthz check failed
Mar 18 08:58:52.482598 master-0 kubenswrapper[7620]: I0318 08:58:52.482302 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 08:58:53.481829 master-0 kubenswrapper[7620]: I0318 08:58:53.481744 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 08:58:53.481829 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld
Mar 18 08:58:53.481829 master-0 kubenswrapper[7620]: [+]process-running ok
Mar 18 08:58:53.481829 master-0 kubenswrapper[7620]: healthz check failed
Mar 18 08:58:53.482617 master-0 kubenswrapper[7620]: I0318 08:58:53.481846 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 08:58:54.329980 master-0 kubenswrapper[7620]: E0318 08:58:54.329925 7620 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Mar 18 08:58:54.330257 master-0 kubenswrapper[7620]: I0318 08:58:54.330241 7620 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease"
Mar 18 08:58:54.482339 master-0 kubenswrapper[7620]: I0318 08:58:54.482155 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 08:58:54.482339 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld
Mar 18 08:58:54.482339 master-0 kubenswrapper[7620]: [+]process-running ok
Mar 18 08:58:54.482339 master-0 kubenswrapper[7620]: healthz check failed
Mar 18 08:58:54.482339 master-0 kubenswrapper[7620]: I0318 08:58:54.482240 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 08:58:55.483020 master-0 kubenswrapper[7620]: I0318 08:58:55.482931 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 08:58:55.483020 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld
Mar 18 08:58:55.483020 master-0 kubenswrapper[7620]: [+]process-running ok
Mar 18 08:58:55.483020 master-0 kubenswrapper[7620]: healthz check failed
Mar 18 08:58:55.484120 master-0 kubenswrapper[7620]: I0318 08:58:55.483058 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 08:58:56.224984 master-0 kubenswrapper[7620]: I0318 08:58:56.224810 7620 scope.go:117] "RemoveContainer" containerID="fc1e7d5ba53f64b05a03f60a1cf7fc1f9339f4be3d65c717cb0541eb9f2e16d3"
Mar 18 08:58:56.259742 master-0 kubenswrapper[7620]: I0318 08:58:56.259651 7620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 18 08:58:56.482262 master-0 kubenswrapper[7620]: I0318 08:58:56.482071 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 08:58:56.482262 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld
Mar 18 08:58:56.482262 master-0 kubenswrapper[7620]: [+]process-running ok
Mar 18 08:58:56.482262 master-0 kubenswrapper[7620]: healthz check failed
Mar 18 08:58:56.482262 master-0 kubenswrapper[7620]: I0318 08:58:56.482169 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 08:58:57.027901 master-0 kubenswrapper[7620]: I0318 08:58:57.027784 7620 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-node-identity_network-node-identity-n5vqx_16d633c5-e0aa-4fb6-83e0-a2e976334406/approver/1.log"
Mar 18 08:58:57.029478 master-0 kubenswrapper[7620]: I0318 08:58:57.028259 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-n5vqx" event={"ID":"16d633c5-e0aa-4fb6-83e0-a2e976334406","Type":"ContainerStarted","Data":"318e17c1acc6fadcfc70a60bf658708bd201f7c1fe6e00d7f84e5149f124b38b"}
Mar 18 08:58:57.353096 master-0 kubenswrapper[7620]: I0318 08:58:57.352921 7620 scope.go:117] "RemoveContainer" containerID="bbabe017e89f6ea54b729f4482f01a624a5bb89f74c49b1b8e5588070c02358c"
Mar 18 08:58:57.374600 master-0 kubenswrapper[7620]: I0318 08:58:57.374508 7620 scope.go:117] "RemoveContainer" containerID="69596b626529595f36c9ff264c03689b43e4c44d0adc36ba6d7b5f545138ce9f"
Mar 18 08:58:57.395633 master-0 kubenswrapper[7620]: I0318 08:58:57.395582 7620 scope.go:117] "RemoveContainer" containerID="fd10dceb0449c26d02e61b6f927511258c3ac41149782386de78284480c8fc4d"
Mar 18 08:58:57.414748 master-0 kubenswrapper[7620]: I0318 08:58:57.414686 7620 scope.go:117] "RemoveContainer" containerID="c06f0e093df7004eb449f4d313d5c8483347978fe6cb23024b5393882adf8f4a"
Mar 18 08:58:57.482233 master-0 kubenswrapper[7620]: I0318 08:58:57.482148 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 08:58:57.482233 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld
Mar 18 08:58:57.482233 master-0 kubenswrapper[7620]: [+]process-running ok
Mar 18 08:58:57.482233 master-0 kubenswrapper[7620]: healthz check failed
Mar 18 08:58:57.482499 master-0 kubenswrapper[7620]: I0318 08:58:57.482260 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 08:58:58.224921 master-0 kubenswrapper[7620]: I0318 08:58:58.224771 7620 scope.go:117] "RemoveContainer" containerID="1e621180058478223aaee3c2dc23f5260e37988416b72d674dfdaa92a6a8ef11"
Mar 18 08:58:58.226093 master-0 kubenswrapper[7620]: E0318 08:58:58.225078 7620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ingress-operator pod=ingress-operator-66b84d69b-7h94d_openshift-ingress-operator(94e9e4fc-bfaa-4ba5-ad6d-fe76b91932e9)\"" pod="openshift-ingress-operator/ingress-operator-66b84d69b-7h94d" podUID="94e9e4fc-bfaa-4ba5-ad6d-fe76b91932e9"
Mar 18 08:58:58.481804 master-0 kubenswrapper[7620]: I0318 08:58:58.481598 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 08:58:58.481804 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld
Mar 18 08:58:58.481804 master-0 kubenswrapper[7620]: [+]process-running ok
Mar 18 08:58:58.481804 master-0 kubenswrapper[7620]: healthz check failed
Mar 18 08:58:58.481804 master-0 kubenswrapper[7620]: I0318 08:58:58.481690 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 08:58:59.252466 master-0 kubenswrapper[7620]: I0318 08:58:59.252380 7620 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://localhost:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 18 08:58:59.252466 master-0 kubenswrapper[7620]: I0318 08:58:59.252462 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="c229b92d307e46237f6273edcc98d387" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 18 08:58:59.482558 master-0 kubenswrapper[7620]: I0318 08:58:59.482441 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 08:58:59.482558 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld
Mar 18 08:58:59.482558 master-0 kubenswrapper[7620]: [+]process-running ok
Mar 18 08:58:59.482558 master-0 kubenswrapper[7620]: healthz check failed
Mar 18 08:58:59.483110 master-0 kubenswrapper[7620]: I0318 08:58:59.482576 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 08:59:00.482158 master-0 kubenswrapper[7620]: I0318 08:59:00.482070 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 08:59:00.482158 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld
Mar 18 08:59:00.482158 master-0 kubenswrapper[7620]: [+]process-running ok
Mar 18 08:59:00.482158 master-0 kubenswrapper[7620]: healthz check failed
Mar 18 08:59:00.482968 master-0 kubenswrapper[7620]: I0318 08:59:00.482210 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 08:59:01.769967 master-0 kubenswrapper[7620]: I0318 08:59:01.769891 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 08:59:01.769967 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld
Mar 18 08:59:01.769967 master-0 kubenswrapper[7620]: [+]process-running ok
Mar 18 08:59:01.769967 master-0 kubenswrapper[7620]: healthz check failed
Mar 18 08:59:01.771130 master-0 kubenswrapper[7620]: I0318 08:59:01.769976 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 08:59:02.482689 master-0 kubenswrapper[7620]: I0318 08:59:02.482545 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 08:59:02.482689 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld
Mar 18 08:59:02.482689 master-0 kubenswrapper[7620]: [+]process-running ok
Mar 18 08:59:02.482689 master-0 kubenswrapper[7620]: healthz check failed
Mar 18 08:59:02.482689 master-0 kubenswrapper[7620]: I0318 08:59:02.482675 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 08:59:03.482768 master-0 kubenswrapper[7620]: I0318 08:59:03.482687 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 08:59:03.482768 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld
Mar 18 08:59:03.482768 master-0 kubenswrapper[7620]: [+]process-running ok
Mar 18 08:59:03.482768 master-0 kubenswrapper[7620]: healthz check failed
Mar 18 08:59:03.483842 master-0 kubenswrapper[7620]: I0318 08:59:03.482784 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 08:59:04.331039 master-0 kubenswrapper[7620]: E0318 08:59:04.330938 7620 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="200ms"
Mar 18 08:59:04.482940 master-0 kubenswrapper[7620]: I0318 08:59:04.482822 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 08:59:04.482940 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld
Mar 18 08:59:04.482940 master-0 kubenswrapper[7620]: [+]process-running ok
Mar 18 08:59:04.482940 master-0 kubenswrapper[7620]: healthz check failed
Mar 18 08:59:04.483846 master-0 kubenswrapper[7620]: I0318 08:59:04.482962 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 08:59:04.785605 master-0 kubenswrapper[7620]: E0318 08:59:04.785327 7620 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{bootstrap-kube-scheduler-master-0.189de3cc023718d7 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-scheduler-master-0,UID:c83737980b9ee109184b1d78e942cf36,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:BackOff,Message:Back-off restarting failed container kube-scheduler in pod bootstrap-kube-scheduler-master-0_kube-system(c83737980b9ee109184b1d78e942cf36),Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 08:58:13.603842263 +0000 UTC m=+557.598624045,LastTimestamp:2026-03-18 08:58:13.603842263 +0000 UTC m=+557.598624045,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 18 08:59:05.483660 master-0 kubenswrapper[7620]: I0318 08:59:05.483545 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 08:59:05.483660 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld
Mar 18 08:59:05.483660 master-0 kubenswrapper[7620]: [+]process-running ok
Mar 18 08:59:05.483660 master-0 kubenswrapper[7620]: healthz check failed
Mar 18 08:59:05.485247 master-0 kubenswrapper[7620]: I0318 08:59:05.483695 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 08:59:06.233729 master-0 kubenswrapper[7620]: I0318 08:59:06.233592 7620 status_manager.go:851] "Failed to get status for pod" podUID="c229b92d307e46237f6273edcc98d387" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" err="the server was unable to return a response in the time allotted, but may still be processing the request (get pods kube-controller-manager-master-0)"
Mar 18 08:59:06.482500 master-0 kubenswrapper[7620]: I0318 08:59:06.482411 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 08:59:06.482500 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld
Mar 18 08:59:06.482500 master-0 kubenswrapper[7620]: [+]process-running ok
Mar 18 08:59:06.482500 master-0 kubenswrapper[7620]: healthz check failed
Mar 18 08:59:06.482500 master-0 kubenswrapper[7620]: I0318 08:59:06.482500 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 08:59:07.482107 master-0 kubenswrapper[7620]: I0318 08:59:07.481996 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 08:59:07.482107 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld
Mar 18 08:59:07.482107 master-0 kubenswrapper[7620]: [+]process-running ok
Mar 18 08:59:07.482107 master-0 kubenswrapper[7620]: healthz check failed
Mar 18 08:59:07.483286 master-0 kubenswrapper[7620]: I0318 08:59:07.482119 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 08:59:08.276980 master-0 kubenswrapper[7620]: I0318 08:59:08.276895 7620 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://localhost:10357/healthz\": read tcp 127.0.0.1:38496->127.0.0.1:10357: read: connection reset by peer" start-of-body=
Mar 18 08:59:08.277404 master-0 kubenswrapper[7620]: I0318 08:59:08.277002 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="c229b92d307e46237f6273edcc98d387" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": read tcp 127.0.0.1:38496->127.0.0.1:10357: read: connection reset by peer"
Mar 18 08:59:08.277404 master-0 kubenswrapper[7620]: I0318 08:59:08.277086 7620 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 18 08:59:08.482375 master-0 kubenswrapper[7620]: I0318 08:59:08.482273 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 08:59:08.482375 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld
Mar 18 08:59:08.482375 master-0 kubenswrapper[7620]: [+]process-running ok
Mar 18 08:59:08.482375 master-0 kubenswrapper[7620]: healthz check failed
Mar 18 08:59:08.483540 master-0 kubenswrapper[7620]: I0318 08:59:08.482403 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 08:59:09.138378 master-0 kubenswrapper[7620]: I0318 08:59:09.138136 7620 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_c229b92d307e46237f6273edcc98d387/cluster-policy-controller/0.log"
Mar 18 08:59:09.138807 master-0 kubenswrapper[7620]: I0318 08:59:09.138627 7620 generic.go:334] "Generic (PLEG): container finished" podID="c229b92d307e46237f6273edcc98d387" containerID="c7747c2abe864b22fe548817bbd5d5507f3440eb5ca9988572c184f2a9991de4" exitCode=255
Mar 18 08:59:09.138807 master-0 kubenswrapper[7620]: I0318 08:59:09.138690 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"c229b92d307e46237f6273edcc98d387","Type":"ContainerDied","Data":"c7747c2abe864b22fe548817bbd5d5507f3440eb5ca9988572c184f2a9991de4"}
Mar 18 08:59:09.483566 master-0 kubenswrapper[7620]: I0318 08:59:09.483477 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 08:59:09.483566 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld
Mar 18 08:59:09.483566 master-0 kubenswrapper[7620]: [+]process-running ok
Mar 18 08:59:09.483566 master-0 kubenswrapper[7620]: healthz check failed
Mar 18 08:59:09.484542 master-0 kubenswrapper[7620]: I0318 08:59:09.483585 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 08:59:10.224547 master-0 kubenswrapper[7620]: I0318 08:59:10.224414 7620 scope.go:117] "RemoveContainer" containerID="1e621180058478223aaee3c2dc23f5260e37988416b72d674dfdaa92a6a8ef11"
Mar 18 08:59:10.225072 master-0 kubenswrapper[7620]: E0318 08:59:10.224924 7620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ingress-operator pod=ingress-operator-66b84d69b-7h94d_openshift-ingress-operator(94e9e4fc-bfaa-4ba5-ad6d-fe76b91932e9)\"" pod="openshift-ingress-operator/ingress-operator-66b84d69b-7h94d" podUID="94e9e4fc-bfaa-4ba5-ad6d-fe76b91932e9"
Mar 18 08:59:10.482572 master-0 kubenswrapper[7620]: I0318 08:59:10.482394 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 08:59:10.482572 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld
Mar 18 08:59:10.482572 master-0 kubenswrapper[7620]: [+]process-running ok
Mar 18 08:59:10.482572 master-0 kubenswrapper[7620]: healthz check failed
Mar 18 08:59:10.482572 master-0 kubenswrapper[7620]: I0318 08:59:10.482507 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 08:59:11.257501 master-0 kubenswrapper[7620]: E0318 08:59:11.257352 7620 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="openshift-etcd/etcd-master-0"
Mar 18 08:59:11.258302 master-0 kubenswrapper[7620]: I0318 08:59:11.258277 7620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-master-0"
Mar 18 08:59:11.281168 master-0 kubenswrapper[7620]: W0318 08:59:11.280995 7620 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod094204df314fe45bd5af12ca1b4622bb.slice/crio-32faaf71e97855a1cb6aa3bd19d52c689531407fd638810606403df329a94675 WatchSource:0}: Error finding container 32faaf71e97855a1cb6aa3bd19d52c689531407fd638810606403df329a94675: Status 404 returned error can't find the container with id 32faaf71e97855a1cb6aa3bd19d52c689531407fd638810606403df329a94675
Mar 18 08:59:11.482103 master-0 kubenswrapper[7620]: I0318 08:59:11.482006 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 08:59:11.482103 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld
Mar 18 08:59:11.482103 master-0 kubenswrapper[7620]: [+]process-running ok
Mar 18 08:59:11.482103 master-0 kubenswrapper[7620]: healthz check failed
Mar 18 08:59:11.482444 master-0 kubenswrapper[7620]: I0318 08:59:11.482132 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 08:59:11.848654 master-0 kubenswrapper[7620]: E0318 08:59:11.848408 7620 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 18 08:59:11.849108 master-0 kubenswrapper[7620]: I0318 08:59:11.848982 7620 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="cluster-policy-controller" containerStatusID={"Type":"cri-o","ID":"c7747c2abe864b22fe548817bbd5d5507f3440eb5ca9988572c184f2a9991de4"} pod="openshift-kube-controller-manager/kube-controller-manager-master-0" containerMessage="Container cluster-policy-controller failed startup probe, will be restarted"
Mar 18 08:59:11.849492 master-0 kubenswrapper[7620]: I0318 08:59:11.849177 7620 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="c229b92d307e46237f6273edcc98d387" containerName="cluster-policy-controller" containerID="cri-o://c7747c2abe864b22fe548817bbd5d5507f3440eb5ca9988572c184f2a9991de4" gracePeriod=30
Mar 18 08:59:12.163421 master-0 kubenswrapper[7620]: I0318 08:59:12.163356 7620 generic.go:334] "Generic (PLEG): container finished" podID="094204df314fe45bd5af12ca1b4622bb" containerID="651e82575789e45afdb3cab141808fa3f37d722ac54ebc209361597ebc814204" exitCode=0
Mar 18 08:59:12.163604 master-0 kubenswrapper[7620]: I0318 08:59:12.163454 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"094204df314fe45bd5af12ca1b4622bb","Type":"ContainerDied","Data":"651e82575789e45afdb3cab141808fa3f37d722ac54ebc209361597ebc814204"}
Mar 18 08:59:12.163604 master-0 kubenswrapper[7620]: I0318 08:59:12.163516 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"094204df314fe45bd5af12ca1b4622bb","Type":"ContainerStarted","Data":"32faaf71e97855a1cb6aa3bd19d52c689531407fd638810606403df329a94675"}
Mar 18 08:59:12.163885 master-0 kubenswrapper[7620]: I0318 08:59:12.163839 7620 kubelet.go:1909] "Trying to delete pod" pod="openshift-etcd/etcd-master-0" podUID="ffe89c95-d4e9-4b8d-ae76-37d7bef448df"
Mar 18 08:59:12.163885 master-0 kubenswrapper[7620]: I0318 08:59:12.163883 7620 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0" podUID="ffe89c95-d4e9-4b8d-ae76-37d7bef448df"
Mar 18 08:59:12.166029 master-0 kubenswrapper[7620]: I0318 08:59:12.165995 7620 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_c229b92d307e46237f6273edcc98d387/cluster-policy-controller/0.log"
Mar 18 08:59:12.166483 master-0 kubenswrapper[7620]: I0318 08:59:12.166447 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"c229b92d307e46237f6273edcc98d387","Type":"ContainerStarted","Data":"45cfdaa8068d2e50c19b669b85c533c5500c1a18e949ad70a5ede8a514a84af0"}
Mar 18 08:59:12.166966 master-0 kubenswrapper[7620]: I0318 08:59:12.166920 7620 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="c0c3df70-7d0a-4a46-a625-20553ab284d0"
Mar 18 08:59:12.167017 master-0 kubenswrapper[7620]: I0318 08:59:12.166969 7620 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="c0c3df70-7d0a-4a46-a625-20553ab284d0"
Mar 18 08:59:12.483369 master-0 kubenswrapper[7620]: I0318 08:59:12.483201 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 08:59:12.483369 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld
Mar 18 08:59:12.483369 master-0 kubenswrapper[7620]: [+]process-running ok
Mar 18 08:59:12.483369 master-0 kubenswrapper[7620]: healthz check failed
Mar 18 08:59:12.484371 master-0 kubenswrapper[7620]: I0318 08:59:12.483411 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 08:59:13.482532 master-0 kubenswrapper[7620]: I0318 08:59:13.482451 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 08:59:13.482532 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld
Mar 18 08:59:13.482532 master-0 kubenswrapper[7620]: [+]process-running ok
Mar 18 08:59:13.482532 master-0 kubenswrapper[7620]: healthz check failed
Mar 18 08:59:13.482933 master-0 kubenswrapper[7620]: I0318 08:59:13.482604 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 08:59:14.482031 master-0 kubenswrapper[7620]: I0318 08:59:14.481923 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 08:59:14.482031 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld
Mar 18 08:59:14.482031 master-0 kubenswrapper[7620]: [+]process-running ok
Mar 18 08:59:14.482031 master-0 kubenswrapper[7620]: healthz check failed
Mar 18 08:59:14.482911 master-0 kubenswrapper[7620]: I0318 08:59:14.482050 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 08:59:14.542924 master-0 kubenswrapper[7620]: E0318 08:59:14.535150 7620 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="400ms"
Mar 18 08:59:15.481942 master-0 kubenswrapper[7620]: I0318 08:59:15.481880 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 08:59:15.481942 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld
Mar 18 08:59:15.481942 master-0 kubenswrapper[7620]: [+]process-running ok
Mar 18 08:59:15.481942 master-0 kubenswrapper[7620]: healthz check failed
Mar 18 08:59:15.483350 master-0 kubenswrapper[7620]: I0318 08:59:15.481974 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 08:59:16.252566 master-0 kubenswrapper[7620]: I0318 08:59:16.252481 7620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 18 08:59:16.252566 master-0 kubenswrapper[7620]: I0318 08:59:16.252548 7620 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 18 08:59:16.482123 master-0 kubenswrapper[7620]: I0318 08:59:16.482029 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 08:59:16.482123 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld
Mar 18 08:59:16.482123 master-0
kubenswrapper[7620]: [+]process-running ok Mar 18 08:59:16.482123 master-0 kubenswrapper[7620]: healthz check failed Mar 18 08:59:16.482943 master-0 kubenswrapper[7620]: I0318 08:59:16.482126 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:59:17.482417 master-0 kubenswrapper[7620]: I0318 08:59:17.482326 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:59:17.482417 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld Mar 18 08:59:17.482417 master-0 kubenswrapper[7620]: [+]process-running ok Mar 18 08:59:17.482417 master-0 kubenswrapper[7620]: healthz check failed Mar 18 08:59:17.483402 master-0 kubenswrapper[7620]: I0318 08:59:17.482473 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:59:18.482103 master-0 kubenswrapper[7620]: I0318 08:59:18.481930 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:59:18.482103 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld Mar 18 08:59:18.482103 master-0 kubenswrapper[7620]: [+]process-running ok Mar 18 08:59:18.482103 master-0 kubenswrapper[7620]: healthz check failed Mar 18 08:59:18.483155 master-0 kubenswrapper[7620]: I0318 08:59:18.482110 7620 prober.go:107] "Probe 
failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:59:19.253688 master-0 kubenswrapper[7620]: I0318 08:59:19.253605 7620 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 18 08:59:19.253904 master-0 kubenswrapper[7620]: I0318 08:59:19.253692 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="c229b92d307e46237f6273edcc98d387" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 18 08:59:19.483046 master-0 kubenswrapper[7620]: I0318 08:59:19.482955 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:59:19.483046 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld Mar 18 08:59:19.483046 master-0 kubenswrapper[7620]: [+]process-running ok Mar 18 08:59:19.483046 master-0 kubenswrapper[7620]: healthz check failed Mar 18 08:59:19.484067 master-0 kubenswrapper[7620]: I0318 08:59:19.483065 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with 
statuscode: 500" Mar 18 08:59:20.484697 master-0 kubenswrapper[7620]: I0318 08:59:20.484592 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:59:20.484697 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld Mar 18 08:59:20.484697 master-0 kubenswrapper[7620]: [+]process-running ok Mar 18 08:59:20.484697 master-0 kubenswrapper[7620]: healthz check failed Mar 18 08:59:20.484697 master-0 kubenswrapper[7620]: I0318 08:59:20.484684 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:59:21.481788 master-0 kubenswrapper[7620]: I0318 08:59:21.481699 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:59:21.481788 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld Mar 18 08:59:21.481788 master-0 kubenswrapper[7620]: [+]process-running ok Mar 18 08:59:21.481788 master-0 kubenswrapper[7620]: healthz check failed Mar 18 08:59:21.482244 master-0 kubenswrapper[7620]: I0318 08:59:21.481813 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:59:22.225159 master-0 kubenswrapper[7620]: I0318 08:59:22.225088 7620 scope.go:117] "RemoveContainer" containerID="1e621180058478223aaee3c2dc23f5260e37988416b72d674dfdaa92a6a8ef11" Mar 18 
08:59:22.483153 master-0 kubenswrapper[7620]: I0318 08:59:22.482951 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:59:22.483153 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld Mar 18 08:59:22.483153 master-0 kubenswrapper[7620]: [+]process-running ok Mar 18 08:59:22.483153 master-0 kubenswrapper[7620]: healthz check failed Mar 18 08:59:22.483153 master-0 kubenswrapper[7620]: I0318 08:59:22.483071 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:59:23.258431 master-0 kubenswrapper[7620]: I0318 08:59:23.258275 7620 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-66b84d69b-7h94d_94e9e4fc-bfaa-4ba5-ad6d-fe76b91932e9/ingress-operator/3.log" Mar 18 08:59:23.259754 master-0 kubenswrapper[7620]: I0318 08:59:23.259059 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-66b84d69b-7h94d" event={"ID":"94e9e4fc-bfaa-4ba5-ad6d-fe76b91932e9","Type":"ContainerStarted","Data":"fe944915d18e348bfb79682afadf9c6819f22fab134c6c6c62f0a35f31f26a1f"} Mar 18 08:59:23.483590 master-0 kubenswrapper[7620]: I0318 08:59:23.483446 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:59:23.483590 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld Mar 18 08:59:23.483590 master-0 kubenswrapper[7620]: [+]process-running ok Mar 18 
08:59:23.483590 master-0 kubenswrapper[7620]: healthz check failed Mar 18 08:59:23.483590 master-0 kubenswrapper[7620]: I0318 08:59:23.483577 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:59:24.482613 master-0 kubenswrapper[7620]: I0318 08:59:24.482528 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:59:24.482613 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld Mar 18 08:59:24.482613 master-0 kubenswrapper[7620]: [+]process-running ok Mar 18 08:59:24.482613 master-0 kubenswrapper[7620]: healthz check failed Mar 18 08:59:24.483797 master-0 kubenswrapper[7620]: I0318 08:59:24.482634 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:59:24.936728 master-0 kubenswrapper[7620]: E0318 08:59:24.936578 7620 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" interval="800ms" Mar 18 08:59:25.482127 master-0 kubenswrapper[7620]: I0318 08:59:25.481965 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 
08:59:25.482127 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld Mar 18 08:59:25.482127 master-0 kubenswrapper[7620]: [+]process-running ok Mar 18 08:59:25.482127 master-0 kubenswrapper[7620]: healthz check failed Mar 18 08:59:25.482127 master-0 kubenswrapper[7620]: I0318 08:59:25.482109 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:59:26.482816 master-0 kubenswrapper[7620]: I0318 08:59:26.482698 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:59:26.482816 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld Mar 18 08:59:26.482816 master-0 kubenswrapper[7620]: [+]process-running ok Mar 18 08:59:26.482816 master-0 kubenswrapper[7620]: healthz check failed Mar 18 08:59:26.482816 master-0 kubenswrapper[7620]: I0318 08:59:26.482798 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:59:27.482049 master-0 kubenswrapper[7620]: I0318 08:59:27.481919 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:59:27.482049 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld Mar 18 08:59:27.482049 master-0 kubenswrapper[7620]: [+]process-running ok Mar 18 08:59:27.482049 master-0 kubenswrapper[7620]: healthz 
check failed Mar 18 08:59:27.482746 master-0 kubenswrapper[7620]: I0318 08:59:27.482054 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:59:28.481359 master-0 kubenswrapper[7620]: I0318 08:59:28.481233 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:59:28.481359 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld Mar 18 08:59:28.481359 master-0 kubenswrapper[7620]: [+]process-running ok Mar 18 08:59:28.481359 master-0 kubenswrapper[7620]: healthz check failed Mar 18 08:59:28.481359 master-0 kubenswrapper[7620]: I0318 08:59:28.481330 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:59:29.253402 master-0 kubenswrapper[7620]: I0318 08:59:29.253260 7620 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 18 08:59:29.253402 master-0 kubenswrapper[7620]: I0318 08:59:29.253373 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="c229b92d307e46237f6273edcc98d387" containerName="cluster-policy-controller" probeResult="failure" output="Get 
\"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 18 08:59:29.482463 master-0 kubenswrapper[7620]: I0318 08:59:29.482387 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:59:29.482463 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld Mar 18 08:59:29.482463 master-0 kubenswrapper[7620]: [+]process-running ok Mar 18 08:59:29.482463 master-0 kubenswrapper[7620]: healthz check failed Mar 18 08:59:29.483565 master-0 kubenswrapper[7620]: I0318 08:59:29.482463 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:59:30.482616 master-0 kubenswrapper[7620]: I0318 08:59:30.482524 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:59:30.482616 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld Mar 18 08:59:30.482616 master-0 kubenswrapper[7620]: [+]process-running ok Mar 18 08:59:30.482616 master-0 kubenswrapper[7620]: healthz check failed Mar 18 08:59:30.483414 master-0 kubenswrapper[7620]: I0318 08:59:30.482626 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:59:31.481903 master-0 kubenswrapper[7620]: I0318 08:59:31.481806 
7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:59:31.481903 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld Mar 18 08:59:31.481903 master-0 kubenswrapper[7620]: [+]process-running ok Mar 18 08:59:31.481903 master-0 kubenswrapper[7620]: healthz check failed Mar 18 08:59:31.482398 master-0 kubenswrapper[7620]: I0318 08:59:31.481942 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:59:32.482172 master-0 kubenswrapper[7620]: I0318 08:59:32.482022 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:59:32.482172 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld Mar 18 08:59:32.482172 master-0 kubenswrapper[7620]: [+]process-running ok Mar 18 08:59:32.482172 master-0 kubenswrapper[7620]: healthz check failed Mar 18 08:59:32.483040 master-0 kubenswrapper[7620]: I0318 08:59:32.482184 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:59:33.482183 master-0 kubenswrapper[7620]: I0318 08:59:33.481987 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" 
start-of-body=[-]backend-http failed: reason withheld Mar 18 08:59:33.482183 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld Mar 18 08:59:33.482183 master-0 kubenswrapper[7620]: [+]process-running ok Mar 18 08:59:33.482183 master-0 kubenswrapper[7620]: healthz check failed Mar 18 08:59:33.482183 master-0 kubenswrapper[7620]: I0318 08:59:33.482172 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:59:34.241320 master-0 kubenswrapper[7620]: E0318 08:59:34.241217 7620 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-18T08:59:24Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-18T08:59:24Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-18T08:59:24Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-18T08:59:24Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"master-0\": Patch \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0/status?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 18 08:59:34.482521 master-0 kubenswrapper[7620]: I0318 08:59:34.482393 
7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:59:34.482521 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld Mar 18 08:59:34.482521 master-0 kubenswrapper[7620]: [+]process-running ok Mar 18 08:59:34.482521 master-0 kubenswrapper[7620]: healthz check failed Mar 18 08:59:34.482521 master-0 kubenswrapper[7620]: I0318 08:59:34.482502 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:59:35.482353 master-0 kubenswrapper[7620]: I0318 08:59:35.482256 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:59:35.482353 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld Mar 18 08:59:35.482353 master-0 kubenswrapper[7620]: [+]process-running ok Mar 18 08:59:35.482353 master-0 kubenswrapper[7620]: healthz check failed Mar 18 08:59:35.483094 master-0 kubenswrapper[7620]: I0318 08:59:35.482372 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:59:35.738098 master-0 kubenswrapper[7620]: E0318 08:59:35.737973 7620 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: 
request canceled (Client.Timeout exceeded while awaiting headers)" interval="1.6s" Mar 18 08:59:36.481690 master-0 kubenswrapper[7620]: I0318 08:59:36.481574 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:59:36.481690 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld Mar 18 08:59:36.481690 master-0 kubenswrapper[7620]: [+]process-running ok Mar 18 08:59:36.481690 master-0 kubenswrapper[7620]: healthz check failed Mar 18 08:59:36.482357 master-0 kubenswrapper[7620]: I0318 08:59:36.481688 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:59:37.482764 master-0 kubenswrapper[7620]: I0318 08:59:37.482684 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:59:37.482764 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld Mar 18 08:59:37.482764 master-0 kubenswrapper[7620]: [+]process-running ok Mar 18 08:59:37.482764 master-0 kubenswrapper[7620]: healthz check failed Mar 18 08:59:37.483409 master-0 kubenswrapper[7620]: I0318 08:59:37.482771 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:59:38.482221 master-0 kubenswrapper[7620]: I0318 08:59:38.482131 7620 patch_prober.go:28] interesting 
pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:59:38.482221 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld Mar 18 08:59:38.482221 master-0 kubenswrapper[7620]: [+]process-running ok Mar 18 08:59:38.482221 master-0 kubenswrapper[7620]: healthz check failed Mar 18 08:59:38.482680 master-0 kubenswrapper[7620]: I0318 08:59:38.482254 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:59:38.790068 master-0 kubenswrapper[7620]: E0318 08:59:38.789821 7620 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{router-default-7dcf5569b5-8sbgd.189de3a774130dde openshift-ingress 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-ingress,Name:router-default-7dcf5569b5-8sbgd,UID:ad4cf9b2-4e66-4921-a30c-7b659bff06ab,APIVersion:v1,ResourceVersion:11418,FieldPath:spec.containers{router},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:002dfb86e17ad8f5cc232a7d2dce183b23335c8ecb7e7d31dcf3e4446b390777\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 08:55:36.60029283 +0000 UTC m=+400.595074582,LastTimestamp:2026-03-18 08:58:22.620657119 +0000 UTC m=+566.615438911,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 08:59:39.253888 master-0 kubenswrapper[7620]: I0318 08:59:39.253756 7620 
patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 18 08:59:39.254132 master-0 kubenswrapper[7620]: I0318 08:59:39.253893 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="c229b92d307e46237f6273edcc98d387" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 18 08:59:39.254132 master-0 kubenswrapper[7620]: I0318 08:59:39.253979 7620 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 08:59:39.482436 master-0 kubenswrapper[7620]: I0318 08:59:39.482354 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:59:39.482436 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld Mar 18 08:59:39.482436 master-0 kubenswrapper[7620]: [+]process-running ok Mar 18 08:59:39.482436 master-0 kubenswrapper[7620]: healthz check failed Mar 18 08:59:39.482991 master-0 kubenswrapper[7620]: I0318 08:59:39.482478 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:59:40.481082 master-0 kubenswrapper[7620]: I0318 
08:59:40.480988 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:59:40.481082 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld Mar 18 08:59:40.481082 master-0 kubenswrapper[7620]: [+]process-running ok Mar 18 08:59:40.481082 master-0 kubenswrapper[7620]: healthz check failed Mar 18 08:59:40.481082 master-0 kubenswrapper[7620]: I0318 08:59:40.481072 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:59:41.482157 master-0 kubenswrapper[7620]: I0318 08:59:41.481848 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:59:41.482157 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld Mar 18 08:59:41.482157 master-0 kubenswrapper[7620]: [+]process-running ok Mar 18 08:59:41.482157 master-0 kubenswrapper[7620]: healthz check failed Mar 18 08:59:41.482157 master-0 kubenswrapper[7620]: I0318 08:59:41.482144 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:59:42.482695 master-0 kubenswrapper[7620]: I0318 08:59:42.482612 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" 
start-of-body=[-]backend-http failed: reason withheld Mar 18 08:59:42.482695 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld Mar 18 08:59:42.482695 master-0 kubenswrapper[7620]: [+]process-running ok Mar 18 08:59:42.482695 master-0 kubenswrapper[7620]: healthz check failed Mar 18 08:59:42.483732 master-0 kubenswrapper[7620]: I0318 08:59:42.482703 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:59:43.420368 master-0 kubenswrapper[7620]: I0318 08:59:43.420324 7620 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_c229b92d307e46237f6273edcc98d387/cluster-policy-controller/1.log" Mar 18 08:59:43.422805 master-0 kubenswrapper[7620]: I0318 08:59:43.422775 7620 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_c229b92d307e46237f6273edcc98d387/cluster-policy-controller/0.log" Mar 18 08:59:43.423717 master-0 kubenswrapper[7620]: I0318 08:59:43.423651 7620 generic.go:334] "Generic (PLEG): container finished" podID="c229b92d307e46237f6273edcc98d387" containerID="45cfdaa8068d2e50c19b669b85c533c5500c1a18e949ad70a5ede8a514a84af0" exitCode=255 Mar 18 08:59:43.423994 master-0 kubenswrapper[7620]: I0318 08:59:43.423949 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"c229b92d307e46237f6273edcc98d387","Type":"ContainerDied","Data":"45cfdaa8068d2e50c19b669b85c533c5500c1a18e949ad70a5ede8a514a84af0"} Mar 18 08:59:43.424195 master-0 kubenswrapper[7620]: I0318 08:59:43.424169 7620 scope.go:117] "RemoveContainer" containerID="c7747c2abe864b22fe548817bbd5d5507f3440eb5ca9988572c184f2a9991de4" Mar 18 08:59:43.482665 master-0 
kubenswrapper[7620]: I0318 08:59:43.482615 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:59:43.482665 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld Mar 18 08:59:43.482665 master-0 kubenswrapper[7620]: [+]process-running ok Mar 18 08:59:43.482665 master-0 kubenswrapper[7620]: healthz check failed Mar 18 08:59:43.483516 master-0 kubenswrapper[7620]: I0318 08:59:43.483480 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:59:44.242643 master-0 kubenswrapper[7620]: E0318 08:59:44.242567 7620 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 18 08:59:44.436090 master-0 kubenswrapper[7620]: I0318 08:59:44.436008 7620 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_c229b92d307e46237f6273edcc98d387/cluster-policy-controller/1.log" Mar 18 08:59:44.482909 master-0 kubenswrapper[7620]: I0318 08:59:44.482760 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:59:44.482909 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld Mar 18 08:59:44.482909 master-0 kubenswrapper[7620]: [+]process-running ok Mar 18 08:59:44.482909 master-0 
kubenswrapper[7620]: healthz check failed Mar 18 08:59:44.484142 master-0 kubenswrapper[7620]: I0318 08:59:44.482904 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:59:45.452050 master-0 kubenswrapper[7620]: I0318 08:59:45.451971 7620 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-7dff898856-9xtls_ccf74af5-d4fd-4ed3-9784-42397ea798c5/cluster-cloud-controller-manager/0.log" Mar 18 08:59:45.452050 master-0 kubenswrapper[7620]: I0318 08:59:45.452055 7620 generic.go:334] "Generic (PLEG): container finished" podID="ccf74af5-d4fd-4ed3-9784-42397ea798c5" containerID="eaad38e5e9adf0c7d9032d4d158adc24f0ed091bb2d04b70f67f104373652877" exitCode=1 Mar 18 08:59:45.452410 master-0 kubenswrapper[7620]: I0318 08:59:45.452101 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-9xtls" event={"ID":"ccf74af5-d4fd-4ed3-9784-42397ea798c5","Type":"ContainerDied","Data":"eaad38e5e9adf0c7d9032d4d158adc24f0ed091bb2d04b70f67f104373652877"} Mar 18 08:59:45.453004 master-0 kubenswrapper[7620]: I0318 08:59:45.452954 7620 scope.go:117] "RemoveContainer" containerID="eaad38e5e9adf0c7d9032d4d158adc24f0ed091bb2d04b70f67f104373652877" Mar 18 08:59:45.482327 master-0 kubenswrapper[7620]: I0318 08:59:45.482257 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:59:45.482327 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld Mar 18 08:59:45.482327 master-0 
kubenswrapper[7620]: [+]process-running ok Mar 18 08:59:45.482327 master-0 kubenswrapper[7620]: healthz check failed Mar 18 08:59:45.482687 master-0 kubenswrapper[7620]: I0318 08:59:45.482354 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:59:45.939132 master-0 kubenswrapper[7620]: E0318 08:59:45.939005 7620 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod33a5c021_23c3_4a97_b5f3_77fd6dcba1ab.slice/crio-conmon-93249f7db2dc0c3a5b0fe1351b49e56d1937b973c4c8c817cae063e4b26914a3.scope\": RecentStats: unable to find data in memory cache]" Mar 18 08:59:46.167846 master-0 kubenswrapper[7620]: E0318 08:59:46.167742 7620 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="openshift-etcd/etcd-master-0" Mar 18 08:59:46.170383 master-0 kubenswrapper[7620]: E0318 08:59:46.170252 7620 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 08:59:46.170982 master-0 kubenswrapper[7620]: I0318 08:59:46.170917 7620 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="cluster-policy-controller" containerStatusID={"Type":"cri-o","ID":"45cfdaa8068d2e50c19b669b85c533c5500c1a18e949ad70a5ede8a514a84af0"} pod="openshift-kube-controller-manager/kube-controller-manager-master-0" containerMessage="Container cluster-policy-controller failed startup probe, will be restarted" Mar 18 08:59:46.171166 master-0 kubenswrapper[7620]: I0318 08:59:46.171081 7620 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="c229b92d307e46237f6273edcc98d387" containerName="cluster-policy-controller" containerID="cri-o://45cfdaa8068d2e50c19b669b85c533c5500c1a18e949ad70a5ede8a514a84af0" gracePeriod=30 Mar 18 08:59:46.466327 master-0 kubenswrapper[7620]: I0318 08:59:46.466253 7620 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-controller_operator-controller-controller-manager-57777556ff-chjqr_33a5c021-23c3-4a97-b5f3-77fd6dcba1ab/manager/1.log" Mar 18 08:59:46.472659 master-0 kubenswrapper[7620]: I0318 08:59:46.472576 7620 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-controller_operator-controller-controller-manager-57777556ff-chjqr_33a5c021-23c3-4a97-b5f3-77fd6dcba1ab/manager/0.log" Mar 18 08:59:46.472898 master-0 kubenswrapper[7620]: I0318 08:59:46.472658 7620 generic.go:334] "Generic (PLEG): container finished" podID="33a5c021-23c3-4a97-b5f3-77fd6dcba1ab" containerID="93249f7db2dc0c3a5b0fe1351b49e56d1937b973c4c8c817cae063e4b26914a3" exitCode=1 Mar 18 08:59:46.472898 master-0 kubenswrapper[7620]: I0318 08:59:46.472775 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-chjqr" event={"ID":"33a5c021-23c3-4a97-b5f3-77fd6dcba1ab","Type":"ContainerDied","Data":"93249f7db2dc0c3a5b0fe1351b49e56d1937b973c4c8c817cae063e4b26914a3"} Mar 18 08:59:46.472898 master-0 kubenswrapper[7620]: I0318 08:59:46.472869 7620 scope.go:117] "RemoveContainer" containerID="90143bd188df252a12ebaece10ff43bd805ca65e0b3a851506a5ecef442477c4" Mar 18 08:59:46.473778 master-0 kubenswrapper[7620]: I0318 08:59:46.473727 7620 scope.go:117] "RemoveContainer" containerID="93249f7db2dc0c3a5b0fe1351b49e56d1937b973c4c8c817cae063e4b26914a3" Mar 18 08:59:46.474249 master-0 kubenswrapper[7620]: E0318 08:59:46.474202 7620 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=operator-controller-controller-manager-57777556ff-chjqr_openshift-operator-controller(33a5c021-23c3-4a97-b5f3-77fd6dcba1ab)\"" pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-chjqr" podUID="33a5c021-23c3-4a97-b5f3-77fd6dcba1ab" Mar 18 08:59:46.481659 master-0 kubenswrapper[7620]: I0318 08:59:46.481588 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:59:46.481659 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld Mar 18 08:59:46.481659 master-0 kubenswrapper[7620]: [+]process-running ok Mar 18 08:59:46.481659 master-0 kubenswrapper[7620]: healthz check failed Mar 18 08:59:46.482022 master-0 kubenswrapper[7620]: I0318 08:59:46.481662 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:59:46.484779 master-0 kubenswrapper[7620]: I0318 08:59:46.484722 7620 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-7dff898856-9xtls_ccf74af5-d4fd-4ed3-9784-42397ea798c5/cluster-cloud-controller-manager/0.log" Mar 18 08:59:46.484951 master-0 kubenswrapper[7620]: I0318 08:59:46.484874 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-9xtls" 
event={"ID":"ccf74af5-d4fd-4ed3-9784-42397ea798c5","Type":"ContainerStarted","Data":"45a46352c889a2850287ed2db095358a3ae3d2cc6bdcab4e9ad389577dc29fbe"} Mar 18 08:59:46.486902 master-0 kubenswrapper[7620]: I0318 08:59:46.486876 7620 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-64854d9cff-khm5n_29ba6765-61c9-4f78-8f44-570418000c5c/snapshot-controller/0.log" Mar 18 08:59:46.487060 master-0 kubenswrapper[7620]: I0318 08:59:46.486920 7620 generic.go:334] "Generic (PLEG): container finished" podID="29ba6765-61c9-4f78-8f44-570418000c5c" containerID="4bd8b99a6f02b5537643630112eefdd3136e85b5e17843dfdadb3cf7528eedf7" exitCode=1 Mar 18 08:59:46.487060 master-0 kubenswrapper[7620]: I0318 08:59:46.486946 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-64854d9cff-khm5n" event={"ID":"29ba6765-61c9-4f78-8f44-570418000c5c","Type":"ContainerDied","Data":"4bd8b99a6f02b5537643630112eefdd3136e85b5e17843dfdadb3cf7528eedf7"} Mar 18 08:59:46.487273 master-0 kubenswrapper[7620]: I0318 08:59:46.487247 7620 scope.go:117] "RemoveContainer" containerID="4bd8b99a6f02b5537643630112eefdd3136e85b5e17843dfdadb3cf7528eedf7" Mar 18 08:59:47.340357 master-0 kubenswrapper[7620]: E0318 08:59:47.340015 7620 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="3.2s" Mar 18 08:59:47.481840 master-0 kubenswrapper[7620]: I0318 08:59:47.481753 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:59:47.481840 master-0 
kubenswrapper[7620]: [-]has-synced failed: reason withheld Mar 18 08:59:47.481840 master-0 kubenswrapper[7620]: [+]process-running ok Mar 18 08:59:47.481840 master-0 kubenswrapper[7620]: healthz check failed Mar 18 08:59:47.482294 master-0 kubenswrapper[7620]: I0318 08:59:47.481849 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:59:47.499481 master-0 kubenswrapper[7620]: I0318 08:59:47.499406 7620 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_c229b92d307e46237f6273edcc98d387/cluster-policy-controller/1.log" Mar 18 08:59:47.501176 master-0 kubenswrapper[7620]: I0318 08:59:47.501121 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"c229b92d307e46237f6273edcc98d387","Type":"ContainerStarted","Data":"a8de6e40ce2cb521c2ecf8231ab8f8248b2d78098f13765fb00318fce72caaa6"} Mar 18 08:59:47.501564 master-0 kubenswrapper[7620]: I0318 08:59:47.501522 7620 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="c0c3df70-7d0a-4a46-a625-20553ab284d0" Mar 18 08:59:47.501564 master-0 kubenswrapper[7620]: I0318 08:59:47.501557 7620 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="c0c3df70-7d0a-4a46-a625-20553ab284d0" Mar 18 08:59:47.506001 master-0 kubenswrapper[7620]: I0318 08:59:47.505959 7620 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-64854d9cff-khm5n_29ba6765-61c9-4f78-8f44-570418000c5c/snapshot-controller/0.log" Mar 18 08:59:47.506128 master-0 kubenswrapper[7620]: I0318 08:59:47.506087 7620 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-64854d9cff-khm5n" event={"ID":"29ba6765-61c9-4f78-8f44-570418000c5c","Type":"ContainerStarted","Data":"2e9b23304fcd4a4d986aca969c93ced96fc0dd7e8a3bf1c965fb2f3c5cab2fe7"} Mar 18 08:59:47.509384 master-0 kubenswrapper[7620]: I0318 08:59:47.509329 7620 generic.go:334] "Generic (PLEG): container finished" podID="094204df314fe45bd5af12ca1b4622bb" containerID="070d05778f03eb8121f42051c1852470fb61e1c95f54e85ee0be41826b2301b3" exitCode=0 Mar 18 08:59:47.509565 master-0 kubenswrapper[7620]: I0318 08:59:47.509419 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"094204df314fe45bd5af12ca1b4622bb","Type":"ContainerDied","Data":"070d05778f03eb8121f42051c1852470fb61e1c95f54e85ee0be41826b2301b3"} Mar 18 08:59:47.510946 master-0 kubenswrapper[7620]: I0318 08:59:47.510890 7620 kubelet.go:1909] "Trying to delete pod" pod="openshift-etcd/etcd-master-0" podUID="ffe89c95-d4e9-4b8d-ae76-37d7bef448df" Mar 18 08:59:47.511149 master-0 kubenswrapper[7620]: I0318 08:59:47.510969 7620 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0" podUID="ffe89c95-d4e9-4b8d-ae76-37d7bef448df" Mar 18 08:59:47.512119 master-0 kubenswrapper[7620]: I0318 08:59:47.512064 7620 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-controller_operator-controller-controller-manager-57777556ff-chjqr_33a5c021-23c3-4a97-b5f3-77fd6dcba1ab/manager/1.log" Mar 18 08:59:48.482227 master-0 kubenswrapper[7620]: I0318 08:59:48.482138 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:59:48.482227 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld Mar 18 08:59:48.482227 master-0 
kubenswrapper[7620]: [+]process-running ok Mar 18 08:59:48.482227 master-0 kubenswrapper[7620]: healthz check failed Mar 18 08:59:48.483276 master-0 kubenswrapper[7620]: I0318 08:59:48.482242 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:59:49.481677 master-0 kubenswrapper[7620]: I0318 08:59:49.481580 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:59:49.481677 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld Mar 18 08:59:49.481677 master-0 kubenswrapper[7620]: [+]process-running ok Mar 18 08:59:49.481677 master-0 kubenswrapper[7620]: healthz check failed Mar 18 08:59:49.483036 master-0 kubenswrapper[7620]: I0318 08:59:49.481698 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:59:50.482781 master-0 kubenswrapper[7620]: I0318 08:59:50.482693 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:59:50.482781 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld Mar 18 08:59:50.482781 master-0 kubenswrapper[7620]: [+]process-running ok Mar 18 08:59:50.482781 master-0 kubenswrapper[7620]: healthz check failed Mar 18 08:59:50.483626 master-0 kubenswrapper[7620]: I0318 08:59:50.482806 7620 prober.go:107] "Probe 
failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:59:51.482637 master-0 kubenswrapper[7620]: I0318 08:59:51.482533 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:59:51.482637 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld Mar 18 08:59:51.482637 master-0 kubenswrapper[7620]: [+]process-running ok Mar 18 08:59:51.482637 master-0 kubenswrapper[7620]: healthz check failed Mar 18 08:59:51.483696 master-0 kubenswrapper[7620]: I0318 08:59:51.482672 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:59:52.481803 master-0 kubenswrapper[7620]: I0318 08:59:52.481718 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:59:52.481803 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld Mar 18 08:59:52.481803 master-0 kubenswrapper[7620]: [+]process-running ok Mar 18 08:59:52.481803 master-0 kubenswrapper[7620]: healthz check failed Mar 18 08:59:52.482292 master-0 kubenswrapper[7620]: I0318 08:59:52.481819 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 
500" Mar 18 08:59:53.481741 master-0 kubenswrapper[7620]: I0318 08:59:53.481662 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:59:53.481741 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld Mar 18 08:59:53.481741 master-0 kubenswrapper[7620]: [+]process-running ok Mar 18 08:59:53.481741 master-0 kubenswrapper[7620]: healthz check failed Mar 18 08:59:53.482355 master-0 kubenswrapper[7620]: I0318 08:59:53.481763 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:59:54.243795 master-0 kubenswrapper[7620]: E0318 08:59:54.243717 7620 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 18 08:59:54.482473 master-0 kubenswrapper[7620]: I0318 08:59:54.482408 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:59:54.482473 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld Mar 18 08:59:54.482473 master-0 kubenswrapper[7620]: [+]process-running ok Mar 18 08:59:54.482473 master-0 kubenswrapper[7620]: healthz check failed Mar 18 08:59:54.482473 master-0 kubenswrapper[7620]: I0318 08:59:54.482487 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" 
podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:59:55.482359 master-0 kubenswrapper[7620]: I0318 08:59:55.482277 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:59:55.482359 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld Mar 18 08:59:55.482359 master-0 kubenswrapper[7620]: [+]process-running ok Mar 18 08:59:55.482359 master-0 kubenswrapper[7620]: healthz check failed Mar 18 08:59:55.483064 master-0 kubenswrapper[7620]: I0318 08:59:55.482394 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:59:55.501479 master-0 kubenswrapper[7620]: I0318 08:59:55.501333 7620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-chjqr" Mar 18 08:59:55.502629 master-0 kubenswrapper[7620]: I0318 08:59:55.502589 7620 scope.go:117] "RemoveContainer" containerID="93249f7db2dc0c3a5b0fe1351b49e56d1937b973c4c8c817cae063e4b26914a3" Mar 18 08:59:55.503009 master-0 kubenswrapper[7620]: E0318 08:59:55.502961 7620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=operator-controller-controller-manager-57777556ff-chjqr_openshift-operator-controller(33a5c021-23c3-4a97-b5f3-77fd6dcba1ab)\"" pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-chjqr" podUID="33a5c021-23c3-4a97-b5f3-77fd6dcba1ab" Mar 18 
08:59:56.252554 master-0 kubenswrapper[7620]: I0318 08:59:56.252486 7620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 08:59:56.252918 master-0 kubenswrapper[7620]: I0318 08:59:56.252875 7620 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 08:59:56.482767 master-0 kubenswrapper[7620]: I0318 08:59:56.482694 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:59:56.482767 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld Mar 18 08:59:56.482767 master-0 kubenswrapper[7620]: [+]process-running ok Mar 18 08:59:56.482767 master-0 kubenswrapper[7620]: healthz check failed Mar 18 08:59:56.483782 master-0 kubenswrapper[7620]: I0318 08:59:56.482800 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:59:56.624955 master-0 kubenswrapper[7620]: I0318 08:59:56.624890 7620 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-7dff898856-9xtls_ccf74af5-d4fd-4ed3-9784-42397ea798c5/config-sync-controllers/0.log" Mar 18 08:59:56.625696 master-0 kubenswrapper[7620]: I0318 08:59:56.625656 7620 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-7dff898856-9xtls_ccf74af5-d4fd-4ed3-9784-42397ea798c5/cluster-cloud-controller-manager/0.log" Mar 18 08:59:56.625947 master-0 
kubenswrapper[7620]: I0318 08:59:56.625910 7620 generic.go:334] "Generic (PLEG): container finished" podID="ccf74af5-d4fd-4ed3-9784-42397ea798c5" containerID="186b22d65f0d4470eb32e6b82579dc544a089964b2ec507b602aabe9b3c9e6c1" exitCode=1 Mar 18 08:59:56.626106 master-0 kubenswrapper[7620]: I0318 08:59:56.626046 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-9xtls" event={"ID":"ccf74af5-d4fd-4ed3-9784-42397ea798c5","Type":"ContainerDied","Data":"186b22d65f0d4470eb32e6b82579dc544a089964b2ec507b602aabe9b3c9e6c1"} Mar 18 08:59:56.627026 master-0 kubenswrapper[7620]: I0318 08:59:56.626979 7620 scope.go:117] "RemoveContainer" containerID="186b22d65f0d4470eb32e6b82579dc544a089964b2ec507b602aabe9b3c9e6c1" Mar 18 08:59:57.483252 master-0 kubenswrapper[7620]: I0318 08:59:57.483158 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:59:57.483252 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld Mar 18 08:59:57.483252 master-0 kubenswrapper[7620]: [+]process-running ok Mar 18 08:59:57.483252 master-0 kubenswrapper[7620]: healthz check failed Mar 18 08:59:57.483252 master-0 kubenswrapper[7620]: I0318 08:59:57.483238 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:59:57.639239 master-0 kubenswrapper[7620]: I0318 08:59:57.639166 7620 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-7dff898856-9xtls_ccf74af5-d4fd-4ed3-9784-42397ea798c5/config-sync-controllers/0.log" Mar 18 08:59:57.640124 master-0 kubenswrapper[7620]: I0318 08:59:57.640081 7620 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-7dff898856-9xtls_ccf74af5-d4fd-4ed3-9784-42397ea798c5/cluster-cloud-controller-manager/0.log" Mar 18 08:59:57.640208 master-0 kubenswrapper[7620]: I0318 08:59:57.640154 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-9xtls" event={"ID":"ccf74af5-d4fd-4ed3-9784-42397ea798c5","Type":"ContainerStarted","Data":"b83fd94550bfa784ea3a52c6b51e9465566611b88e5dda6ffa4edf65c8d383d7"} Mar 18 08:59:58.482653 master-0 kubenswrapper[7620]: I0318 08:59:58.482571 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:59:58.482653 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld Mar 18 08:59:58.482653 master-0 kubenswrapper[7620]: [+]process-running ok Mar 18 08:59:58.482653 master-0 kubenswrapper[7620]: healthz check failed Mar 18 08:59:58.483319 master-0 kubenswrapper[7620]: I0318 08:59:58.482674 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:59:59.254192 master-0 kubenswrapper[7620]: I0318 08:59:59.254058 7620 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/cluster-policy-controller 
namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 18 08:59:59.254542 master-0 kubenswrapper[7620]: I0318 08:59:59.254213 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="c229b92d307e46237f6273edcc98d387" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 18 08:59:59.483264 master-0 kubenswrapper[7620]: I0318 08:59:59.483178 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:59:59.483264 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld Mar 18 08:59:59.483264 master-0 kubenswrapper[7620]: [+]process-running ok Mar 18 08:59:59.483264 master-0 kubenswrapper[7620]: healthz check failed Mar 18 08:59:59.484379 master-0 kubenswrapper[7620]: I0318 08:59:59.483296 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 09:00:00.482440 master-0 kubenswrapper[7620]: I0318 09:00:00.482337 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 09:00:00.482440 master-0 kubenswrapper[7620]: [-]has-synced failed: 
reason withheld
Mar 18 09:00:00.482440 master-0 kubenswrapper[7620]: [+]process-running ok
Mar 18 09:00:00.482440 master-0 kubenswrapper[7620]: healthz check failed
Mar 18 09:00:00.483275 master-0 kubenswrapper[7620]: I0318 09:00:00.482442 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 09:00:00.540482 master-0 kubenswrapper[7620]: E0318 09:00:00.540400 7620 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="6.4s"
Mar 18 09:00:00.668008 master-0 kubenswrapper[7620]: I0318 09:00:00.667942 7620 generic.go:334] "Generic (PLEG): container finished" podID="34f2ec5e-cb68-415b-a9f2-5b7f10fa9bbe" containerID="75d1410d48296cb4f2446dcf35dcfdb58ad3083bc984cecb00db26ae1fc3d758" exitCode=0
Mar 18 09:00:00.668008 master-0 kubenswrapper[7620]: I0318 09:00:00.668017 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-89ccd998f-bcwsv" event={"ID":"34f2ec5e-cb68-415b-a9f2-5b7f10fa9bbe","Type":"ContainerDied","Data":"75d1410d48296cb4f2446dcf35dcfdb58ad3083bc984cecb00db26ae1fc3d758"}
Mar 18 09:00:00.668364 master-0 kubenswrapper[7620]: I0318 09:00:00.668082 7620 scope.go:117] "RemoveContainer" containerID="a4d8be3eaea0cde18cce25fc2e7762bfa7a4e08c4813605594a3dbbfbfb560f1"
Mar 18 09:00:00.669179 master-0 kubenswrapper[7620]: I0318 09:00:00.668821 7620 scope.go:117] "RemoveContainer" containerID="75d1410d48296cb4f2446dcf35dcfdb58ad3083bc984cecb00db26ae1fc3d758"
Mar 18 09:00:00.669359 master-0 kubenswrapper[7620]: E0318 09:00:00.669305 7620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"marketplace-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=marketplace-operator pod=marketplace-operator-89ccd998f-bcwsv_openshift-marketplace(34f2ec5e-cb68-415b-a9f2-5b7f10fa9bbe)\"" pod="openshift-marketplace/marketplace-operator-89ccd998f-bcwsv" podUID="34f2ec5e-cb68-415b-a9f2-5b7f10fa9bbe"
Mar 18 09:00:01.482631 master-0 kubenswrapper[7620]: I0318 09:00:01.482507 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 09:00:01.482631 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld
Mar 18 09:00:01.482631 master-0 kubenswrapper[7620]: [+]process-running ok
Mar 18 09:00:01.482631 master-0 kubenswrapper[7620]: healthz check failed
Mar 18 09:00:01.482631 master-0 kubenswrapper[7620]: I0318 09:00:01.482618 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 09:00:02.482551 master-0 kubenswrapper[7620]: I0318 09:00:02.482446 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 09:00:02.482551 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld
Mar 18 09:00:02.482551 master-0 kubenswrapper[7620]: [+]process-running ok
Mar 18 09:00:02.482551 master-0 kubenswrapper[7620]: healthz check failed
Mar 18 09:00:02.482551 master-0 kubenswrapper[7620]: I0318 09:00:02.482548 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 09:00:03.481670 master-0 kubenswrapper[7620]: I0318 09:00:03.481561 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 09:00:03.481670 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld
Mar 18 09:00:03.481670 master-0 kubenswrapper[7620]: [+]process-running ok
Mar 18 09:00:03.481670 master-0 kubenswrapper[7620]: healthz check failed
Mar 18 09:00:03.482210 master-0 kubenswrapper[7620]: I0318 09:00:03.481695 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 09:00:04.245003 master-0 kubenswrapper[7620]: E0318 09:00:04.244929 7620 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Mar 18 09:00:04.481759 master-0 kubenswrapper[7620]: I0318 09:00:04.481655 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 09:00:04.481759 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld
Mar 18 09:00:04.481759 master-0 kubenswrapper[7620]: [+]process-running ok
Mar 18 09:00:04.481759 master-0 kubenswrapper[7620]: healthz check failed
Mar 18 09:00:04.481759 master-0 kubenswrapper[7620]: I0318 09:00:04.481738 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 09:00:05.481719 master-0 kubenswrapper[7620]: I0318 09:00:05.481625 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 09:00:05.481719 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld
Mar 18 09:00:05.481719 master-0 kubenswrapper[7620]: [+]process-running ok
Mar 18 09:00:05.481719 master-0 kubenswrapper[7620]: healthz check failed
Mar 18 09:00:05.482427 master-0 kubenswrapper[7620]: I0318 09:00:05.481743 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 09:00:05.500727 master-0 kubenswrapper[7620]: I0318 09:00:05.500585 7620 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-chjqr"
Mar 18 09:00:05.501720 master-0 kubenswrapper[7620]: I0318 09:00:05.501678 7620 scope.go:117] "RemoveContainer" containerID="93249f7db2dc0c3a5b0fe1351b49e56d1937b973c4c8c817cae063e4b26914a3"
Mar 18 09:00:05.721553 master-0 kubenswrapper[7620]: I0318 09:00:05.721522 7620 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-catalogd_catalogd-controller-manager-6864dc98f7-phjp8_43fbd379-dd1e-4287-bd76-fd3ec51cde43/manager/1.log"
Mar 18 09:00:05.722470 master-0 kubenswrapper[7620]: I0318 09:00:05.722427 7620 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-catalogd_catalogd-controller-manager-6864dc98f7-phjp8_43fbd379-dd1e-4287-bd76-fd3ec51cde43/manager/0.log"
Mar 18 09:00:05.722919 master-0 kubenswrapper[7620]: I0318 09:00:05.722892 7620 generic.go:334] "Generic (PLEG): container finished" podID="43fbd379-dd1e-4287-bd76-fd3ec51cde43" containerID="55bd80bc1088dec062336fd1b1d85e5a9546eaf4e05088f85819a8147a8e19b3" exitCode=1
Mar 18 09:00:05.723004 master-0 kubenswrapper[7620]: I0318 09:00:05.722928 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-phjp8" event={"ID":"43fbd379-dd1e-4287-bd76-fd3ec51cde43","Type":"ContainerDied","Data":"55bd80bc1088dec062336fd1b1d85e5a9546eaf4e05088f85819a8147a8e19b3"}
Mar 18 09:00:05.723052 master-0 kubenswrapper[7620]: I0318 09:00:05.723005 7620 scope.go:117] "RemoveContainer" containerID="c87e465727f96804a91f8100c6f9f30efed35b12da82808b53f4872a9351ab90"
Mar 18 09:00:05.723655 master-0 kubenswrapper[7620]: I0318 09:00:05.723636 7620 scope.go:117] "RemoveContainer" containerID="55bd80bc1088dec062336fd1b1d85e5a9546eaf4e05088f85819a8147a8e19b3"
Mar 18 09:00:05.723951 master-0 kubenswrapper[7620]: E0318 09:00:05.723917 7620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=catalogd-controller-manager-6864dc98f7-phjp8_openshift-catalogd(43fbd379-dd1e-4287-bd76-fd3ec51cde43)\"" pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-phjp8" podUID="43fbd379-dd1e-4287-bd76-fd3ec51cde43"
Mar 18 09:00:06.235438 master-0 kubenswrapper[7620]: I0318 09:00:06.235351 7620 status_manager.go:851] "Failed to get status for pod" podUID="94e9e4fc-bfaa-4ba5-ad6d-fe76b91932e9" pod="openshift-ingress-operator/ingress-operator-66b84d69b-7h94d" err="the server was unable to return a response in the time allotted, but may still be processing the request (get pods ingress-operator-66b84d69b-7h94d)"
Mar 18 09:00:06.481728 master-0 kubenswrapper[7620]: I0318 09:00:06.481606 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 09:00:06.481728 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld
Mar 18 09:00:06.481728 master-0 kubenswrapper[7620]: [+]process-running ok
Mar 18 09:00:06.481728 master-0 kubenswrapper[7620]: healthz check failed
Mar 18 09:00:06.481728 master-0 kubenswrapper[7620]: I0318 09:00:06.481709 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 09:00:06.733021 master-0 kubenswrapper[7620]: I0318 09:00:06.732937 7620 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-controller_operator-controller-controller-manager-57777556ff-chjqr_33a5c021-23c3-4a97-b5f3-77fd6dcba1ab/manager/1.log"
Mar 18 09:00:06.733726 master-0 kubenswrapper[7620]: I0318 09:00:06.733671 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-chjqr" event={"ID":"33a5c021-23c3-4a97-b5f3-77fd6dcba1ab","Type":"ContainerStarted","Data":"98e4466802f09f901dbfcdce75d0845f1102458c71f17fd2123c0f51312ba21e"}
Mar 18 09:00:06.734061 master-0 kubenswrapper[7620]: I0318 09:00:06.734036 7620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-chjqr"
Mar 18 09:00:06.735833 master-0 kubenswrapper[7620]: I0318 09:00:06.735820 7620 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-catalogd_catalogd-controller-manager-6864dc98f7-phjp8_43fbd379-dd1e-4287-bd76-fd3ec51cde43/manager/1.log"
Mar 18 09:00:06.875512 master-0 kubenswrapper[7620]: I0318 09:00:06.875357 7620 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-phjp8"
Mar 18 09:00:06.875512 master-0 kubenswrapper[7620]: I0318 09:00:06.875455 7620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-phjp8"
Mar 18 09:00:06.876410 master-0 kubenswrapper[7620]: I0318 09:00:06.876352 7620 scope.go:117] "RemoveContainer" containerID="55bd80bc1088dec062336fd1b1d85e5a9546eaf4e05088f85819a8147a8e19b3"
Mar 18 09:00:06.876754 master-0 kubenswrapper[7620]: E0318 09:00:06.876698 7620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=catalogd-controller-manager-6864dc98f7-phjp8_openshift-catalogd(43fbd379-dd1e-4287-bd76-fd3ec51cde43)\"" pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-phjp8" podUID="43fbd379-dd1e-4287-bd76-fd3ec51cde43"
Mar 18 09:00:07.481797 master-0 kubenswrapper[7620]: I0318 09:00:07.481706 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 09:00:07.481797 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld
Mar 18 09:00:07.481797 master-0 kubenswrapper[7620]: [+]process-running ok
Mar 18 09:00:07.481797 master-0 kubenswrapper[7620]: healthz check failed
Mar 18 09:00:07.482810 master-0 kubenswrapper[7620]: I0318 09:00:07.481810 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 09:00:08.483410 master-0 kubenswrapper[7620]: I0318 09:00:08.483288 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 09:00:08.483410 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld
Mar 18 09:00:08.483410 master-0 kubenswrapper[7620]: [+]process-running ok
Mar 18 09:00:08.483410 master-0 kubenswrapper[7620]: healthz check failed
Mar 18 09:00:08.484463 master-0 kubenswrapper[7620]: I0318 09:00:08.483410 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 09:00:09.254653 master-0 kubenswrapper[7620]: I0318 09:00:09.254519 7620 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 18 09:00:09.254653 master-0 kubenswrapper[7620]: I0318 09:00:09.254617 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="c229b92d307e46237f6273edcc98d387" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Mar 18 09:00:09.282023 master-0 kubenswrapper[7620]: I0318 09:00:09.281960 7620 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-marketplace/marketplace-operator-89ccd998f-bcwsv"
Mar 18 09:00:09.282182 master-0 kubenswrapper[7620]: I0318 09:00:09.282043 7620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-89ccd998f-bcwsv"
Mar 18 09:00:09.282837 master-0 kubenswrapper[7620]: I0318 09:00:09.282805 7620 scope.go:117] "RemoveContainer" containerID="75d1410d48296cb4f2446dcf35dcfdb58ad3083bc984cecb00db26ae1fc3d758"
Mar 18 09:00:09.283202 master-0 kubenswrapper[7620]: E0318 09:00:09.283163 7620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"marketplace-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=marketplace-operator pod=marketplace-operator-89ccd998f-bcwsv_openshift-marketplace(34f2ec5e-cb68-415b-a9f2-5b7f10fa9bbe)\"" pod="openshift-marketplace/marketplace-operator-89ccd998f-bcwsv" podUID="34f2ec5e-cb68-415b-a9f2-5b7f10fa9bbe"
Mar 18 09:00:09.481763 master-0 kubenswrapper[7620]: I0318 09:00:09.481676 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 09:00:09.481763 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld
Mar 18 09:00:09.481763 master-0 kubenswrapper[7620]: [+]process-running ok
Mar 18 09:00:09.481763 master-0 kubenswrapper[7620]: healthz check failed
Mar 18 09:00:09.482252 master-0 kubenswrapper[7620]: I0318 09:00:09.481824 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 09:00:10.482833 master-0 kubenswrapper[7620]: I0318 09:00:10.482729 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 09:00:10.482833 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld
Mar 18 09:00:10.482833 master-0 kubenswrapper[7620]: [+]process-running ok
Mar 18 09:00:10.482833 master-0 kubenswrapper[7620]: healthz check failed
Mar 18 09:00:10.482833 master-0 kubenswrapper[7620]: I0318 09:00:10.482817 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 09:00:11.481612 master-0 kubenswrapper[7620]: I0318 09:00:11.481525 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 09:00:11.481612 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld
Mar 18 09:00:11.481612 master-0 kubenswrapper[7620]: [+]process-running ok
Mar 18 09:00:11.481612 master-0 kubenswrapper[7620]: healthz check failed
Mar 18 09:00:11.481986 master-0 kubenswrapper[7620]: I0318 09:00:11.481639 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 09:00:12.482597 master-0 kubenswrapper[7620]: I0318 09:00:12.482433 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 09:00:12.482597 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld
Mar 18 09:00:12.482597 master-0 kubenswrapper[7620]: [+]process-running ok
Mar 18 09:00:12.482597 master-0 kubenswrapper[7620]: healthz check failed
Mar 18 09:00:12.483645 master-0 kubenswrapper[7620]: I0318 09:00:12.482579 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 09:00:12.794483 master-0 kubenswrapper[7620]: E0318 09:00:12.794278 7620 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{bootstrap-kube-scheduler-master-0.189de35cf15b74eb kube-system 9950 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-scheduler-master-0,UID:c83737980b9ee109184b1d78e942cf36,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b23c544d3894e5b31f66a18c554f03b0d29f92c2000c46b57b1c96da7ec25db9\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 08:50:16 +0000 UTC,LastTimestamp:2026-03-18 08:58:27.227916912 +0000 UTC m=+571.222698704,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 18 09:00:13.483193 master-0 kubenswrapper[7620]: I0318 09:00:13.483099 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 09:00:13.483193 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld
Mar 18 09:00:13.483193 master-0 kubenswrapper[7620]: [+]process-running ok
Mar 18 09:00:13.483193 master-0 kubenswrapper[7620]: healthz check failed
Mar 18 09:00:13.483959 master-0 kubenswrapper[7620]: I0318 09:00:13.483205 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 09:00:14.245602 master-0 kubenswrapper[7620]: E0318 09:00:14.245503 7620 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Mar 18 09:00:14.245602 master-0 kubenswrapper[7620]: E0318 09:00:14.245556 7620 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count"
Mar 18 09:00:14.481886 master-0 kubenswrapper[7620]: I0318 09:00:14.481792 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 09:00:14.481886 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld
Mar 18 09:00:14.481886 master-0 kubenswrapper[7620]: [+]process-running ok
Mar 18 09:00:14.481886 master-0 kubenswrapper[7620]: healthz check failed
Mar 18 09:00:14.482423 master-0 kubenswrapper[7620]: I0318 09:00:14.482381 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 09:00:15.482124 master-0 kubenswrapper[7620]: I0318 09:00:15.482025 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 09:00:15.482124 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld
Mar 18 09:00:15.482124 master-0 kubenswrapper[7620]: [+]process-running ok
Mar 18 09:00:15.482124 master-0 kubenswrapper[7620]: healthz check failed
Mar 18 09:00:15.482124 master-0 kubenswrapper[7620]: I0318 09:00:15.482100 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 09:00:15.503774 master-0 kubenswrapper[7620]: I0318 09:00:15.503678 7620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-chjqr"
Mar 18 09:00:16.481447 master-0 kubenswrapper[7620]: I0318 09:00:16.481371 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 09:00:16.481447 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld
Mar 18 09:00:16.481447 master-0 kubenswrapper[7620]: [+]process-running ok
Mar 18 09:00:16.481447 master-0 kubenswrapper[7620]: healthz check failed
Mar 18 09:00:16.482934 master-0 kubenswrapper[7620]: I0318 09:00:16.481451 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 09:00:16.812736 master-0 kubenswrapper[7620]: I0318 09:00:16.812653 7620 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-64854d9cff-khm5n_29ba6765-61c9-4f78-8f44-570418000c5c/snapshot-controller/1.log"
Mar 18 09:00:16.813328 master-0 kubenswrapper[7620]: I0318 09:00:16.813278 7620 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-64854d9cff-khm5n_29ba6765-61c9-4f78-8f44-570418000c5c/snapshot-controller/0.log"
Mar 18 09:00:16.813426 master-0 kubenswrapper[7620]: I0318 09:00:16.813348 7620 generic.go:334] "Generic (PLEG): container finished" podID="29ba6765-61c9-4f78-8f44-570418000c5c" containerID="2e9b23304fcd4a4d986aca969c93ced96fc0dd7e8a3bf1c965fb2f3c5cab2fe7" exitCode=1
Mar 18 09:00:16.813426 master-0 kubenswrapper[7620]: I0318 09:00:16.813383 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-64854d9cff-khm5n" event={"ID":"29ba6765-61c9-4f78-8f44-570418000c5c","Type":"ContainerDied","Data":"2e9b23304fcd4a4d986aca969c93ced96fc0dd7e8a3bf1c965fb2f3c5cab2fe7"}
Mar 18 09:00:16.813548 master-0 kubenswrapper[7620]: I0318 09:00:16.813459 7620 scope.go:117] "RemoveContainer" containerID="4bd8b99a6f02b5537643630112eefdd3136e85b5e17843dfdadb3cf7528eedf7"
Mar 18 09:00:16.814102 master-0 kubenswrapper[7620]: I0318 09:00:16.814056 7620 scope.go:117] "RemoveContainer" containerID="2e9b23304fcd4a4d986aca969c93ced96fc0dd7e8a3bf1c965fb2f3c5cab2fe7"
Mar 18 09:00:16.814450 master-0 kubenswrapper[7620]: E0318 09:00:16.814395 7620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=snapshot-controller pod=csi-snapshot-controller-64854d9cff-khm5n_openshift-cluster-storage-operator(29ba6765-61c9-4f78-8f44-570418000c5c)\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-64854d9cff-khm5n" podUID="29ba6765-61c9-4f78-8f44-570418000c5c"
Mar 18 09:00:16.942768 master-0 kubenswrapper[7620]: E0318 09:00:16.942671 7620 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s"
Mar 18 09:00:17.405444 master-0 kubenswrapper[7620]: I0318 09:00:17.404477 7620 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://localhost:10357/healthz\": read tcp 127.0.0.1:55488->127.0.0.1:10357: read: connection reset by peer" start-of-body=
Mar 18 09:00:17.405444 master-0 kubenswrapper[7620]: I0318 09:00:17.404616 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="c229b92d307e46237f6273edcc98d387" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": read tcp 127.0.0.1:55488->127.0.0.1:10357: read: connection reset by peer"
Mar 18 09:00:17.405444 master-0 kubenswrapper[7620]: I0318 09:00:17.404692 7620 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 18 09:00:17.482389 master-0 kubenswrapper[7620]: I0318 09:00:17.482222 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 09:00:17.482389 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld
Mar 18 09:00:17.482389 master-0 kubenswrapper[7620]: [+]process-running ok
Mar 18 09:00:17.482389 master-0 kubenswrapper[7620]: healthz check failed
Mar 18 09:00:17.482389 master-0 kubenswrapper[7620]: I0318 09:00:17.482334 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 09:00:17.824666 master-0 kubenswrapper[7620]: I0318 09:00:17.824595 7620 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_c229b92d307e46237f6273edcc98d387/cluster-policy-controller/2.log"
Mar 18 09:00:17.825319 master-0 kubenswrapper[7620]: I0318 09:00:17.825264 7620 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_c229b92d307e46237f6273edcc98d387/cluster-policy-controller/1.log"
Mar 18 09:00:17.826918 master-0 kubenswrapper[7620]: I0318 09:00:17.826881 7620 generic.go:334] "Generic (PLEG): container finished" podID="c229b92d307e46237f6273edcc98d387" containerID="a8de6e40ce2cb521c2ecf8231ab8f8248b2d78098f13765fb00318fce72caaa6" exitCode=255
Mar 18 09:00:17.827063 master-0 kubenswrapper[7620]: I0318 09:00:17.826945 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"c229b92d307e46237f6273edcc98d387","Type":"ContainerDied","Data":"a8de6e40ce2cb521c2ecf8231ab8f8248b2d78098f13765fb00318fce72caaa6"}
Mar 18 09:00:17.827063 master-0 kubenswrapper[7620]: I0318 09:00:17.826981 7620 scope.go:117] "RemoveContainer" containerID="45cfdaa8068d2e50c19b669b85c533c5500c1a18e949ad70a5ede8a514a84af0"
Mar 18 09:00:17.831592 master-0 kubenswrapper[7620]: I0318 09:00:17.830043 7620 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-64854d9cff-khm5n_29ba6765-61c9-4f78-8f44-570418000c5c/snapshot-controller/1.log"
Mar 18 09:00:18.484177 master-0 kubenswrapper[7620]: I0318 09:00:18.484103 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 09:00:18.484177 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld
Mar 18 09:00:18.484177 master-0 kubenswrapper[7620]: [+]process-running ok
Mar 18 09:00:18.484177 master-0 kubenswrapper[7620]: healthz check failed
Mar 18 09:00:18.485157 master-0 kubenswrapper[7620]: I0318 09:00:18.484187 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 09:00:18.841519 master-0 kubenswrapper[7620]: I0318 09:00:18.841339 7620 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_c229b92d307e46237f6273edcc98d387/cluster-policy-controller/2.log"
Mar 18 09:00:19.482571 master-0 kubenswrapper[7620]: I0318 09:00:19.482395 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 09:00:19.482571 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld
Mar 18 09:00:19.482571 master-0 kubenswrapper[7620]: [+]process-running ok
Mar 18 09:00:19.482571 master-0 kubenswrapper[7620]: healthz check failed
Mar 18 09:00:19.482571 master-0 kubenswrapper[7620]: I0318 09:00:19.482534 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 09:00:20.481981 master-0 kubenswrapper[7620]: I0318 09:00:20.481814 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 09:00:20.481981 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld
Mar 18 09:00:20.481981 master-0 kubenswrapper[7620]: [+]process-running ok
Mar 18 09:00:20.481981 master-0 kubenswrapper[7620]: healthz check failed
Mar 18 09:00:20.482824 master-0 kubenswrapper[7620]: I0318 09:00:20.482080 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 09:00:20.862956 master-0 kubenswrapper[7620]: I0318 09:00:20.862873 7620 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-6f69995874-cf6qn_97730ec2-e6f1-4f8c-b85c-3c10623d06ce/cluster-baremetal-operator/0.log"
Mar 18 09:00:20.862956 master-0 kubenswrapper[7620]: I0318 09:00:20.862954 7620 generic.go:334] "Generic (PLEG): container finished" podID="97730ec2-e6f1-4f8c-b85c-3c10623d06ce" containerID="a6965c370aee0562c7dab05dd0bba9899ece7a915ae59774856223463957b6b4" exitCode=1
Mar 18 09:00:20.863316 master-0 kubenswrapper[7620]: I0318 09:00:20.863036 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-cf6qn" event={"ID":"97730ec2-e6f1-4f8c-b85c-3c10623d06ce","Type":"ContainerDied","Data":"a6965c370aee0562c7dab05dd0bba9899ece7a915ae59774856223463957b6b4"}
Mar 18 09:00:20.863970 master-0 kubenswrapper[7620]: I0318 09:00:20.863931 7620 scope.go:117] "RemoveContainer" containerID="a6965c370aee0562c7dab05dd0bba9899ece7a915ae59774856223463957b6b4"
Mar 18 09:00:20.865592 master-0 kubenswrapper[7620]: I0318 09:00:20.865155 7620 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-6f97756bc8-z9n9c_d6fe8ee6-737e-438a-8d9d-1ec712f6bacf/control-plane-machine-set-operator/0.log"
Mar 18 09:00:20.865592 master-0 kubenswrapper[7620]: I0318 09:00:20.865216 7620 generic.go:334] "Generic (PLEG): container finished" podID="d6fe8ee6-737e-438a-8d9d-1ec712f6bacf" containerID="0fd3855d3d4e49dbbbd6fbd3a0b7de23ed78bc7af2b1a5b78f4de3c1bee51d0a" exitCode=1
Mar 18 09:00:20.865592 master-0 kubenswrapper[7620]: I0318 09:00:20.865256 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-6f97756bc8-z9n9c" event={"ID":"d6fe8ee6-737e-438a-8d9d-1ec712f6bacf","Type":"ContainerDied","Data":"0fd3855d3d4e49dbbbd6fbd3a0b7de23ed78bc7af2b1a5b78f4de3c1bee51d0a"}
Mar 18 09:00:20.866079 master-0 kubenswrapper[7620]: I0318 09:00:20.866028 7620 scope.go:117] "RemoveContainer" containerID="0fd3855d3d4e49dbbbd6fbd3a0b7de23ed78bc7af2b1a5b78f4de3c1bee51d0a"
Mar 18 09:00:21.224437 master-0 kubenswrapper[7620]: I0318 09:00:21.224371 7620 scope.go:117] "RemoveContainer" containerID="55bd80bc1088dec062336fd1b1d85e5a9546eaf4e05088f85819a8147a8e19b3"
Mar 18 09:00:21.483790 master-0 kubenswrapper[7620]: I0318 09:00:21.483628 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 09:00:21.483790 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld
Mar 18 09:00:21.483790 master-0 kubenswrapper[7620]: [+]process-running ok
Mar 18 09:00:21.483790 master-0 kubenswrapper[7620]: healthz check failed
Mar 18 09:00:21.483790 master-0 kubenswrapper[7620]: I0318 09:00:21.483721 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 09:00:21.505628 master-0 kubenswrapper[7620]: E0318 09:00:21.504929 7620 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 18 09:00:21.505628 master-0 kubenswrapper[7620]: I0318 09:00:21.505365 7620 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="cluster-policy-controller" containerStatusID={"Type":"cri-o","ID":"a8de6e40ce2cb521c2ecf8231ab8f8248b2d78098f13765fb00318fce72caaa6"} pod="openshift-kube-controller-manager/kube-controller-manager-master-0" containerMessage="Container cluster-policy-controller failed startup probe, will be restarted"
Mar 18 09:00:21.505628 master-0 kubenswrapper[7620]: I0318 09:00:21.505477 7620 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="c229b92d307e46237f6273edcc98d387" containerName="cluster-policy-controller" containerID="cri-o://a8de6e40ce2cb521c2ecf8231ab8f8248b2d78098f13765fb00318fce72caaa6" gracePeriod=30
Mar 18 09:00:21.513940 master-0 kubenswrapper[7620]: E0318 09:00:21.513882 7620 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="openshift-etcd/etcd-master-0"
Mar 18 09:00:21.876621 master-0
kubenswrapper[7620]: I0318 09:00:21.876561 7620 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-6f69995874-cf6qn_97730ec2-e6f1-4f8c-b85c-3c10623d06ce/cluster-baremetal-operator/0.log" Mar 18 09:00:21.876842 master-0 kubenswrapper[7620]: I0318 09:00:21.876741 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-cf6qn" event={"ID":"97730ec2-e6f1-4f8c-b85c-3c10623d06ce","Type":"ContainerStarted","Data":"ba57860bb4615dc613e8795f7f3436663ef867da7a5a525958b65d7222c4b23f"} Mar 18 09:00:21.880147 master-0 kubenswrapper[7620]: I0318 09:00:21.880085 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"094204df314fe45bd5af12ca1b4622bb","Type":"ContainerStarted","Data":"8d9361279d59d84f68c69450e42602da65b59d791ddc81fa0875ca16322aadf2"} Mar 18 09:00:21.882619 master-0 kubenswrapper[7620]: I0318 09:00:21.882578 7620 generic.go:334] "Generic (PLEG): container finished" podID="edc7f629-4288-443b-aa8e-78bc6a09c848" containerID="4baf438f84441de9a2ddd79dfbe1c9dc6b19f232a4b6153cb8db1151df46918a" exitCode=0 Mar 18 09:00:21.882701 master-0 kubenswrapper[7620]: I0318 09:00:21.882671 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-bwqt7" event={"ID":"edc7f629-4288-443b-aa8e-78bc6a09c848","Type":"ContainerDied","Data":"4baf438f84441de9a2ddd79dfbe1c9dc6b19f232a4b6153cb8db1151df46918a"} Mar 18 09:00:21.882742 master-0 kubenswrapper[7620]: I0318 09:00:21.882712 7620 scope.go:117] "RemoveContainer" containerID="2816dd0a3b2639d48151bf75dfb86759dbb1c466295c4e9c83f4f4ac853eb6f8" Mar 18 09:00:21.883610 master-0 kubenswrapper[7620]: I0318 09:00:21.883579 7620 scope.go:117] "RemoveContainer" containerID="4baf438f84441de9a2ddd79dfbe1c9dc6b19f232a4b6153cb8db1151df46918a" Mar 18 09:00:21.883826 master-0 kubenswrapper[7620]: E0318 09:00:21.883792 7620 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-cluster-manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-cluster-manager pod=ovnkube-control-plane-57f769d897-bwqt7_openshift-ovn-kubernetes(edc7f629-4288-443b-aa8e-78bc6a09c848)\"" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-bwqt7" podUID="edc7f629-4288-443b-aa8e-78bc6a09c848" Mar 18 09:00:21.887124 master-0 kubenswrapper[7620]: I0318 09:00:21.887091 7620 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-6f97756bc8-z9n9c_d6fe8ee6-737e-438a-8d9d-1ec712f6bacf/control-plane-machine-set-operator/0.log" Mar 18 09:00:21.887187 master-0 kubenswrapper[7620]: I0318 09:00:21.887155 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-6f97756bc8-z9n9c" event={"ID":"d6fe8ee6-737e-438a-8d9d-1ec712f6bacf","Type":"ContainerStarted","Data":"17eebddde35c90add78a7bf20baf5f048dc44783179ac30dd727fc56fd06a269"} Mar 18 09:00:21.891504 master-0 kubenswrapper[7620]: I0318 09:00:21.890222 7620 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-catalogd_catalogd-controller-manager-6864dc98f7-phjp8_43fbd379-dd1e-4287-bd76-fd3ec51cde43/manager/1.log" Mar 18 09:00:21.891504 master-0 kubenswrapper[7620]: I0318 09:00:21.890541 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-phjp8" event={"ID":"43fbd379-dd1e-4287-bd76-fd3ec51cde43","Type":"ContainerStarted","Data":"da0b7b0884e6ecb1ef9531744c11b52d61b850cbf92f1438d3acbb3217dcbab5"} Mar 18 09:00:21.891504 master-0 kubenswrapper[7620]: I0318 09:00:21.891109 7620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-phjp8" Mar 18 09:00:21.894951 master-0 kubenswrapper[7620]: I0318 09:00:21.894885 7620 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_c229b92d307e46237f6273edcc98d387/cluster-policy-controller/2.log" Mar 18 09:00:21.896041 master-0 kubenswrapper[7620]: I0318 09:00:21.895987 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"c229b92d307e46237f6273edcc98d387","Type":"ContainerStarted","Data":"d3073dc46ac31370e3b380a38f0a5624ea2c98824ecd27b578b4114468b40e36"} Mar 18 09:00:22.481304 master-0 kubenswrapper[7620]: I0318 09:00:22.481140 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 09:00:22.481304 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld Mar 18 09:00:22.481304 master-0 kubenswrapper[7620]: [+]process-running ok Mar 18 09:00:22.481304 master-0 kubenswrapper[7620]: healthz check failed Mar 18 09:00:22.481304 master-0 kubenswrapper[7620]: I0318 09:00:22.481220 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 09:00:22.914604 master-0 kubenswrapper[7620]: I0318 09:00:22.914523 7620 generic.go:334] "Generic (PLEG): container finished" podID="094204df314fe45bd5af12ca1b4622bb" containerID="8d9361279d59d84f68c69450e42602da65b59d791ddc81fa0875ca16322aadf2" exitCode=0 Mar 18 09:00:22.915441 master-0 kubenswrapper[7620]: I0318 09:00:22.914915 7620 kubelet.go:1909] "Trying to delete pod" pod="openshift-etcd/etcd-master-0" podUID="ffe89c95-d4e9-4b8d-ae76-37d7bef448df" Mar 18 09:00:22.915441 master-0 kubenswrapper[7620]: I0318 09:00:22.914956 7620 mirror_client.go:130] "Deleting 
a mirror pod" pod="openshift-etcd/etcd-master-0" podUID="ffe89c95-d4e9-4b8d-ae76-37d7bef448df" Mar 18 09:00:22.915575 master-0 kubenswrapper[7620]: I0318 09:00:22.915471 7620 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="c0c3df70-7d0a-4a46-a625-20553ab284d0" Mar 18 09:00:22.915575 master-0 kubenswrapper[7620]: I0318 09:00:22.915486 7620 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="c0c3df70-7d0a-4a46-a625-20553ab284d0" Mar 18 09:00:22.915786 master-0 kubenswrapper[7620]: I0318 09:00:22.915740 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"094204df314fe45bd5af12ca1b4622bb","Type":"ContainerDied","Data":"8d9361279d59d84f68c69450e42602da65b59d791ddc81fa0875ca16322aadf2"} Mar 18 09:00:23.483106 master-0 kubenswrapper[7620]: I0318 09:00:23.482994 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 09:00:23.483106 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld Mar 18 09:00:23.483106 master-0 kubenswrapper[7620]: [+]process-running ok Mar 18 09:00:23.483106 master-0 kubenswrapper[7620]: healthz check failed Mar 18 09:00:23.483525 master-0 kubenswrapper[7620]: I0318 09:00:23.483134 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 09:00:23.483525 master-0 kubenswrapper[7620]: I0318 09:00:23.483234 7620 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" Mar 18 
09:00:23.484341 master-0 kubenswrapper[7620]: I0318 09:00:23.484294 7620 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="router" containerStatusID={"Type":"cri-o","ID":"4a7dbd9949adb4dd8d63e9de3470c7186002c65ba78caccdd813c4fb43556282"} pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" containerMessage="Container router failed startup probe, will be restarted" Mar 18 09:00:23.484410 master-0 kubenswrapper[7620]: I0318 09:00:23.484358 7620 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" containerID="cri-o://4a7dbd9949adb4dd8d63e9de3470c7186002c65ba78caccdd813c4fb43556282" gracePeriod=3600 Mar 18 09:00:24.225745 master-0 kubenswrapper[7620]: I0318 09:00:24.225624 7620 scope.go:117] "RemoveContainer" containerID="75d1410d48296cb4f2446dcf35dcfdb58ad3083bc984cecb00db26ae1fc3d758" Mar 18 09:00:24.935478 master-0 kubenswrapper[7620]: I0318 09:00:24.935375 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-89ccd998f-bcwsv" event={"ID":"34f2ec5e-cb68-415b-a9f2-5b7f10fa9bbe","Type":"ContainerStarted","Data":"6e92c769d9c45cb0821669a8b7574a372860e2d7111a0a59b3e08fac2596304e"} Mar 18 09:00:24.935907 master-0 kubenswrapper[7620]: I0318 09:00:24.935823 7620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-89ccd998f-bcwsv" Mar 18 09:00:24.941018 master-0 kubenswrapper[7620]: I0318 09:00:24.940537 7620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-89ccd998f-bcwsv" Mar 18 09:00:26.253589 master-0 kubenswrapper[7620]: I0318 09:00:26.253485 7620 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 09:00:26.253589 master-0 
kubenswrapper[7620]: I0318 09:00:26.253583 7620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 09:00:26.877940 master-0 kubenswrapper[7620]: I0318 09:00:26.877816 7620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-phjp8" Mar 18 09:00:28.223949 master-0 kubenswrapper[7620]: I0318 09:00:28.223902 7620 scope.go:117] "RemoveContainer" containerID="2e9b23304fcd4a4d986aca969c93ced96fc0dd7e8a3bf1c965fb2f3c5cab2fe7" Mar 18 09:00:28.973823 master-0 kubenswrapper[7620]: I0318 09:00:28.973721 7620 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-64854d9cff-khm5n_29ba6765-61c9-4f78-8f44-570418000c5c/snapshot-controller/1.log" Mar 18 09:00:28.973823 master-0 kubenswrapper[7620]: I0318 09:00:28.973820 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-64854d9cff-khm5n" event={"ID":"29ba6765-61c9-4f78-8f44-570418000c5c","Type":"ContainerStarted","Data":"515ed36be86ba90059406282f67d51d2a047f6233d3a9d4c91573ed7eff6be87"} Mar 18 09:00:29.254500 master-0 kubenswrapper[7620]: I0318 09:00:29.254371 7620 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 18 09:00:29.254500 master-0 kubenswrapper[7620]: I0318 09:00:29.254464 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="c229b92d307e46237f6273edcc98d387" containerName="cluster-policy-controller" probeResult="failure" output="Get 
\"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 18 09:00:29.574942 master-0 kubenswrapper[7620]: I0318 09:00:29.574684 7620 patch_prober.go:28] interesting pod/controller-manager-6448dc88d8-cnd9q container/controller-manager namespace/openshift-controller-manager: Liveness probe status=failure output="Get \"https://10.128.0.52:8443/healthz\": dial tcp 10.128.0.52:8443: connect: connection refused" start-of-body= Mar 18 09:00:29.574942 master-0 kubenswrapper[7620]: I0318 09:00:29.574703 7620 patch_prober.go:28] interesting pod/controller-manager-6448dc88d8-cnd9q container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.128.0.52:8443/healthz\": dial tcp 10.128.0.52:8443: connect: connection refused" start-of-body= Mar 18 09:00:29.574942 master-0 kubenswrapper[7620]: I0318 09:00:29.574765 7620 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-controller-manager/controller-manager-6448dc88d8-cnd9q" podUID="4cc14de2-59e8-49e1-9eeb-f87c5e9d8a75" containerName="controller-manager" probeResult="failure" output="Get \"https://10.128.0.52:8443/healthz\": dial tcp 10.128.0.52:8443: connect: connection refused" Mar 18 09:00:29.574942 master-0 kubenswrapper[7620]: I0318 09:00:29.574800 7620 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-6448dc88d8-cnd9q" podUID="4cc14de2-59e8-49e1-9eeb-f87c5e9d8a75" containerName="controller-manager" probeResult="failure" output="Get \"https://10.128.0.52:8443/healthz\": dial tcp 10.128.0.52:8443: connect: connection refused" Mar 18 09:00:29.991793 master-0 kubenswrapper[7620]: I0318 09:00:29.991683 7620 generic.go:334] "Generic (PLEG): container finished" podID="4cc14de2-59e8-49e1-9eeb-f87c5e9d8a75" containerID="c1000328fdb806ec77d49cec50c1824461d4c39b599af7554159ee64748ea882" exitCode=0 Mar 18 
09:00:29.992116 master-0 kubenswrapper[7620]: I0318 09:00:29.991778 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6448dc88d8-cnd9q" event={"ID":"4cc14de2-59e8-49e1-9eeb-f87c5e9d8a75","Type":"ContainerDied","Data":"c1000328fdb806ec77d49cec50c1824461d4c39b599af7554159ee64748ea882"} Mar 18 09:00:29.992116 master-0 kubenswrapper[7620]: I0318 09:00:29.991911 7620 scope.go:117] "RemoveContainer" containerID="f95c3ae9a15c386971b5456139d5edf2668059a7f470b16505d0edd6a91106f8" Mar 18 09:00:29.992906 master-0 kubenswrapper[7620]: I0318 09:00:29.992619 7620 scope.go:117] "RemoveContainer" containerID="c1000328fdb806ec77d49cec50c1824461d4c39b599af7554159ee64748ea882" Mar 18 09:00:29.993048 master-0 kubenswrapper[7620]: E0318 09:00:29.993006 7620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"controller-manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=controller-manager pod=controller-manager-6448dc88d8-cnd9q_openshift-controller-manager(4cc14de2-59e8-49e1-9eeb-f87c5e9d8a75)\"" pod="openshift-controller-manager/controller-manager-6448dc88d8-cnd9q" podUID="4cc14de2-59e8-49e1-9eeb-f87c5e9d8a75" Mar 18 09:00:33.944803 master-0 kubenswrapper[7620]: E0318 09:00:33.944719 7620 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s" Mar 18 09:00:34.225314 master-0 kubenswrapper[7620]: I0318 09:00:34.225092 7620 scope.go:117] "RemoveContainer" containerID="4baf438f84441de9a2ddd79dfbe1c9dc6b19f232a4b6153cb8db1151df46918a" Mar 18 09:00:35.039687 master-0 kubenswrapper[7620]: I0318 09:00:35.039603 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-bwqt7" 
event={"ID":"edc7f629-4288-443b-aa8e-78bc6a09c848","Type":"ContainerStarted","Data":"cceb4b11cfbf57d330bbe32964f47c0c5723b5d40a4fb96935f5a6082c4ea092"} Mar 18 09:00:39.253681 master-0 kubenswrapper[7620]: I0318 09:00:39.253528 7620 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 18 09:00:39.253681 master-0 kubenswrapper[7620]: I0318 09:00:39.253644 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="c229b92d307e46237f6273edcc98d387" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 18 09:00:39.616299 master-0 kubenswrapper[7620]: I0318 09:00:39.615560 7620 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-controller-manager/controller-manager-6448dc88d8-cnd9q" Mar 18 09:00:39.616299 master-0 kubenswrapper[7620]: I0318 09:00:39.616026 7620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-6448dc88d8-cnd9q" Mar 18 09:00:39.616299 master-0 kubenswrapper[7620]: I0318 09:00:39.616141 7620 scope.go:117] "RemoveContainer" containerID="c1000328fdb806ec77d49cec50c1824461d4c39b599af7554159ee64748ea882" Mar 18 09:00:40.084149 master-0 kubenswrapper[7620]: I0318 09:00:40.084062 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6448dc88d8-cnd9q" 
event={"ID":"4cc14de2-59e8-49e1-9eeb-f87c5e9d8a75","Type":"ContainerStarted","Data":"06e4ded156520e1a9b65d50f0935234c2ea91c89d6f3a493daf8d002e409884c"} Mar 18 09:00:40.084814 master-0 kubenswrapper[7620]: I0318 09:00:40.084745 7620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-6448dc88d8-cnd9q" Mar 18 09:00:40.092574 master-0 kubenswrapper[7620]: I0318 09:00:40.092504 7620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-6448dc88d8-cnd9q" Mar 18 09:00:43.108721 master-0 kubenswrapper[7620]: I0318 09:00:43.108603 7620 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-machine-approver_machine-approver-5c6485487f-87vpl_495e0cff-fca8-4dad-9247-2fc0e7ce86fc/machine-approver-controller/0.log" Mar 18 09:00:43.109245 master-0 kubenswrapper[7620]: I0318 09:00:43.109167 7620 generic.go:334] "Generic (PLEG): container finished" podID="495e0cff-fca8-4dad-9247-2fc0e7ce86fc" containerID="482a2a455c91ae8f75a1b491f54c3f841099d7f9c064cccb7d26f482c03b17d7" exitCode=255 Mar 18 09:00:43.109245 master-0 kubenswrapper[7620]: I0318 09:00:43.109228 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-5c6485487f-87vpl" event={"ID":"495e0cff-fca8-4dad-9247-2fc0e7ce86fc","Type":"ContainerDied","Data":"482a2a455c91ae8f75a1b491f54c3f841099d7f9c064cccb7d26f482c03b17d7"} Mar 18 09:00:43.110100 master-0 kubenswrapper[7620]: I0318 09:00:43.110061 7620 scope.go:117] "RemoveContainer" containerID="482a2a455c91ae8f75a1b491f54c3f841099d7f9c064cccb7d26f482c03b17d7" Mar 18 09:00:44.121600 master-0 kubenswrapper[7620]: I0318 09:00:44.121536 7620 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-machine-approver_machine-approver-5c6485487f-87vpl_495e0cff-fca8-4dad-9247-2fc0e7ce86fc/machine-approver-controller/0.log" Mar 18 09:00:44.123351 master-0 
kubenswrapper[7620]: I0318 09:00:44.122495 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-5c6485487f-87vpl" event={"ID":"495e0cff-fca8-4dad-9247-2fc0e7ce86fc","Type":"ContainerStarted","Data":"702c18fadc7922dfc97154a8f944d7cbead8cd6ee8505bcb819a54249e6313c2"} Mar 18 09:00:46.798200 master-0 kubenswrapper[7620]: E0318 09:00:46.798057 7620 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{bootstrap-kube-scheduler-master-0.189de35cfeffdaae kube-system 9952 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-scheduler-master-0,UID:c83737980b9ee109184b1d78e942cf36,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Created,Message:Created container: kube-scheduler,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 08:50:16 +0000 UTC,LastTimestamp:2026-03-18 08:58:27.532216009 +0000 UTC m=+571.526997811,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 09:00:49.253425 master-0 kubenswrapper[7620]: I0318 09:00:49.253322 7620 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 18 09:00:49.254101 master-0 kubenswrapper[7620]: I0318 09:00:49.253420 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="c229b92d307e46237f6273edcc98d387" 
containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 18 09:00:49.254101 master-0 kubenswrapper[7620]: I0318 09:00:49.253488 7620 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 09:00:50.947986 master-0 kubenswrapper[7620]: E0318 09:00:50.947389 7620 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s" Mar 18 09:00:53.198272 master-0 kubenswrapper[7620]: I0318 09:00:53.198183 7620 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_c229b92d307e46237f6273edcc98d387/cluster-policy-controller/3.log" Mar 18 09:00:53.199274 master-0 kubenswrapper[7620]: I0318 09:00:53.199185 7620 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_c229b92d307e46237f6273edcc98d387/cluster-policy-controller/2.log" Mar 18 09:00:53.200797 master-0 kubenswrapper[7620]: I0318 09:00:53.200733 7620 generic.go:334] "Generic (PLEG): container finished" podID="c229b92d307e46237f6273edcc98d387" containerID="d3073dc46ac31370e3b380a38f0a5624ea2c98824ecd27b578b4114468b40e36" exitCode=255 Mar 18 09:00:53.200998 master-0 kubenswrapper[7620]: I0318 09:00:53.200801 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"c229b92d307e46237f6273edcc98d387","Type":"ContainerDied","Data":"d3073dc46ac31370e3b380a38f0a5624ea2c98824ecd27b578b4114468b40e36"} Mar 18 09:00:53.200998 
master-0 kubenswrapper[7620]: I0318 09:00:53.200905 7620 scope.go:117] "RemoveContainer" containerID="a8de6e40ce2cb521c2ecf8231ab8f8248b2d78098f13765fb00318fce72caaa6" Mar 18 09:00:54.211617 master-0 kubenswrapper[7620]: I0318 09:00:54.211525 7620 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_c229b92d307e46237f6273edcc98d387/cluster-policy-controller/3.log" Mar 18 09:00:56.918281 master-0 kubenswrapper[7620]: E0318 09:00:56.918139 7620 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="openshift-etcd/etcd-master-0" Mar 18 09:00:56.918955 master-0 kubenswrapper[7620]: E0318 09:00:56.918544 7620 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 09:00:56.918955 master-0 kubenswrapper[7620]: I0318 09:00:56.918798 7620 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="cluster-policy-controller" containerStatusID={"Type":"cri-o","ID":"d3073dc46ac31370e3b380a38f0a5624ea2c98824ecd27b578b4114468b40e36"} pod="openshift-kube-controller-manager/kube-controller-manager-master-0" containerMessage="Container cluster-policy-controller failed startup probe, will be restarted" Mar 18 09:00:56.919134 master-0 kubenswrapper[7620]: I0318 09:00:56.918950 7620 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="c229b92d307e46237f6273edcc98d387" containerName="cluster-policy-controller" containerID="cri-o://d3073dc46ac31370e3b380a38f0a5624ea2c98824ecd27b578b4114468b40e36" gracePeriod=30 Mar 18 09:00:57.238803 master-0 kubenswrapper[7620]: I0318 09:00:57.238747 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-etcd/etcd-master-0" event={"ID":"094204df314fe45bd5af12ca1b4622bb","Type":"ContainerStarted","Data":"112a95a0ecbb7e902166f830971fb87997d7e03daddc43d6c1037eba7ffe50d4"} Mar 18 09:00:57.241300 master-0 kubenswrapper[7620]: I0318 09:00:57.241260 7620 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_c229b92d307e46237f6273edcc98d387/cluster-policy-controller/3.log" Mar 18 09:00:57.242603 master-0 kubenswrapper[7620]: I0318 09:00:57.242562 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"c229b92d307e46237f6273edcc98d387","Type":"ContainerStarted","Data":"20f67081f1a83df8fa8825fe68b2011f445e7f6dd6a012bd23cbd198b1272dee"} Mar 18 09:00:57.242904 master-0 kubenswrapper[7620]: I0318 09:00:57.242872 7620 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="c0c3df70-7d0a-4a46-a625-20553ab284d0" Mar 18 09:00:57.242904 master-0 kubenswrapper[7620]: I0318 09:00:57.242897 7620 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="c0c3df70-7d0a-4a46-a625-20553ab284d0" Mar 18 09:00:58.272062 master-0 kubenswrapper[7620]: I0318 09:00:58.271995 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"094204df314fe45bd5af12ca1b4622bb","Type":"ContainerStarted","Data":"fea5c61028d6f5a8c5c0e3c0cf483e32008841fc099a5bd1b2de142c89560c9b"} Mar 18 09:00:58.272648 master-0 kubenswrapper[7620]: I0318 09:00:58.272081 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"094204df314fe45bd5af12ca1b4622bb","Type":"ContainerStarted","Data":"2367b625367ed8557fb256a68af6cdc71a881e71bc9abf0a04640ca6a4bbcdc8"} Mar 18 09:00:58.272648 master-0 kubenswrapper[7620]: I0318 09:00:58.272096 7620 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"094204df314fe45bd5af12ca1b4622bb","Type":"ContainerStarted","Data":"bb218c54057c5adaf7c587bdc57fb89f6a61886040b1c8a6b6b58d51f19f2738"} Mar 18 09:00:59.286962 master-0 kubenswrapper[7620]: I0318 09:00:59.286075 7620 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-64854d9cff-khm5n_29ba6765-61c9-4f78-8f44-570418000c5c/snapshot-controller/2.log" Mar 18 09:00:59.286962 master-0 kubenswrapper[7620]: I0318 09:00:59.286821 7620 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-64854d9cff-khm5n_29ba6765-61c9-4f78-8f44-570418000c5c/snapshot-controller/1.log" Mar 18 09:00:59.286962 master-0 kubenswrapper[7620]: I0318 09:00:59.286942 7620 generic.go:334] "Generic (PLEG): container finished" podID="29ba6765-61c9-4f78-8f44-570418000c5c" containerID="515ed36be86ba90059406282f67d51d2a047f6233d3a9d4c91573ed7eff6be87" exitCode=1 Mar 18 09:00:59.288391 master-0 kubenswrapper[7620]: I0318 09:00:59.287075 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-64854d9cff-khm5n" event={"ID":"29ba6765-61c9-4f78-8f44-570418000c5c","Type":"ContainerDied","Data":"515ed36be86ba90059406282f67d51d2a047f6233d3a9d4c91573ed7eff6be87"} Mar 18 09:00:59.288391 master-0 kubenswrapper[7620]: I0318 09:00:59.287140 7620 scope.go:117] "RemoveContainer" containerID="2e9b23304fcd4a4d986aca969c93ced96fc0dd7e8a3bf1c965fb2f3c5cab2fe7" Mar 18 09:00:59.288391 master-0 kubenswrapper[7620]: I0318 09:00:59.288128 7620 scope.go:117] "RemoveContainer" containerID="515ed36be86ba90059406282f67d51d2a047f6233d3a9d4c91573ed7eff6be87" Mar 18 09:00:59.288705 master-0 kubenswrapper[7620]: E0318 09:00:59.288645 7620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with 
CrashLoopBackOff: \"back-off 20s restarting failed container=snapshot-controller pod=csi-snapshot-controller-64854d9cff-khm5n_openshift-cluster-storage-operator(29ba6765-61c9-4f78-8f44-570418000c5c)\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-64854d9cff-khm5n" podUID="29ba6765-61c9-4f78-8f44-570418000c5c" Mar 18 09:00:59.297596 master-0 kubenswrapper[7620]: I0318 09:00:59.297533 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"094204df314fe45bd5af12ca1b4622bb","Type":"ContainerStarted","Data":"ec078e5fb5c6af91fa9756d663010f378e1c2f5cbae267347ef882fcddb85660"} Mar 18 09:00:59.298102 master-0 kubenswrapper[7620]: I0318 09:00:59.298048 7620 kubelet.go:1909] "Trying to delete pod" pod="openshift-etcd/etcd-master-0" podUID="ffe89c95-d4e9-4b8d-ae76-37d7bef448df" Mar 18 09:00:59.298102 master-0 kubenswrapper[7620]: I0318 09:00:59.298088 7620 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0" podUID="ffe89c95-d4e9-4b8d-ae76-37d7bef448df" Mar 18 09:01:00.309258 master-0 kubenswrapper[7620]: I0318 09:01:00.309161 7620 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-64854d9cff-khm5n_29ba6765-61c9-4f78-8f44-570418000c5c/snapshot-controller/2.log" Mar 18 09:01:01.258904 master-0 kubenswrapper[7620]: I0318 09:01:01.258816 7620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-master-0" Mar 18 09:01:01.259157 master-0 kubenswrapper[7620]: I0318 09:01:01.258926 7620 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-master-0" Mar 18 09:01:06.237417 master-0 kubenswrapper[7620]: I0318 09:01:06.237316 7620 status_manager.go:851] "Failed to get status for pod" podUID="c83737980b9ee109184b1d78e942cf36" pod="kube-system/bootstrap-kube-scheduler-master-0" err="the server was unable to return a response in the time allotted, but 
may still be processing the request (get pods bootstrap-kube-scheduler-master-0)" Mar 18 09:01:06.252575 master-0 kubenswrapper[7620]: I0318 09:01:06.252508 7620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 09:01:06.252760 master-0 kubenswrapper[7620]: I0318 09:01:06.252718 7620 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 09:01:07.948311 master-0 kubenswrapper[7620]: E0318 09:01:07.948252 7620 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s" Mar 18 09:01:08.586175 master-0 kubenswrapper[7620]: I0318 09:01:08.586101 7620 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 09:01:11.297369 master-0 kubenswrapper[7620]: I0318 09:01:11.297275 7620 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-master-0" Mar 18 09:01:11.396075 master-0 kubenswrapper[7620]: I0318 09:01:11.395950 7620 generic.go:334] "Generic (PLEG): container finished" podID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerID="4a7dbd9949adb4dd8d63e9de3470c7186002c65ba78caccdd813c4fb43556282" exitCode=0 Mar 18 09:01:11.396075 master-0 kubenswrapper[7620]: I0318 09:01:11.396021 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" event={"ID":"ad4cf9b2-4e66-4921-a30c-7b659bff06ab","Type":"ContainerDied","Data":"4a7dbd9949adb4dd8d63e9de3470c7186002c65ba78caccdd813c4fb43556282"} Mar 18 09:01:11.396075 master-0 kubenswrapper[7620]: I0318 09:01:11.396063 7620 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" event={"ID":"ad4cf9b2-4e66-4921-a30c-7b659bff06ab","Type":"ContainerStarted","Data":"a4436209a1c80a403c36e67bb8b4310cdae3c04ffc3d3675bb5372419c24b948"} Mar 18 09:01:11.396075 master-0 kubenswrapper[7620]: I0318 09:01:11.396092 7620 scope.go:117] "RemoveContainer" containerID="504f021a6115c5b248227cad9be5358b605b45e875884611b5163b1993a0ac66" Mar 18 09:01:11.479588 master-0 kubenswrapper[7620]: I0318 09:01:11.479515 7620 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" Mar 18 09:01:11.483076 master-0 kubenswrapper[7620]: I0318 09:01:11.482937 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 09:01:11.483076 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld Mar 18 09:01:11.483076 master-0 kubenswrapper[7620]: [+]process-running ok Mar 18 09:01:11.483076 master-0 kubenswrapper[7620]: healthz check failed Mar 18 09:01:11.483076 master-0 kubenswrapper[7620]: I0318 09:01:11.483007 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 09:01:12.482743 master-0 kubenswrapper[7620]: I0318 09:01:12.482647 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 09:01:12.482743 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld Mar 18 09:01:12.482743 master-0 kubenswrapper[7620]: 
[+]process-running ok Mar 18 09:01:12.482743 master-0 kubenswrapper[7620]: healthz check failed Mar 18 09:01:12.482743 master-0 kubenswrapper[7620]: I0318 09:01:12.482739 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 09:01:13.225208 master-0 kubenswrapper[7620]: I0318 09:01:13.225047 7620 scope.go:117] "RemoveContainer" containerID="515ed36be86ba90059406282f67d51d2a047f6233d3a9d4c91573ed7eff6be87" Mar 18 09:01:13.225661 master-0 kubenswrapper[7620]: E0318 09:01:13.225599 7620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=snapshot-controller pod=csi-snapshot-controller-64854d9cff-khm5n_openshift-cluster-storage-operator(29ba6765-61c9-4f78-8f44-570418000c5c)\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-64854d9cff-khm5n" podUID="29ba6765-61c9-4f78-8f44-570418000c5c" Mar 18 09:01:13.481773 master-0 kubenswrapper[7620]: I0318 09:01:13.481578 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 09:01:13.481773 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld Mar 18 09:01:13.481773 master-0 kubenswrapper[7620]: [+]process-running ok Mar 18 09:01:13.481773 master-0 kubenswrapper[7620]: healthz check failed Mar 18 09:01:13.481773 master-0 kubenswrapper[7620]: I0318 09:01:13.481674 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP 
probe failed with statuscode: 500" Mar 18 09:01:14.483287 master-0 kubenswrapper[7620]: I0318 09:01:14.483115 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 09:01:14.483287 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld Mar 18 09:01:14.483287 master-0 kubenswrapper[7620]: [+]process-running ok Mar 18 09:01:14.483287 master-0 kubenswrapper[7620]: healthz check failed Mar 18 09:01:14.483287 master-0 kubenswrapper[7620]: I0318 09:01:14.483273 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 09:01:15.482043 master-0 kubenswrapper[7620]: I0318 09:01:15.481961 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 09:01:15.482043 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld Mar 18 09:01:15.482043 master-0 kubenswrapper[7620]: [+]process-running ok Mar 18 09:01:15.482043 master-0 kubenswrapper[7620]: healthz check failed Mar 18 09:01:15.482421 master-0 kubenswrapper[7620]: I0318 09:01:15.482072 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 09:01:16.258884 master-0 kubenswrapper[7620]: I0318 09:01:16.258787 7620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 09:01:16.281025 master-0 kubenswrapper[7620]: I0318 09:01:16.280964 7620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-master-0" Mar 18 09:01:16.479379 master-0 kubenswrapper[7620]: I0318 09:01:16.479299 7620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" Mar 18 09:01:16.482335 master-0 kubenswrapper[7620]: I0318 09:01:16.482266 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 09:01:16.482335 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld Mar 18 09:01:16.482335 master-0 kubenswrapper[7620]: [+]process-running ok Mar 18 09:01:16.482335 master-0 kubenswrapper[7620]: healthz check failed Mar 18 09:01:16.482741 master-0 kubenswrapper[7620]: I0318 09:01:16.482355 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 09:01:17.482516 master-0 kubenswrapper[7620]: I0318 09:01:17.482418 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 09:01:17.482516 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld Mar 18 09:01:17.482516 master-0 kubenswrapper[7620]: [+]process-running ok Mar 18 09:01:17.482516 master-0 kubenswrapper[7620]: healthz check failed Mar 18 09:01:17.483672 master-0 kubenswrapper[7620]: I0318 09:01:17.482539 7620 
prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 09:01:18.483449 master-0 kubenswrapper[7620]: I0318 09:01:18.483360 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 09:01:18.483449 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld Mar 18 09:01:18.483449 master-0 kubenswrapper[7620]: [+]process-running ok Mar 18 09:01:18.483449 master-0 kubenswrapper[7620]: healthz check failed Mar 18 09:01:18.484399 master-0 kubenswrapper[7620]: I0318 09:01:18.483479 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 09:01:19.481572 master-0 kubenswrapper[7620]: I0318 09:01:19.481469 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 09:01:19.481572 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld Mar 18 09:01:19.481572 master-0 kubenswrapper[7620]: [+]process-running ok Mar 18 09:01:19.481572 master-0 kubenswrapper[7620]: healthz check failed Mar 18 09:01:19.481966 master-0 kubenswrapper[7620]: I0318 09:01:19.481596 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe 
failed with statuscode: 500" Mar 18 09:01:20.483222 master-0 kubenswrapper[7620]: I0318 09:01:20.483130 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 09:01:20.483222 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld Mar 18 09:01:20.483222 master-0 kubenswrapper[7620]: [+]process-running ok Mar 18 09:01:20.483222 master-0 kubenswrapper[7620]: healthz check failed Mar 18 09:01:20.483843 master-0 kubenswrapper[7620]: I0318 09:01:20.483247 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 09:01:20.801723 master-0 kubenswrapper[7620]: E0318 09:01:20.801298 7620 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{bootstrap-kube-scheduler-master-0.189de35cffb44edc kube-system 9953 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-scheduler-master-0,UID:c83737980b9ee109184b1d78e942cf36,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Started,Message:Started container kube-scheduler,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 08:50:16 +0000 UTC,LastTimestamp:2026-03-18 08:58:27.556729745 +0000 UTC m=+571.551511497,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 09:01:21.480191 master-0 kubenswrapper[7620]: I0318 09:01:21.480086 7620 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-6f69995874-cf6qn_97730ec2-e6f1-4f8c-b85c-3c10623d06ce/cluster-baremetal-operator/1.log" Mar 18 09:01:21.481005 master-0 kubenswrapper[7620]: I0318 09:01:21.480972 7620 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-6f69995874-cf6qn_97730ec2-e6f1-4f8c-b85c-3c10623d06ce/cluster-baremetal-operator/0.log" Mar 18 09:01:21.481084 master-0 kubenswrapper[7620]: I0318 09:01:21.481032 7620 generic.go:334] "Generic (PLEG): container finished" podID="97730ec2-e6f1-4f8c-b85c-3c10623d06ce" containerID="ba57860bb4615dc613e8795f7f3436663ef867da7a5a525958b65d7222c4b23f" exitCode=1 Mar 18 09:01:21.481084 master-0 kubenswrapper[7620]: I0318 09:01:21.481068 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-cf6qn" event={"ID":"97730ec2-e6f1-4f8c-b85c-3c10623d06ce","Type":"ContainerDied","Data":"ba57860bb4615dc613e8795f7f3436663ef867da7a5a525958b65d7222c4b23f"} Mar 18 09:01:21.481182 master-0 kubenswrapper[7620]: I0318 09:01:21.481108 7620 scope.go:117] "RemoveContainer" containerID="a6965c370aee0562c7dab05dd0bba9899ece7a915ae59774856223463957b6b4" Mar 18 09:01:21.481427 master-0 kubenswrapper[7620]: I0318 09:01:21.481378 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 09:01:21.481427 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld Mar 18 09:01:21.481427 master-0 kubenswrapper[7620]: [+]process-running ok Mar 18 09:01:21.481427 master-0 kubenswrapper[7620]: healthz check failed Mar 18 09:01:21.481607 master-0 kubenswrapper[7620]: I0318 09:01:21.481449 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" 
podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 09:01:21.482510 master-0 kubenswrapper[7620]: I0318 09:01:21.482072 7620 scope.go:117] "RemoveContainer" containerID="ba57860bb4615dc613e8795f7f3436663ef867da7a5a525958b65d7222c4b23f" Mar 18 09:01:21.482510 master-0 kubenswrapper[7620]: E0318 09:01:21.482325 7620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-baremetal-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=cluster-baremetal-operator pod=cluster-baremetal-operator-6f69995874-cf6qn_openshift-machine-api(97730ec2-e6f1-4f8c-b85c-3c10623d06ce)\"" pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-cf6qn" podUID="97730ec2-e6f1-4f8c-b85c-3c10623d06ce" Mar 18 09:01:22.482482 master-0 kubenswrapper[7620]: I0318 09:01:22.482380 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 09:01:22.482482 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld Mar 18 09:01:22.482482 master-0 kubenswrapper[7620]: [+]process-running ok Mar 18 09:01:22.482482 master-0 kubenswrapper[7620]: healthz check failed Mar 18 09:01:22.483915 master-0 kubenswrapper[7620]: I0318 09:01:22.482503 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 09:01:22.498298 master-0 kubenswrapper[7620]: I0318 09:01:22.498227 7620 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-6f69995874-cf6qn_97730ec2-e6f1-4f8c-b85c-3c10623d06ce/cluster-baremetal-operator/1.log" Mar 18 09:01:23.481912 master-0 kubenswrapper[7620]: I0318 09:01:23.481792 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 09:01:23.481912 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld Mar 18 09:01:23.481912 master-0 kubenswrapper[7620]: [+]process-running ok Mar 18 09:01:23.481912 master-0 kubenswrapper[7620]: healthz check failed Mar 18 09:01:23.482395 master-0 kubenswrapper[7620]: I0318 09:01:23.481933 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 09:01:23.511995 master-0 kubenswrapper[7620]: I0318 09:01:23.511931 7620 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-66b84d69b-7h94d_94e9e4fc-bfaa-4ba5-ad6d-fe76b91932e9/ingress-operator/4.log" Mar 18 09:01:23.512912 master-0 kubenswrapper[7620]: I0318 09:01:23.512838 7620 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-66b84d69b-7h94d_94e9e4fc-bfaa-4ba5-ad6d-fe76b91932e9/ingress-operator/3.log" Mar 18 09:01:23.513806 master-0 kubenswrapper[7620]: I0318 09:01:23.513739 7620 generic.go:334] "Generic (PLEG): container finished" podID="94e9e4fc-bfaa-4ba5-ad6d-fe76b91932e9" containerID="fe944915d18e348bfb79682afadf9c6819f22fab134c6c6c62f0a35f31f26a1f" exitCode=1 Mar 18 09:01:23.513969 master-0 kubenswrapper[7620]: I0318 09:01:23.513814 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ingress-operator/ingress-operator-66b84d69b-7h94d" event={"ID":"94e9e4fc-bfaa-4ba5-ad6d-fe76b91932e9","Type":"ContainerDied","Data":"fe944915d18e348bfb79682afadf9c6819f22fab134c6c6c62f0a35f31f26a1f"} Mar 18 09:01:23.513969 master-0 kubenswrapper[7620]: I0318 09:01:23.513910 7620 scope.go:117] "RemoveContainer" containerID="1e621180058478223aaee3c2dc23f5260e37988416b72d674dfdaa92a6a8ef11" Mar 18 09:01:23.514822 master-0 kubenswrapper[7620]: I0318 09:01:23.514765 7620 scope.go:117] "RemoveContainer" containerID="fe944915d18e348bfb79682afadf9c6819f22fab134c6c6c62f0a35f31f26a1f" Mar 18 09:01:23.515313 master-0 kubenswrapper[7620]: E0318 09:01:23.515259 7620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=ingress-operator pod=ingress-operator-66b84d69b-7h94d_openshift-ingress-operator(94e9e4fc-bfaa-4ba5-ad6d-fe76b91932e9)\"" pod="openshift-ingress-operator/ingress-operator-66b84d69b-7h94d" podUID="94e9e4fc-bfaa-4ba5-ad6d-fe76b91932e9" Mar 18 09:01:24.224978 master-0 kubenswrapper[7620]: I0318 09:01:24.224902 7620 scope.go:117] "RemoveContainer" containerID="515ed36be86ba90059406282f67d51d2a047f6233d3a9d4c91573ed7eff6be87" Mar 18 09:01:24.482503 master-0 kubenswrapper[7620]: I0318 09:01:24.482323 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 09:01:24.482503 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld Mar 18 09:01:24.482503 master-0 kubenswrapper[7620]: [+]process-running ok Mar 18 09:01:24.482503 master-0 kubenswrapper[7620]: healthz check failed Mar 18 09:01:24.482503 master-0 kubenswrapper[7620]: I0318 09:01:24.482437 7620 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 09:01:24.524625 master-0 kubenswrapper[7620]: I0318 09:01:24.524552 7620 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-66b84d69b-7h94d_94e9e4fc-bfaa-4ba5-ad6d-fe76b91932e9/ingress-operator/4.log" Mar 18 09:01:24.527847 master-0 kubenswrapper[7620]: I0318 09:01:24.527774 7620 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-64854d9cff-khm5n_29ba6765-61c9-4f78-8f44-570418000c5c/snapshot-controller/2.log" Mar 18 09:01:24.528098 master-0 kubenswrapper[7620]: I0318 09:01:24.527935 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-64854d9cff-khm5n" event={"ID":"29ba6765-61c9-4f78-8f44-570418000c5c","Type":"ContainerStarted","Data":"eb8c2b58df79128dda5fbfc30648d542ddf26d01723e229dbbb1234b6cbc0067"} Mar 18 09:01:25.482398 master-0 kubenswrapper[7620]: I0318 09:01:25.482180 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 09:01:25.482398 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld Mar 18 09:01:25.482398 master-0 kubenswrapper[7620]: [+]process-running ok Mar 18 09:01:25.482398 master-0 kubenswrapper[7620]: healthz check failed Mar 18 09:01:25.482398 master-0 kubenswrapper[7620]: I0318 09:01:25.482276 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 
09:01:26.481702 master-0 kubenswrapper[7620]: I0318 09:01:26.481602 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 09:01:26.481702 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld Mar 18 09:01:26.481702 master-0 kubenswrapper[7620]: [+]process-running ok Mar 18 09:01:26.481702 master-0 kubenswrapper[7620]: healthz check failed Mar 18 09:01:26.482974 master-0 kubenswrapper[7620]: I0318 09:01:26.481713 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 09:01:27.482445 master-0 kubenswrapper[7620]: I0318 09:01:27.482319 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 09:01:27.482445 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld Mar 18 09:01:27.482445 master-0 kubenswrapper[7620]: [+]process-running ok Mar 18 09:01:27.482445 master-0 kubenswrapper[7620]: healthz check failed Mar 18 09:01:27.482445 master-0 kubenswrapper[7620]: I0318 09:01:27.482430 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 09:01:28.482252 master-0 kubenswrapper[7620]: I0318 09:01:28.482149 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure 
output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 09:01:28.482252 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld Mar 18 09:01:28.482252 master-0 kubenswrapper[7620]: [+]process-running ok Mar 18 09:01:28.482252 master-0 kubenswrapper[7620]: healthz check failed Mar 18 09:01:28.483466 master-0 kubenswrapper[7620]: I0318 09:01:28.482253 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 09:01:29.481580 master-0 kubenswrapper[7620]: I0318 09:01:29.481520 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 09:01:29.481580 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld Mar 18 09:01:29.481580 master-0 kubenswrapper[7620]: [+]process-running ok Mar 18 09:01:29.481580 master-0 kubenswrapper[7620]: healthz check failed Mar 18 09:01:29.481897 master-0 kubenswrapper[7620]: I0318 09:01:29.481584 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 09:01:30.481433 master-0 kubenswrapper[7620]: I0318 09:01:30.481320 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 09:01:30.481433 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld Mar 18 09:01:30.481433 
master-0 kubenswrapper[7620]: [+]process-running ok Mar 18 09:01:30.481433 master-0 kubenswrapper[7620]: healthz check failed Mar 18 09:01:30.481433 master-0 kubenswrapper[7620]: I0318 09:01:30.481412 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 09:01:31.246311 master-0 kubenswrapper[7620]: E0318 09:01:31.246214 7620 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 09:01:31.482995 master-0 kubenswrapper[7620]: I0318 09:01:31.482836 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 09:01:31.482995 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld Mar 18 09:01:31.482995 master-0 kubenswrapper[7620]: [+]process-running ok Mar 18 09:01:31.482995 master-0 kubenswrapper[7620]: healthz check failed Mar 18 09:01:31.482995 master-0 kubenswrapper[7620]: I0318 09:01:31.482957 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 09:01:31.585215 master-0 kubenswrapper[7620]: I0318 09:01:31.585060 7620 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="c0c3df70-7d0a-4a46-a625-20553ab284d0" Mar 18 09:01:31.585215 master-0 kubenswrapper[7620]: I0318 09:01:31.585115 7620 mirror_client.go:130] "Deleting a mirror 
pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="c0c3df70-7d0a-4a46-a625-20553ab284d0" Mar 18 09:01:32.224483 master-0 kubenswrapper[7620]: I0318 09:01:32.224427 7620 scope.go:117] "RemoveContainer" containerID="ba57860bb4615dc613e8795f7f3436663ef867da7a5a525958b65d7222c4b23f" Mar 18 09:01:32.482900 master-0 kubenswrapper[7620]: I0318 09:01:32.482651 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 09:01:32.482900 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld Mar 18 09:01:32.482900 master-0 kubenswrapper[7620]: [+]process-running ok Mar 18 09:01:32.482900 master-0 kubenswrapper[7620]: healthz check failed Mar 18 09:01:32.482900 master-0 kubenswrapper[7620]: I0318 09:01:32.482773 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 09:01:32.596025 master-0 kubenswrapper[7620]: I0318 09:01:32.595830 7620 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-6f69995874-cf6qn_97730ec2-e6f1-4f8c-b85c-3c10623d06ce/cluster-baremetal-operator/1.log" Mar 18 09:01:32.596765 master-0 kubenswrapper[7620]: I0318 09:01:32.596688 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-cf6qn" event={"ID":"97730ec2-e6f1-4f8c-b85c-3c10623d06ce","Type":"ContainerStarted","Data":"06a87992f2b33f44d20bf40458a6331a7591c2c0c3c1b6cd6a68f8f0a04bcade"} Mar 18 09:01:33.301066 master-0 kubenswrapper[7620]: E0318 09:01:33.300988 7620 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not 
complete within requested timeout - context deadline exceeded" pod="openshift-etcd/etcd-master-0" Mar 18 09:01:33.481711 master-0 kubenswrapper[7620]: I0318 09:01:33.481627 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 09:01:33.481711 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld Mar 18 09:01:33.481711 master-0 kubenswrapper[7620]: [+]process-running ok Mar 18 09:01:33.481711 master-0 kubenswrapper[7620]: healthz check failed Mar 18 09:01:33.482012 master-0 kubenswrapper[7620]: I0318 09:01:33.481727 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 09:01:33.605621 master-0 kubenswrapper[7620]: I0318 09:01:33.605401 7620 kubelet.go:1909] "Trying to delete pod" pod="openshift-etcd/etcd-master-0" podUID="ffe89c95-d4e9-4b8d-ae76-37d7bef448df" Mar 18 09:01:33.605621 master-0 kubenswrapper[7620]: I0318 09:01:33.605464 7620 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0" podUID="ffe89c95-d4e9-4b8d-ae76-37d7bef448df" Mar 18 09:01:34.482676 master-0 kubenswrapper[7620]: I0318 09:01:34.482578 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 09:01:34.482676 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld Mar 18 09:01:34.482676 master-0 kubenswrapper[7620]: [+]process-running ok Mar 18 09:01:34.482676 master-0 kubenswrapper[7620]: healthz check failed Mar 18 09:01:34.483730 master-0 
kubenswrapper[7620]: I0318 09:01:34.482681 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 09:01:35.482886 master-0 kubenswrapper[7620]: I0318 09:01:35.482781 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 09:01:35.482886 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld Mar 18 09:01:35.482886 master-0 kubenswrapper[7620]: [+]process-running ok Mar 18 09:01:35.482886 master-0 kubenswrapper[7620]: healthz check failed Mar 18 09:01:35.483611 master-0 kubenswrapper[7620]: I0318 09:01:35.482931 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 09:01:36.225034 master-0 kubenswrapper[7620]: I0318 09:01:36.224952 7620 scope.go:117] "RemoveContainer" containerID="fe944915d18e348bfb79682afadf9c6819f22fab134c6c6c62f0a35f31f26a1f" Mar 18 09:01:36.225677 master-0 kubenswrapper[7620]: E0318 09:01:36.225450 7620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=ingress-operator pod=ingress-operator-66b84d69b-7h94d_openshift-ingress-operator(94e9e4fc-bfaa-4ba5-ad6d-fe76b91932e9)\"" pod="openshift-ingress-operator/ingress-operator-66b84d69b-7h94d" podUID="94e9e4fc-bfaa-4ba5-ad6d-fe76b91932e9" Mar 18 09:01:36.482593 master-0 kubenswrapper[7620]: I0318 09:01:36.482435 7620 patch_prober.go:28] interesting 
pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 09:01:36.482593 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld Mar 18 09:01:36.482593 master-0 kubenswrapper[7620]: [+]process-running ok Mar 18 09:01:36.482593 master-0 kubenswrapper[7620]: healthz check failed Mar 18 09:01:36.482593 master-0 kubenswrapper[7620]: I0318 09:01:36.482534 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 09:01:37.482958 master-0 kubenswrapper[7620]: I0318 09:01:37.482785 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 09:01:37.482958 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld Mar 18 09:01:37.482958 master-0 kubenswrapper[7620]: [+]process-running ok Mar 18 09:01:37.482958 master-0 kubenswrapper[7620]: healthz check failed Mar 18 09:01:37.482958 master-0 kubenswrapper[7620]: I0318 09:01:37.482953 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 09:01:38.482041 master-0 kubenswrapper[7620]: I0318 09:01:38.481955 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 
09:01:38.482041 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld Mar 18 09:01:38.482041 master-0 kubenswrapper[7620]: [+]process-running ok Mar 18 09:01:38.482041 master-0 kubenswrapper[7620]: healthz check failed Mar 18 09:01:38.482041 master-0 kubenswrapper[7620]: I0318 09:01:38.482032 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 09:01:39.483809 master-0 kubenswrapper[7620]: I0318 09:01:39.483753 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 09:01:39.483809 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld Mar 18 09:01:39.483809 master-0 kubenswrapper[7620]: [+]process-running ok Mar 18 09:01:39.483809 master-0 kubenswrapper[7620]: healthz check failed Mar 18 09:01:39.484817 master-0 kubenswrapper[7620]: I0318 09:01:39.484779 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 09:01:40.482464 master-0 kubenswrapper[7620]: I0318 09:01:40.482397 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 09:01:40.482464 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld Mar 18 09:01:40.482464 master-0 kubenswrapper[7620]: [+]process-running ok Mar 18 09:01:40.482464 master-0 kubenswrapper[7620]: healthz 
check failed Mar 18 09:01:40.483074 master-0 kubenswrapper[7620]: I0318 09:01:40.483041 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 09:01:41.481948 master-0 kubenswrapper[7620]: I0318 09:01:41.481808 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 09:01:41.481948 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld Mar 18 09:01:41.481948 master-0 kubenswrapper[7620]: [+]process-running ok Mar 18 09:01:41.481948 master-0 kubenswrapper[7620]: healthz check failed Mar 18 09:01:41.482591 master-0 kubenswrapper[7620]: I0318 09:01:41.481941 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 09:01:42.482457 master-0 kubenswrapper[7620]: I0318 09:01:42.482329 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 09:01:42.482457 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld Mar 18 09:01:42.482457 master-0 kubenswrapper[7620]: [+]process-running ok Mar 18 09:01:42.482457 master-0 kubenswrapper[7620]: healthz check failed Mar 18 09:01:42.482457 master-0 kubenswrapper[7620]: I0318 09:01:42.482451 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" 
podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 09:01:42.681787 master-0 kubenswrapper[7620]: I0318 09:01:42.681725 7620 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_c229b92d307e46237f6273edcc98d387/cluster-policy-controller/3.log" Mar 18 09:01:42.684284 master-0 kubenswrapper[7620]: I0318 09:01:42.684249 7620 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_c229b92d307e46237f6273edcc98d387/kube-controller-manager/0.log" Mar 18 09:01:42.684620 master-0 kubenswrapper[7620]: I0318 09:01:42.684584 7620 generic.go:334] "Generic (PLEG): container finished" podID="c229b92d307e46237f6273edcc98d387" containerID="25aa8e7a5fe1cd4cb308d45095cfc8ec891476603ff1037e70498c15fb355808" exitCode=1 Mar 18 09:01:42.684909 master-0 kubenswrapper[7620]: I0318 09:01:42.684732 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"c229b92d307e46237f6273edcc98d387","Type":"ContainerDied","Data":"25aa8e7a5fe1cd4cb308d45095cfc8ec891476603ff1037e70498c15fb355808"} Mar 18 09:01:43.483097 master-0 kubenswrapper[7620]: I0318 09:01:43.483009 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 09:01:43.483097 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld Mar 18 09:01:43.483097 master-0 kubenswrapper[7620]: [+]process-running ok Mar 18 09:01:43.483097 master-0 kubenswrapper[7620]: healthz check failed Mar 18 09:01:43.483966 master-0 kubenswrapper[7620]: I0318 09:01:43.483155 7620 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 09:01:44.482192 master-0 kubenswrapper[7620]: I0318 09:01:44.482108 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 09:01:44.482192 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld Mar 18 09:01:44.482192 master-0 kubenswrapper[7620]: [+]process-running ok Mar 18 09:01:44.482192 master-0 kubenswrapper[7620]: healthz check failed Mar 18 09:01:44.482192 master-0 kubenswrapper[7620]: I0318 09:01:44.482194 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 09:01:45.481881 master-0 kubenswrapper[7620]: I0318 09:01:45.481756 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 09:01:45.481881 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld Mar 18 09:01:45.481881 master-0 kubenswrapper[7620]: [+]process-running ok Mar 18 09:01:45.481881 master-0 kubenswrapper[7620]: healthz check failed Mar 18 09:01:45.481881 master-0 kubenswrapper[7620]: I0318 09:01:45.481844 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 09:01:45.876023 
master-0 kubenswrapper[7620]: E0318 09:01:45.875845 7620 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-18T09:01:35Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-18T09:01:35Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-18T09:01:35Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-18T09:01:35Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"master-0\": Patch \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0/status?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 18 09:01:46.253487 master-0 kubenswrapper[7620]: I0318 09:01:46.253379 7620 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/kube-controller-manager namespace/openshift-kube-controller-manager: Liveness probe status=failure output="Get \"https://192.168.32.10:10257/healthz\": dial tcp 192.168.32.10:10257: connect: connection refused" start-of-body= Mar 18 09:01:46.253487 master-0 kubenswrapper[7620]: I0318 09:01:46.253411 7620 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/kube-controller-manager namespace/openshift-kube-controller-manager: Readiness probe status=failure output="Get \"https://192.168.32.10:10257/healthz\": dial tcp 
192.168.32.10:10257: connect: connection refused" start-of-body= Mar 18 09:01:46.253487 master-0 kubenswrapper[7620]: I0318 09:01:46.253478 7620 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="c229b92d307e46237f6273edcc98d387" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": dial tcp 192.168.32.10:10257: connect: connection refused" Mar 18 09:01:46.253939 master-0 kubenswrapper[7620]: I0318 09:01:46.253557 7620 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="c229b92d307e46237f6273edcc98d387" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": dial tcp 192.168.32.10:10257: connect: connection refused" Mar 18 09:01:46.480990 master-0 kubenswrapper[7620]: I0318 09:01:46.480912 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 09:01:46.480990 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld Mar 18 09:01:46.480990 master-0 kubenswrapper[7620]: [+]process-running ok Mar 18 09:01:46.480990 master-0 kubenswrapper[7620]: healthz check failed Mar 18 09:01:46.481351 master-0 kubenswrapper[7620]: I0318 09:01:46.481022 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 09:01:47.481604 master-0 kubenswrapper[7620]: I0318 09:01:47.481526 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup 
probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 09:01:47.481604 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld Mar 18 09:01:47.481604 master-0 kubenswrapper[7620]: [+]process-running ok Mar 18 09:01:47.481604 master-0 kubenswrapper[7620]: healthz check failed Mar 18 09:01:47.481604 master-0 kubenswrapper[7620]: I0318 09:01:47.481597 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 09:01:48.482328 master-0 kubenswrapper[7620]: I0318 09:01:48.482254 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 09:01:48.482328 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld Mar 18 09:01:48.482328 master-0 kubenswrapper[7620]: [+]process-running ok Mar 18 09:01:48.482328 master-0 kubenswrapper[7620]: healthz check failed Mar 18 09:01:48.483103 master-0 kubenswrapper[7620]: I0318 09:01:48.482378 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 09:01:49.224087 master-0 kubenswrapper[7620]: I0318 09:01:49.223976 7620 scope.go:117] "RemoveContainer" containerID="fe944915d18e348bfb79682afadf9c6819f22fab134c6c6c62f0a35f31f26a1f" Mar 18 09:01:49.224451 master-0 kubenswrapper[7620]: E0318 09:01:49.224396 7620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 1m20s 
restarting failed container=ingress-operator pod=ingress-operator-66b84d69b-7h94d_openshift-ingress-operator(94e9e4fc-bfaa-4ba5-ad6d-fe76b91932e9)\"" pod="openshift-ingress-operator/ingress-operator-66b84d69b-7h94d" podUID="94e9e4fc-bfaa-4ba5-ad6d-fe76b91932e9" Mar 18 09:01:49.481415 master-0 kubenswrapper[7620]: I0318 09:01:49.481242 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 09:01:49.481415 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld Mar 18 09:01:49.481415 master-0 kubenswrapper[7620]: [+]process-running ok Mar 18 09:01:49.481415 master-0 kubenswrapper[7620]: healthz check failed Mar 18 09:01:49.481719 master-0 kubenswrapper[7620]: I0318 09:01:49.481396 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 09:01:50.482524 master-0 kubenswrapper[7620]: I0318 09:01:50.482426 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 09:01:50.482524 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld Mar 18 09:01:50.482524 master-0 kubenswrapper[7620]: [+]process-running ok Mar 18 09:01:50.482524 master-0 kubenswrapper[7620]: healthz check failed Mar 18 09:01:50.483513 master-0 kubenswrapper[7620]: I0318 09:01:50.482525 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" 
output="HTTP probe failed with statuscode: 500" Mar 18 09:01:51.482488 master-0 kubenswrapper[7620]: I0318 09:01:51.482380 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 09:01:51.482488 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld Mar 18 09:01:51.482488 master-0 kubenswrapper[7620]: [+]process-running ok Mar 18 09:01:51.482488 master-0 kubenswrapper[7620]: healthz check failed Mar 18 09:01:51.482488 master-0 kubenswrapper[7620]: I0318 09:01:51.482483 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 09:01:52.482159 master-0 kubenswrapper[7620]: I0318 09:01:52.482092 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 09:01:52.482159 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld Mar 18 09:01:52.482159 master-0 kubenswrapper[7620]: [+]process-running ok Mar 18 09:01:52.482159 master-0 kubenswrapper[7620]: healthz check failed Mar 18 09:01:52.482828 master-0 kubenswrapper[7620]: I0318 09:01:52.482781 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 09:01:53.482686 master-0 kubenswrapper[7620]: I0318 09:01:53.482594 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router 
namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 09:01:53.482686 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld Mar 18 09:01:53.482686 master-0 kubenswrapper[7620]: [+]process-running ok Mar 18 09:01:53.482686 master-0 kubenswrapper[7620]: healthz check failed Mar 18 09:01:53.483308 master-0 kubenswrapper[7620]: I0318 09:01:53.482730 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 09:01:54.482541 master-0 kubenswrapper[7620]: I0318 09:01:54.482453 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 09:01:54.482541 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld Mar 18 09:01:54.482541 master-0 kubenswrapper[7620]: [+]process-running ok Mar 18 09:01:54.482541 master-0 kubenswrapper[7620]: healthz check failed Mar 18 09:01:54.483990 master-0 kubenswrapper[7620]: I0318 09:01:54.483935 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 09:01:54.788898 master-0 kubenswrapper[7620]: I0318 09:01:54.788803 7620 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-64854d9cff-khm5n_29ba6765-61c9-4f78-8f44-570418000c5c/snapshot-controller/3.log" Mar 18 09:01:54.789794 master-0 kubenswrapper[7620]: I0318 09:01:54.789757 7620 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-64854d9cff-khm5n_29ba6765-61c9-4f78-8f44-570418000c5c/snapshot-controller/2.log" Mar 18 09:01:54.789936 master-0 kubenswrapper[7620]: I0318 09:01:54.789831 7620 generic.go:334] "Generic (PLEG): container finished" podID="29ba6765-61c9-4f78-8f44-570418000c5c" containerID="eb8c2b58df79128dda5fbfc30648d542ddf26d01723e229dbbb1234b6cbc0067" exitCode=1 Mar 18 09:01:54.789983 master-0 kubenswrapper[7620]: I0318 09:01:54.789957 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-64854d9cff-khm5n" event={"ID":"29ba6765-61c9-4f78-8f44-570418000c5c","Type":"ContainerDied","Data":"eb8c2b58df79128dda5fbfc30648d542ddf26d01723e229dbbb1234b6cbc0067"} Mar 18 09:01:54.790041 master-0 kubenswrapper[7620]: I0318 09:01:54.790016 7620 scope.go:117] "RemoveContainer" containerID="515ed36be86ba90059406282f67d51d2a047f6233d3a9d4c91573ed7eff6be87" Mar 18 09:01:54.790719 master-0 kubenswrapper[7620]: I0318 09:01:54.790690 7620 scope.go:117] "RemoveContainer" containerID="eb8c2b58df79128dda5fbfc30648d542ddf26d01723e229dbbb1234b6cbc0067" Mar 18 09:01:54.791018 master-0 kubenswrapper[7620]: E0318 09:01:54.790987 7620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=snapshot-controller pod=csi-snapshot-controller-64854d9cff-khm5n_openshift-cluster-storage-operator(29ba6765-61c9-4f78-8f44-570418000c5c)\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-64854d9cff-khm5n" podUID="29ba6765-61c9-4f78-8f44-570418000c5c" Mar 18 09:01:54.807697 master-0 kubenswrapper[7620]: E0318 09:01:54.807536 7620 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" 
event="&Event{ObjectMeta:{ingress-operator-66b84d69b-7h94d.189de38f6458fbb9 openshift-ingress-operator 12420 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-ingress-operator,Name:ingress-operator-66b84d69b-7h94d,UID:94e9e4fc-bfaa-4ba5-ad6d-fe76b91932e9,APIVersion:v1,ResourceVersion:3642,FieldPath:spec.containers{ingress-operator},},Reason:BackOff,Message:Back-off restarting failed container ingress-operator in pod ingress-operator-66b84d69b-7h94d_openshift-ingress-operator(94e9e4fc-bfaa-4ba5-ad6d-fe76b91932e9),Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 08:53:53 +0000 UTC,LastTimestamp:2026-03-18 08:58:35.818060106 +0000 UTC m=+579.812841888,Count:4,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 09:01:55.482795 master-0 kubenswrapper[7620]: I0318 09:01:55.482680 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 09:01:55.482795 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld Mar 18 09:01:55.482795 master-0 kubenswrapper[7620]: [+]process-running ok Mar 18 09:01:55.482795 master-0 kubenswrapper[7620]: healthz check failed Mar 18 09:01:55.482795 master-0 kubenswrapper[7620]: I0318 09:01:55.482792 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 09:01:55.814079 master-0 kubenswrapper[7620]: I0318 09:01:55.813987 7620 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-64854d9cff-khm5n_29ba6765-61c9-4f78-8f44-570418000c5c/snapshot-controller/3.log" Mar 18 09:01:55.877232 master-0 kubenswrapper[7620]: E0318 09:01:55.877105 7620 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 18 09:01:56.253580 master-0 kubenswrapper[7620]: I0318 09:01:56.253524 7620 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/kube-controller-manager namespace/openshift-kube-controller-manager: Liveness probe status=failure output="Get \"https://192.168.32.10:10257/healthz\": dial tcp 192.168.32.10:10257: connect: connection refused" start-of-body= Mar 18 09:01:56.253950 master-0 kubenswrapper[7620]: I0318 09:01:56.253920 7620 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="c229b92d307e46237f6273edcc98d387" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": dial tcp 192.168.32.10:10257: connect: connection refused" Mar 18 09:01:56.254118 master-0 kubenswrapper[7620]: I0318 09:01:56.254076 7620 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/kube-controller-manager namespace/openshift-kube-controller-manager: Readiness probe status=failure output="Get \"https://192.168.32.10:10257/healthz\": dial tcp 192.168.32.10:10257: connect: connection refused" start-of-body= Mar 18 09:01:56.254176 master-0 kubenswrapper[7620]: I0318 09:01:56.254147 7620 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="c229b92d307e46237f6273edcc98d387" containerName="kube-controller-manager" 
probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": dial tcp 192.168.32.10:10257: connect: connection refused" Mar 18 09:01:56.483000 master-0 kubenswrapper[7620]: I0318 09:01:56.482828 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 09:01:56.483000 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld Mar 18 09:01:56.483000 master-0 kubenswrapper[7620]: [+]process-running ok Mar 18 09:01:56.483000 master-0 kubenswrapper[7620]: healthz check failed Mar 18 09:01:56.484274 master-0 kubenswrapper[7620]: I0318 09:01:56.484049 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 09:01:57.482961 master-0 kubenswrapper[7620]: I0318 09:01:57.482878 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 09:01:57.482961 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld Mar 18 09:01:57.482961 master-0 kubenswrapper[7620]: [+]process-running ok Mar 18 09:01:57.482961 master-0 kubenswrapper[7620]: healthz check failed Mar 18 09:01:57.484030 master-0 kubenswrapper[7620]: I0318 09:01:57.482980 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 09:01:58.481473 master-0 kubenswrapper[7620]: I0318 09:01:58.481388 7620 
patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 09:01:58.481473 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld Mar 18 09:01:58.481473 master-0 kubenswrapper[7620]: [+]process-running ok Mar 18 09:01:58.481473 master-0 kubenswrapper[7620]: healthz check failed Mar 18 09:01:58.481957 master-0 kubenswrapper[7620]: I0318 09:01:58.481497 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 09:01:59.482578 master-0 kubenswrapper[7620]: I0318 09:01:59.482495 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 09:01:59.482578 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld Mar 18 09:01:59.482578 master-0 kubenswrapper[7620]: [+]process-running ok Mar 18 09:01:59.482578 master-0 kubenswrapper[7620]: healthz check failed Mar 18 09:01:59.483372 master-0 kubenswrapper[7620]: I0318 09:01:59.482585 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 09:02:00.481881 master-0 kubenswrapper[7620]: I0318 09:02:00.481743 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" 
start-of-body=[-]backend-http failed: reason withheld Mar 18 09:02:00.481881 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld Mar 18 09:02:00.481881 master-0 kubenswrapper[7620]: [+]process-running ok Mar 18 09:02:00.481881 master-0 kubenswrapper[7620]: healthz check failed Mar 18 09:02:00.482335 master-0 kubenswrapper[7620]: I0318 09:02:00.481936 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 09:02:01.481913 master-0 kubenswrapper[7620]: I0318 09:02:01.481791 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 09:02:01.481913 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld Mar 18 09:02:01.481913 master-0 kubenswrapper[7620]: [+]process-running ok Mar 18 09:02:01.481913 master-0 kubenswrapper[7620]: healthz check failed Mar 18 09:02:01.481913 master-0 kubenswrapper[7620]: I0318 09:02:01.481901 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 09:02:02.225006 master-0 kubenswrapper[7620]: I0318 09:02:02.224552 7620 scope.go:117] "RemoveContainer" containerID="fe944915d18e348bfb79682afadf9c6819f22fab134c6c6c62f0a35f31f26a1f" Mar 18 09:02:02.225265 master-0 kubenswrapper[7620]: E0318 09:02:02.225102 7620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=ingress-operator 
pod=ingress-operator-66b84d69b-7h94d_openshift-ingress-operator(94e9e4fc-bfaa-4ba5-ad6d-fe76b91932e9)\"" pod="openshift-ingress-operator/ingress-operator-66b84d69b-7h94d" podUID="94e9e4fc-bfaa-4ba5-ad6d-fe76b91932e9" Mar 18 09:02:02.483901 master-0 kubenswrapper[7620]: I0318 09:02:02.483749 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 09:02:02.483901 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld Mar 18 09:02:02.483901 master-0 kubenswrapper[7620]: [+]process-running ok Mar 18 09:02:02.483901 master-0 kubenswrapper[7620]: healthz check failed Mar 18 09:02:02.485077 master-0 kubenswrapper[7620]: I0318 09:02:02.483925 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 09:02:03.482884 master-0 kubenswrapper[7620]: I0318 09:02:03.482755 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 09:02:03.482884 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld Mar 18 09:02:03.482884 master-0 kubenswrapper[7620]: [+]process-running ok Mar 18 09:02:03.482884 master-0 kubenswrapper[7620]: healthz check failed Mar 18 09:02:03.483504 master-0 kubenswrapper[7620]: I0318 09:02:03.482920 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" 
Mar 18 09:02:04.484109 master-0 kubenswrapper[7620]: I0318 09:02:04.484028 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 09:02:04.484109 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld Mar 18 09:02:04.484109 master-0 kubenswrapper[7620]: [+]process-running ok Mar 18 09:02:04.484109 master-0 kubenswrapper[7620]: healthz check failed Mar 18 09:02:04.484109 master-0 kubenswrapper[7620]: I0318 09:02:04.484099 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 09:02:05.482172 master-0 kubenswrapper[7620]: I0318 09:02:05.482100 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 09:02:05.482172 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld Mar 18 09:02:05.482172 master-0 kubenswrapper[7620]: [+]process-running ok Mar 18 09:02:05.482172 master-0 kubenswrapper[7620]: healthz check failed Mar 18 09:02:05.482914 master-0 kubenswrapper[7620]: I0318 09:02:05.482842 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 09:02:05.588740 master-0 kubenswrapper[7620]: E0318 09:02:05.588659 7620 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline 
exceeded" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 09:02:05.878011 master-0 kubenswrapper[7620]: E0318 09:02:05.877922 7620 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 18 09:02:05.909635 master-0 kubenswrapper[7620]: I0318 09:02:05.909534 7620 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="c0c3df70-7d0a-4a46-a625-20553ab284d0" Mar 18 09:02:05.909635 master-0 kubenswrapper[7620]: I0318 09:02:05.909606 7620 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="c0c3df70-7d0a-4a46-a625-20553ab284d0" Mar 18 09:02:06.240334 master-0 kubenswrapper[7620]: I0318 09:02:06.240233 7620 status_manager.go:851] "Failed to get status for pod" podUID="16d633c5-e0aa-4fb6-83e0-a2e976334406" pod="openshift-network-node-identity/network-node-identity-n5vqx" err="the server was unable to return a response in the time allotted, but may still be processing the request (get pods network-node-identity-n5vqx)" Mar 18 09:02:06.252518 master-0 kubenswrapper[7620]: I0318 09:02:06.252455 7620 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 09:02:06.252623 master-0 kubenswrapper[7620]: I0318 09:02:06.252546 7620 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 09:02:06.252623 master-0 kubenswrapper[7620]: I0318 09:02:06.252584 7620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 09:02:06.483995 
master-0 kubenswrapper[7620]: I0318 09:02:06.483802 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 09:02:06.483995 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld Mar 18 09:02:06.483995 master-0 kubenswrapper[7620]: [+]process-running ok Mar 18 09:02:06.483995 master-0 kubenswrapper[7620]: healthz check failed Mar 18 09:02:06.483995 master-0 kubenswrapper[7620]: I0318 09:02:06.483964 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 09:02:07.225239 master-0 kubenswrapper[7620]: I0318 09:02:07.225133 7620 scope.go:117] "RemoveContainer" containerID="eb8c2b58df79128dda5fbfc30648d542ddf26d01723e229dbbb1234b6cbc0067" Mar 18 09:02:07.226249 master-0 kubenswrapper[7620]: E0318 09:02:07.225463 7620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=snapshot-controller pod=csi-snapshot-controller-64854d9cff-khm5n_openshift-cluster-storage-operator(29ba6765-61c9-4f78-8f44-570418000c5c)\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-64854d9cff-khm5n" podUID="29ba6765-61c9-4f78-8f44-570418000c5c" Mar 18 09:02:07.482916 master-0 kubenswrapper[7620]: I0318 09:02:07.482688 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 09:02:07.482916 master-0 kubenswrapper[7620]: [-]has-synced failed: reason 
withheld Mar 18 09:02:07.482916 master-0 kubenswrapper[7620]: [+]process-running ok Mar 18 09:02:07.482916 master-0 kubenswrapper[7620]: healthz check failed Mar 18 09:02:07.482916 master-0 kubenswrapper[7620]: I0318 09:02:07.482795 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 09:02:07.608707 master-0 kubenswrapper[7620]: E0318 09:02:07.608624 7620 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="openshift-etcd/etcd-master-0" Mar 18 09:02:08.481941 master-0 kubenswrapper[7620]: I0318 09:02:08.481836 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 09:02:08.481941 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld Mar 18 09:02:08.481941 master-0 kubenswrapper[7620]: [+]process-running ok Mar 18 09:02:08.481941 master-0 kubenswrapper[7620]: healthz check failed Mar 18 09:02:08.482820 master-0 kubenswrapper[7620]: I0318 09:02:08.481968 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 09:02:09.483611 master-0 kubenswrapper[7620]: I0318 09:02:09.483396 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 09:02:09.483611 master-0 
kubenswrapper[7620]: [-]has-synced failed: reason withheld Mar 18 09:02:09.483611 master-0 kubenswrapper[7620]: [+]process-running ok Mar 18 09:02:09.483611 master-0 kubenswrapper[7620]: healthz check failed Mar 18 09:02:09.483611 master-0 kubenswrapper[7620]: I0318 09:02:09.483565 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 09:02:10.482249 master-0 kubenswrapper[7620]: I0318 09:02:10.482106 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 09:02:10.482249 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld Mar 18 09:02:10.482249 master-0 kubenswrapper[7620]: [+]process-running ok Mar 18 09:02:10.482249 master-0 kubenswrapper[7620]: healthz check failed Mar 18 09:02:10.482647 master-0 kubenswrapper[7620]: I0318 09:02:10.482291 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 09:02:11.483711 master-0 kubenswrapper[7620]: I0318 09:02:11.483600 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 09:02:11.483711 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld Mar 18 09:02:11.483711 master-0 kubenswrapper[7620]: [+]process-running ok Mar 18 09:02:11.483711 master-0 kubenswrapper[7620]: healthz check failed Mar 18 
09:02:11.485042 master-0 kubenswrapper[7620]: I0318 09:02:11.483718 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 09:02:12.483408 master-0 kubenswrapper[7620]: I0318 09:02:12.483295 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 09:02:12.483408 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld Mar 18 09:02:12.483408 master-0 kubenswrapper[7620]: [+]process-running ok Mar 18 09:02:12.483408 master-0 kubenswrapper[7620]: healthz check failed Mar 18 09:02:12.484554 master-0 kubenswrapper[7620]: I0318 09:02:12.483442 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 09:02:13.483625 master-0 kubenswrapper[7620]: I0318 09:02:13.483526 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 09:02:13.483625 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld Mar 18 09:02:13.483625 master-0 kubenswrapper[7620]: [+]process-running ok Mar 18 09:02:13.483625 master-0 kubenswrapper[7620]: healthz check failed Mar 18 09:02:13.484691 master-0 kubenswrapper[7620]: I0318 09:02:13.483638 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" 
podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 09:02:14.481770 master-0 kubenswrapper[7620]: I0318 09:02:14.481659 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 09:02:14.481770 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld Mar 18 09:02:14.481770 master-0 kubenswrapper[7620]: [+]process-running ok Mar 18 09:02:14.481770 master-0 kubenswrapper[7620]: healthz check failed Mar 18 09:02:14.481770 master-0 kubenswrapper[7620]: I0318 09:02:14.481752 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 09:02:15.224491 master-0 kubenswrapper[7620]: I0318 09:02:15.224412 7620 scope.go:117] "RemoveContainer" containerID="fe944915d18e348bfb79682afadf9c6819f22fab134c6c6c62f0a35f31f26a1f" Mar 18 09:02:15.225346 master-0 kubenswrapper[7620]: E0318 09:02:15.224949 7620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=ingress-operator pod=ingress-operator-66b84d69b-7h94d_openshift-ingress-operator(94e9e4fc-bfaa-4ba5-ad6d-fe76b91932e9)\"" pod="openshift-ingress-operator/ingress-operator-66b84d69b-7h94d" podUID="94e9e4fc-bfaa-4ba5-ad6d-fe76b91932e9" Mar 18 09:02:15.482670 master-0 kubenswrapper[7620]: I0318 09:02:15.482532 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" 
start-of-body=[-]backend-http failed: reason withheld Mar 18 09:02:15.482670 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld Mar 18 09:02:15.482670 master-0 kubenswrapper[7620]: [+]process-running ok Mar 18 09:02:15.482670 master-0 kubenswrapper[7620]: healthz check failed Mar 18 09:02:15.482670 master-0 kubenswrapper[7620]: I0318 09:02:15.482632 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 09:02:15.879743 master-0 kubenswrapper[7620]: E0318 09:02:15.879633 7620 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 18 09:02:16.481995 master-0 kubenswrapper[7620]: I0318 09:02:16.481906 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 09:02:16.481995 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld Mar 18 09:02:16.481995 master-0 kubenswrapper[7620]: [+]process-running ok Mar 18 09:02:16.481995 master-0 kubenswrapper[7620]: healthz check failed Mar 18 09:02:16.482990 master-0 kubenswrapper[7620]: I0318 09:02:16.482005 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 09:02:17.483769 master-0 kubenswrapper[7620]: I0318 09:02:17.483641 7620 patch_prober.go:28] interesting 
pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 09:02:17.483769 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld Mar 18 09:02:17.483769 master-0 kubenswrapper[7620]: [+]process-running ok Mar 18 09:02:17.483769 master-0 kubenswrapper[7620]: healthz check failed Mar 18 09:02:17.485018 master-0 kubenswrapper[7620]: I0318 09:02:17.483806 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 09:02:18.481962 master-0 kubenswrapper[7620]: I0318 09:02:18.481898 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 09:02:18.481962 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld Mar 18 09:02:18.481962 master-0 kubenswrapper[7620]: [+]process-running ok Mar 18 09:02:18.481962 master-0 kubenswrapper[7620]: healthz check failed Mar 18 09:02:18.482346 master-0 kubenswrapper[7620]: I0318 09:02:18.482005 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 09:02:19.482646 master-0 kubenswrapper[7620]: I0318 09:02:19.482560 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 
09:02:19.482646 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld Mar 18 09:02:19.482646 master-0 kubenswrapper[7620]: [+]process-running ok Mar 18 09:02:19.482646 master-0 kubenswrapper[7620]: healthz check failed Mar 18 09:02:19.484119 master-0 kubenswrapper[7620]: I0318 09:02:19.484062 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 09:02:20.483379 master-0 kubenswrapper[7620]: I0318 09:02:20.483239 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 09:02:20.483379 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld Mar 18 09:02:20.483379 master-0 kubenswrapper[7620]: [+]process-running ok Mar 18 09:02:20.483379 master-0 kubenswrapper[7620]: healthz check failed Mar 18 09:02:20.484671 master-0 kubenswrapper[7620]: I0318 09:02:20.483365 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 09:02:21.483267 master-0 kubenswrapper[7620]: I0318 09:02:21.483174 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 09:02:21.483267 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld Mar 18 09:02:21.483267 master-0 kubenswrapper[7620]: [+]process-running ok Mar 18 09:02:21.483267 master-0 kubenswrapper[7620]: healthz 
check failed Mar 18 09:02:21.485163 master-0 kubenswrapper[7620]: I0318 09:02:21.483288 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 09:02:22.224947 master-0 kubenswrapper[7620]: I0318 09:02:22.224825 7620 scope.go:117] "RemoveContainer" containerID="eb8c2b58df79128dda5fbfc30648d542ddf26d01723e229dbbb1234b6cbc0067" Mar 18 09:02:22.225254 master-0 kubenswrapper[7620]: E0318 09:02:22.225185 7620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=snapshot-controller pod=csi-snapshot-controller-64854d9cff-khm5n_openshift-cluster-storage-operator(29ba6765-61c9-4f78-8f44-570418000c5c)\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-64854d9cff-khm5n" podUID="29ba6765-61c9-4f78-8f44-570418000c5c" Mar 18 09:02:22.482158 master-0 kubenswrapper[7620]: I0318 09:02:22.481954 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 09:02:22.482158 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld Mar 18 09:02:22.482158 master-0 kubenswrapper[7620]: [+]process-running ok Mar 18 09:02:22.482158 master-0 kubenswrapper[7620]: healthz check failed Mar 18 09:02:22.482158 master-0 kubenswrapper[7620]: I0318 09:02:22.482066 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 09:02:23.482578 master-0 kubenswrapper[7620]: 
I0318 09:02:23.482464 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 09:02:23.482578 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld Mar 18 09:02:23.482578 master-0 kubenswrapper[7620]: [+]process-running ok Mar 18 09:02:23.482578 master-0 kubenswrapper[7620]: healthz check failed Mar 18 09:02:23.483384 master-0 kubenswrapper[7620]: I0318 09:02:23.482605 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 09:02:24.482021 master-0 kubenswrapper[7620]: I0318 09:02:24.481931 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 09:02:24.482021 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld Mar 18 09:02:24.482021 master-0 kubenswrapper[7620]: [+]process-running ok Mar 18 09:02:24.482021 master-0 kubenswrapper[7620]: healthz check failed Mar 18 09:02:24.482021 master-0 kubenswrapper[7620]: I0318 09:02:24.481997 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 09:02:25.482709 master-0 kubenswrapper[7620]: I0318 09:02:25.482648 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" 
start-of-body=[-]backend-http failed: reason withheld Mar 18 09:02:25.482709 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld Mar 18 09:02:25.482709 master-0 kubenswrapper[7620]: [+]process-running ok Mar 18 09:02:25.482709 master-0 kubenswrapper[7620]: healthz check failed Mar 18 09:02:25.483898 master-0 kubenswrapper[7620]: I0318 09:02:25.483816 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 09:02:25.880607 master-0 kubenswrapper[7620]: E0318 09:02:25.880519 7620 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 18 09:02:25.880607 master-0 kubenswrapper[7620]: E0318 09:02:25.880590 7620 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 18 09:02:26.481702 master-0 kubenswrapper[7620]: I0318 09:02:26.481594 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 09:02:26.481702 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld Mar 18 09:02:26.481702 master-0 kubenswrapper[7620]: [+]process-running ok Mar 18 09:02:26.481702 master-0 kubenswrapper[7620]: healthz check failed Mar 18 09:02:26.482153 master-0 kubenswrapper[7620]: I0318 09:02:26.481739 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" 
output="HTTP probe failed with statuscode: 500" Mar 18 09:02:27.225257 master-0 kubenswrapper[7620]: I0318 09:02:27.225172 7620 scope.go:117] "RemoveContainer" containerID="fe944915d18e348bfb79682afadf9c6819f22fab134c6c6c62f0a35f31f26a1f" Mar 18 09:02:27.226254 master-0 kubenswrapper[7620]: E0318 09:02:27.225602 7620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=ingress-operator pod=ingress-operator-66b84d69b-7h94d_openshift-ingress-operator(94e9e4fc-bfaa-4ba5-ad6d-fe76b91932e9)\"" pod="openshift-ingress-operator/ingress-operator-66b84d69b-7h94d" podUID="94e9e4fc-bfaa-4ba5-ad6d-fe76b91932e9" Mar 18 09:02:27.482723 master-0 kubenswrapper[7620]: I0318 09:02:27.482460 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 09:02:27.482723 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld Mar 18 09:02:27.482723 master-0 kubenswrapper[7620]: [+]process-running ok Mar 18 09:02:27.482723 master-0 kubenswrapper[7620]: healthz check failed Mar 18 09:02:27.482723 master-0 kubenswrapper[7620]: I0318 09:02:27.482555 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 09:02:28.484074 master-0 kubenswrapper[7620]: I0318 09:02:28.483938 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 09:02:28.484074 master-0 kubenswrapper[7620]: 
[-]has-synced failed: reason withheld Mar 18 09:02:28.484074 master-0 kubenswrapper[7620]: [+]process-running ok Mar 18 09:02:28.484074 master-0 kubenswrapper[7620]: healthz check failed Mar 18 09:02:28.484074 master-0 kubenswrapper[7620]: I0318 09:02:28.484046 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 09:02:28.810517 master-0 kubenswrapper[7620]: E0318 09:02:28.810266 7620 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{kube-controller-manager-master-0.189de3d14a4ff636 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-master-0,UID:c229b92d307e46237f6273edcc98d387,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b23c544d3894e5b31f66a18c554f03b0d29f92c2000c46b57b1c96da7ec25db9\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 08:58:36.28826783 +0000 UTC m=+580.283049622,LastTimestamp:2026-03-18 08:58:36.28826783 +0000 UTC m=+580.283049622,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 09:02:29.483525 master-0 kubenswrapper[7620]: I0318 09:02:29.483409 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason 
withheld Mar 18 09:02:29.483525 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld Mar 18 09:02:29.483525 master-0 kubenswrapper[7620]: [+]process-running ok Mar 18 09:02:29.483525 master-0 kubenswrapper[7620]: healthz check failed Mar 18 09:02:29.484022 master-0 kubenswrapper[7620]: I0318 09:02:29.483543 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 09:02:30.483314 master-0 kubenswrapper[7620]: I0318 09:02:30.483207 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 09:02:30.483314 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld Mar 18 09:02:30.483314 master-0 kubenswrapper[7620]: [+]process-running ok Mar 18 09:02:30.483314 master-0 kubenswrapper[7620]: healthz check failed Mar 18 09:02:30.484139 master-0 kubenswrapper[7620]: I0318 09:02:30.483338 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 09:02:31.483277 master-0 kubenswrapper[7620]: I0318 09:02:31.483195 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 09:02:31.483277 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld Mar 18 09:02:31.483277 master-0 kubenswrapper[7620]: [+]process-running ok Mar 18 09:02:31.483277 master-0 
kubenswrapper[7620]: healthz check failed Mar 18 09:02:31.484600 master-0 kubenswrapper[7620]: I0318 09:02:31.484547 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 09:02:32.483997 master-0 kubenswrapper[7620]: I0318 09:02:32.483232 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 09:02:32.483997 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld Mar 18 09:02:32.483997 master-0 kubenswrapper[7620]: [+]process-running ok Mar 18 09:02:32.483997 master-0 kubenswrapper[7620]: healthz check failed Mar 18 09:02:32.485723 master-0 kubenswrapper[7620]: I0318 09:02:32.484020 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 09:02:33.482409 master-0 kubenswrapper[7620]: I0318 09:02:33.482322 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 09:02:33.482409 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld Mar 18 09:02:33.482409 master-0 kubenswrapper[7620]: [+]process-running ok Mar 18 09:02:33.482409 master-0 kubenswrapper[7620]: healthz check failed Mar 18 09:02:33.482953 master-0 kubenswrapper[7620]: I0318 09:02:33.482427 7620 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 09:02:34.225265 master-0 kubenswrapper[7620]: I0318 09:02:34.225196 7620 scope.go:117] "RemoveContainer" containerID="eb8c2b58df79128dda5fbfc30648d542ddf26d01723e229dbbb1234b6cbc0067" Mar 18 09:02:34.226425 master-0 kubenswrapper[7620]: E0318 09:02:34.226354 7620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=snapshot-controller pod=csi-snapshot-controller-64854d9cff-khm5n_openshift-cluster-storage-operator(29ba6765-61c9-4f78-8f44-570418000c5c)\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-64854d9cff-khm5n" podUID="29ba6765-61c9-4f78-8f44-570418000c5c" Mar 18 09:02:34.481280 master-0 kubenswrapper[7620]: I0318 09:02:34.481062 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 09:02:34.481280 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld Mar 18 09:02:34.481280 master-0 kubenswrapper[7620]: [+]process-running ok Mar 18 09:02:34.481280 master-0 kubenswrapper[7620]: healthz check failed Mar 18 09:02:34.481280 master-0 kubenswrapper[7620]: I0318 09:02:34.481196 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 09:02:35.482236 master-0 kubenswrapper[7620]: I0318 09:02:35.482115 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: 
Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 09:02:35.482236 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld Mar 18 09:02:35.482236 master-0 kubenswrapper[7620]: [+]process-running ok Mar 18 09:02:35.482236 master-0 kubenswrapper[7620]: healthz check failed Mar 18 09:02:35.483578 master-0 kubenswrapper[7620]: I0318 09:02:35.482246 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 09:02:36.482503 master-0 kubenswrapper[7620]: I0318 09:02:36.482394 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 09:02:36.482503 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld Mar 18 09:02:36.482503 master-0 kubenswrapper[7620]: [+]process-running ok Mar 18 09:02:36.482503 master-0 kubenswrapper[7620]: healthz check failed Mar 18 09:02:36.482503 master-0 kubenswrapper[7620]: I0318 09:02:36.482502 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 09:02:37.482453 master-0 kubenswrapper[7620]: I0318 09:02:37.482376 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 09:02:37.482453 master-0 kubenswrapper[7620]: [-]has-synced failed: reason 
withheld Mar 18 09:02:37.482453 master-0 kubenswrapper[7620]: [+]process-running ok Mar 18 09:02:37.482453 master-0 kubenswrapper[7620]: healthz check failed Mar 18 09:02:37.483453 master-0 kubenswrapper[7620]: I0318 09:02:37.482511 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 09:02:38.225304 master-0 kubenswrapper[7620]: I0318 09:02:38.225225 7620 scope.go:117] "RemoveContainer" containerID="fe944915d18e348bfb79682afadf9c6819f22fab134c6c6c62f0a35f31f26a1f" Mar 18 09:02:38.225800 master-0 kubenswrapper[7620]: E0318 09:02:38.225572 7620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=ingress-operator pod=ingress-operator-66b84d69b-7h94d_openshift-ingress-operator(94e9e4fc-bfaa-4ba5-ad6d-fe76b91932e9)\"" pod="openshift-ingress-operator/ingress-operator-66b84d69b-7h94d" podUID="94e9e4fc-bfaa-4ba5-ad6d-fe76b91932e9" Mar 18 09:02:38.482083 master-0 kubenswrapper[7620]: I0318 09:02:38.481918 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 09:02:38.482083 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld Mar 18 09:02:38.482083 master-0 kubenswrapper[7620]: [+]process-running ok Mar 18 09:02:38.482083 master-0 kubenswrapper[7620]: healthz check failed Mar 18 09:02:38.482083 master-0 kubenswrapper[7620]: I0318 09:02:38.482020 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" 
probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 09:02:39.483013 master-0 kubenswrapper[7620]: I0318 09:02:39.482932 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 09:02:39.483013 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld Mar 18 09:02:39.483013 master-0 kubenswrapper[7620]: [+]process-running ok Mar 18 09:02:39.483013 master-0 kubenswrapper[7620]: healthz check failed Mar 18 09:02:39.484068 master-0 kubenswrapper[7620]: I0318 09:02:39.483040 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 09:02:39.912915 master-0 kubenswrapper[7620]: E0318 09:02:39.912820 7620 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 09:02:39.913366 master-0 kubenswrapper[7620]: I0318 09:02:39.913320 7620 scope.go:117] "RemoveContainer" containerID="25aa8e7a5fe1cd4cb308d45095cfc8ec891476603ff1037e70498c15fb355808" Mar 18 09:02:40.223888 master-0 kubenswrapper[7620]: I0318 09:02:40.223805 7620 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_c229b92d307e46237f6273edcc98d387/cluster-policy-controller/3.log" Mar 18 09:02:40.225637 master-0 kubenswrapper[7620]: I0318 09:02:40.225613 7620 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_c229b92d307e46237f6273edcc98d387/kube-controller-manager/0.log" Mar 18 
09:02:40.226083 master-0 kubenswrapper[7620]: I0318 09:02:40.226017 7620 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="c0c3df70-7d0a-4a46-a625-20553ab284d0" Mar 18 09:02:40.226083 master-0 kubenswrapper[7620]: I0318 09:02:40.226038 7620 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="c0c3df70-7d0a-4a46-a625-20553ab284d0" Mar 18 09:02:40.231302 master-0 kubenswrapper[7620]: I0318 09:02:40.231252 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"c229b92d307e46237f6273edcc98d387","Type":"ContainerStarted","Data":"83c47aaabc2b561d44e630d0889d72720d976ad68c17142beae85f320c2926a1"} Mar 18 09:02:40.481010 master-0 kubenswrapper[7620]: I0318 09:02:40.480837 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 09:02:40.481010 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld Mar 18 09:02:40.481010 master-0 kubenswrapper[7620]: [+]process-running ok Mar 18 09:02:40.481010 master-0 kubenswrapper[7620]: healthz check failed Mar 18 09:02:40.481010 master-0 kubenswrapper[7620]: I0318 09:02:40.480945 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 09:02:41.483229 master-0 kubenswrapper[7620]: I0318 09:02:41.483158 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" 
start-of-body=[-]backend-http failed: reason withheld Mar 18 09:02:41.483229 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld Mar 18 09:02:41.483229 master-0 kubenswrapper[7620]: [+]process-running ok Mar 18 09:02:41.483229 master-0 kubenswrapper[7620]: healthz check failed Mar 18 09:02:41.483820 master-0 kubenswrapper[7620]: I0318 09:02:41.483229 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 09:02:42.485036 master-0 kubenswrapper[7620]: I0318 09:02:42.484957 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 09:02:42.485036 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld Mar 18 09:02:42.485036 master-0 kubenswrapper[7620]: [+]process-running ok Mar 18 09:02:42.485036 master-0 kubenswrapper[7620]: healthz check failed Mar 18 09:02:42.486066 master-0 kubenswrapper[7620]: I0318 09:02:42.485089 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 09:02:43.483116 master-0 kubenswrapper[7620]: I0318 09:02:43.482910 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 09:02:43.483116 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld Mar 18 09:02:43.483116 master-0 kubenswrapper[7620]: [+]process-running ok 
Mar 18 09:02:43.483116 master-0 kubenswrapper[7620]: healthz check failed Mar 18 09:02:43.483116 master-0 kubenswrapper[7620]: I0318 09:02:43.483022 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 09:02:44.482728 master-0 kubenswrapper[7620]: I0318 09:02:44.482632 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 09:02:44.482728 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld Mar 18 09:02:44.482728 master-0 kubenswrapper[7620]: [+]process-running ok Mar 18 09:02:44.482728 master-0 kubenswrapper[7620]: healthz check failed Mar 18 09:02:44.484010 master-0 kubenswrapper[7620]: I0318 09:02:44.482754 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 09:02:45.482203 master-0 kubenswrapper[7620]: I0318 09:02:45.482106 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 09:02:45.482203 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld Mar 18 09:02:45.482203 master-0 kubenswrapper[7620]: [+]process-running ok Mar 18 09:02:45.482203 master-0 kubenswrapper[7620]: healthz check failed Mar 18 09:02:45.482561 master-0 kubenswrapper[7620]: I0318 09:02:45.482244 7620 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 09:02:46.074529 master-0 kubenswrapper[7620]: E0318 09:02:46.074408 7620 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-18T09:02:36Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-18T09:02:36Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-18T09:02:36Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-18T09:02:36Z\\\",\\\"type\\\":\\\"Ready\\\"}]}}\" for node \"master-0\": Patch \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0/status?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 18 09:02:46.252780 master-0 kubenswrapper[7620]: I0318 09:02:46.252719 7620 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 09:02:46.252780 master-0 kubenswrapper[7620]: I0318 09:02:46.252772 7620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 09:02:46.267530 master-0 kubenswrapper[7620]: I0318 09:02:46.267447 7620 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 09:02:46.497919 master-0 kubenswrapper[7620]: I0318 09:02:46.497811 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: 
Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 09:02:46.497919 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld Mar 18 09:02:46.497919 master-0 kubenswrapper[7620]: [+]process-running ok Mar 18 09:02:46.497919 master-0 kubenswrapper[7620]: healthz check failed Mar 18 09:02:46.498196 master-0 kubenswrapper[7620]: I0318 09:02:46.497952 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 09:02:47.482757 master-0 kubenswrapper[7620]: I0318 09:02:47.482598 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 09:02:47.482757 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld Mar 18 09:02:47.482757 master-0 kubenswrapper[7620]: [+]process-running ok Mar 18 09:02:47.482757 master-0 kubenswrapper[7620]: healthz check failed Mar 18 09:02:47.482757 master-0 kubenswrapper[7620]: I0318 09:02:47.482727 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 09:02:48.225958 master-0 kubenswrapper[7620]: I0318 09:02:48.223988 7620 scope.go:117] "RemoveContainer" containerID="eb8c2b58df79128dda5fbfc30648d542ddf26d01723e229dbbb1234b6cbc0067" Mar 18 09:02:48.483010 master-0 kubenswrapper[7620]: I0318 09:02:48.482797 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure 
output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 09:02:48.483010 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld Mar 18 09:02:48.483010 master-0 kubenswrapper[7620]: [+]process-running ok Mar 18 09:02:48.483010 master-0 kubenswrapper[7620]: healthz check failed Mar 18 09:02:48.483802 master-0 kubenswrapper[7620]: I0318 09:02:48.483762 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 09:02:49.303452 master-0 kubenswrapper[7620]: I0318 09:02:49.303396 7620 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-64854d9cff-khm5n_29ba6765-61c9-4f78-8f44-570418000c5c/snapshot-controller/3.log" Mar 18 09:02:49.303986 master-0 kubenswrapper[7620]: I0318 09:02:49.303843 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-64854d9cff-khm5n" event={"ID":"29ba6765-61c9-4f78-8f44-570418000c5c","Type":"ContainerStarted","Data":"9baee5ff4228e28ac078f6e9227047fd75b7a8b25b75fb8138ac5756a3bb414f"} Mar 18 09:02:49.483153 master-0 kubenswrapper[7620]: I0318 09:02:49.483029 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 09:02:49.483153 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld Mar 18 09:02:49.483153 master-0 kubenswrapper[7620]: [+]process-running ok Mar 18 09:02:49.483153 master-0 kubenswrapper[7620]: healthz check failed Mar 18 09:02:49.483153 master-0 kubenswrapper[7620]: I0318 09:02:49.483142 7620 prober.go:107] "Probe failed" 
probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 09:02:50.225328 master-0 kubenswrapper[7620]: I0318 09:02:50.225265 7620 scope.go:117] "RemoveContainer" containerID="fe944915d18e348bfb79682afadf9c6819f22fab134c6c6c62f0a35f31f26a1f" Mar 18 09:02:50.482274 master-0 kubenswrapper[7620]: I0318 09:02:50.482061 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 09:02:50.482274 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld Mar 18 09:02:50.482274 master-0 kubenswrapper[7620]: [+]process-running ok Mar 18 09:02:50.482274 master-0 kubenswrapper[7620]: healthz check failed Mar 18 09:02:50.482274 master-0 kubenswrapper[7620]: I0318 09:02:50.482166 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 09:02:51.325808 master-0 kubenswrapper[7620]: I0318 09:02:51.325752 7620 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-66b84d69b-7h94d_94e9e4fc-bfaa-4ba5-ad6d-fe76b91932e9/ingress-operator/4.log" Mar 18 09:02:51.327248 master-0 kubenswrapper[7620]: I0318 09:02:51.327200 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-66b84d69b-7h94d" event={"ID":"94e9e4fc-bfaa-4ba5-ad6d-fe76b91932e9","Type":"ContainerStarted","Data":"a1130207ab8ca367b6e63551a4bb5be6325dd36d7f7c1fe111a9533f6258e508"} Mar 18 09:02:51.482523 master-0 kubenswrapper[7620]: I0318 09:02:51.482413 7620 
patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 09:02:51.482523 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld Mar 18 09:02:51.482523 master-0 kubenswrapper[7620]: [+]process-running ok Mar 18 09:02:51.482523 master-0 kubenswrapper[7620]: healthz check failed Mar 18 09:02:51.482992 master-0 kubenswrapper[7620]: I0318 09:02:51.482527 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 09:02:51.736410 master-0 kubenswrapper[7620]: I0318 09:02:51.736350 7620 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 09:02:51.748980 master-0 kubenswrapper[7620]: I0318 09:02:51.748871 7620 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"] Mar 18 09:02:51.767021 master-0 kubenswrapper[7620]: I0318 09:02:51.766957 7620 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-1-retry-1-master-0"] Mar 18 09:02:51.767458 master-0 kubenswrapper[7620]: E0318 09:02:51.767425 7620 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="005a0b4c-8e2d-4483-98e9-55badf7099c5" containerName="installer" Mar 18 09:02:51.767523 master-0 kubenswrapper[7620]: I0318 09:02:51.767458 7620 state_mem.go:107] "Deleted CPUSet assignment" podUID="005a0b4c-8e2d-4483-98e9-55badf7099c5" containerName="installer" Mar 18 09:02:51.767523 master-0 kubenswrapper[7620]: E0318 09:02:51.767494 7620 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="68e6caf3-d855-483c-a37d-1010e522580e" containerName="installer"
Mar 18 09:02:51.767523 master-0 kubenswrapper[7620]: I0318 09:02:51.767509 7620 state_mem.go:107] "Deleted CPUSet assignment" podUID="68e6caf3-d855-483c-a37d-1010e522580e" containerName="installer"
Mar 18 09:02:51.767758 master-0 kubenswrapper[7620]: I0318 09:02:51.767730 7620 memory_manager.go:354] "RemoveStaleState removing state" podUID="005a0b4c-8e2d-4483-98e9-55badf7099c5" containerName="installer"
Mar 18 09:02:51.767831 master-0 kubenswrapper[7620]: I0318 09:02:51.767764 7620 memory_manager.go:354] "RemoveStaleState removing state" podUID="68e6caf3-d855-483c-a37d-1010e522580e" containerName="installer"
Mar 18 09:02:51.768555 master-0 kubenswrapper[7620]: I0318 09:02:51.768520 7620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-1-retry-1-master-0"
Mar 18 09:02:51.773382 master-0 kubenswrapper[7620]: I0318 09:02:51.773316 7620 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/installer-5-master-0"]
Mar 18 09:02:51.774743 master-0 kubenswrapper[7620]: I0318 09:02:51.774709 7620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-5-master-0"
Mar 18 09:02:51.782950 master-0 kubenswrapper[7620]: I0318 09:02:51.775400 7620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-6hjtj"
Mar 18 09:02:51.782950 master-0 kubenswrapper[7620]: I0318 09:02:51.777033 7620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler"/"installer-sa-dockercfg-j24rr"
Mar 18 09:02:51.782950 master-0 kubenswrapper[7620]: I0318 09:02:51.777677 7620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt"
Mar 18 09:02:51.782950 master-0 kubenswrapper[7620]: I0318 09:02:51.777902 7620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler"/"kube-root-ca.crt"
Mar 18 09:02:51.788540 master-0 kubenswrapper[7620]: I0318 09:02:51.788482 7620 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"]
Mar 18 09:02:51.801969 master-0 kubenswrapper[7620]: I0318 09:02:51.801591 7620 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-scheduler/installer-4-master-0"]
Mar 18 09:02:51.801969 master-0 kubenswrapper[7620]: I0318 09:02:51.801833 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/413623e6-3a24-40ab-a29e-50d81460ac59-var-lock\") pod \"installer-5-master-0\" (UID: \"413623e6-3a24-40ab-a29e-50d81460ac59\") " pod="openshift-kube-scheduler/installer-5-master-0"
Mar 18 09:02:51.801969 master-0 kubenswrapper[7620]: I0318 09:02:51.801917 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/413623e6-3a24-40ab-a29e-50d81460ac59-kubelet-dir\") pod \"installer-5-master-0\" (UID: \"413623e6-3a24-40ab-a29e-50d81460ac59\") " pod="openshift-kube-scheduler/installer-5-master-0"
Mar 18 09:02:51.801969 master-0 kubenswrapper[7620]: I0318 09:02:51.801945 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/bf6416cc-c8e8-4410-b3a4-059cbae52318-kubelet-dir\") pod \"installer-1-retry-1-master-0\" (UID: \"bf6416cc-c8e8-4410-b3a4-059cbae52318\") " pod="openshift-kube-apiserver/installer-1-retry-1-master-0"
Mar 18 09:02:51.801969 master-0 kubenswrapper[7620]: I0318 09:02:51.801981 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/bf6416cc-c8e8-4410-b3a4-059cbae52318-var-lock\") pod \"installer-1-retry-1-master-0\" (UID: \"bf6416cc-c8e8-4410-b3a4-059cbae52318\") " pod="openshift-kube-apiserver/installer-1-retry-1-master-0"
Mar 18 09:02:51.802419 master-0 kubenswrapper[7620]: I0318 09:02:51.802025 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bf6416cc-c8e8-4410-b3a4-059cbae52318-kube-api-access\") pod \"installer-1-retry-1-master-0\" (UID: \"bf6416cc-c8e8-4410-b3a4-059cbae52318\") " pod="openshift-kube-apiserver/installer-1-retry-1-master-0"
Mar 18 09:02:51.802419 master-0 kubenswrapper[7620]: I0318 09:02:51.802074 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/413623e6-3a24-40ab-a29e-50d81460ac59-kube-api-access\") pod \"installer-5-master-0\" (UID: \"413623e6-3a24-40ab-a29e-50d81460ac59\") " pod="openshift-kube-scheduler/installer-5-master-0"
Mar 18 09:02:51.814388 master-0 kubenswrapper[7620]: I0318 09:02:51.814313 7620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-1-retry-1-master-0"]
Mar 18 09:02:51.817964 master-0 kubenswrapper[7620]: I0318 09:02:51.817916 7620 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-scheduler/installer-4-master-0"]
Mar 18 09:02:51.822669 master-0 kubenswrapper[7620]: I0318 09:02:51.822529 7620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-5-master-0"]
Mar 18 09:02:51.825324 master-0 kubenswrapper[7620]: I0318 09:02:51.825274 7620 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"]
Mar 18 09:02:51.862424 master-0 kubenswrapper[7620]: I0318 09:02:51.862163 7620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podStartSLOduration=0.862139448 podStartE2EDuration="862.139448ms" podCreationTimestamp="2026-03-18 09:02:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 09:02:51.858339097 +0000 UTC m=+835.853120859" watchObservedRunningTime="2026-03-18 09:02:51.862139448 +0000 UTC m=+835.856921200"
Mar 18 09:02:51.910910 master-0 kubenswrapper[7620]: I0318 09:02:51.905264 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/413623e6-3a24-40ab-a29e-50d81460ac59-var-lock\") pod \"installer-5-master-0\" (UID: \"413623e6-3a24-40ab-a29e-50d81460ac59\") " pod="openshift-kube-scheduler/installer-5-master-0"
Mar 18 09:02:51.910910 master-0 kubenswrapper[7620]: I0318 09:02:51.905331 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/413623e6-3a24-40ab-a29e-50d81460ac59-kubelet-dir\") pod \"installer-5-master-0\" (UID: \"413623e6-3a24-40ab-a29e-50d81460ac59\") " pod="openshift-kube-scheduler/installer-5-master-0"
Mar 18 09:02:51.910910 master-0 kubenswrapper[7620]: I0318 09:02:51.905357 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/bf6416cc-c8e8-4410-b3a4-059cbae52318-kubelet-dir\") pod \"installer-1-retry-1-master-0\" (UID: \"bf6416cc-c8e8-4410-b3a4-059cbae52318\") " pod="openshift-kube-apiserver/installer-1-retry-1-master-0"
Mar 18 09:02:51.910910 master-0 kubenswrapper[7620]: I0318 09:02:51.905383 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/bf6416cc-c8e8-4410-b3a4-059cbae52318-var-lock\") pod \"installer-1-retry-1-master-0\" (UID: \"bf6416cc-c8e8-4410-b3a4-059cbae52318\") " pod="openshift-kube-apiserver/installer-1-retry-1-master-0"
Mar 18 09:02:51.910910 master-0 kubenswrapper[7620]: I0318 09:02:51.905419 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bf6416cc-c8e8-4410-b3a4-059cbae52318-kube-api-access\") pod \"installer-1-retry-1-master-0\" (UID: \"bf6416cc-c8e8-4410-b3a4-059cbae52318\") " pod="openshift-kube-apiserver/installer-1-retry-1-master-0"
Mar 18 09:02:51.910910 master-0 kubenswrapper[7620]: I0318 09:02:51.905461 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/413623e6-3a24-40ab-a29e-50d81460ac59-kube-api-access\") pod \"installer-5-master-0\" (UID: \"413623e6-3a24-40ab-a29e-50d81460ac59\") " pod="openshift-kube-scheduler/installer-5-master-0"
Mar 18 09:02:51.910910 master-0 kubenswrapper[7620]: I0318 09:02:51.905838 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/413623e6-3a24-40ab-a29e-50d81460ac59-var-lock\") pod \"installer-5-master-0\" (UID: \"413623e6-3a24-40ab-a29e-50d81460ac59\") " pod="openshift-kube-scheduler/installer-5-master-0"
Mar 18 09:02:51.910910 master-0 kubenswrapper[7620]: I0318 09:02:51.905900 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/413623e6-3a24-40ab-a29e-50d81460ac59-kubelet-dir\") pod \"installer-5-master-0\" (UID: \"413623e6-3a24-40ab-a29e-50d81460ac59\") " pod="openshift-kube-scheduler/installer-5-master-0"
Mar 18 09:02:51.910910 master-0 kubenswrapper[7620]: I0318 09:02:51.905929 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/bf6416cc-c8e8-4410-b3a4-059cbae52318-kubelet-dir\") pod \"installer-1-retry-1-master-0\" (UID: \"bf6416cc-c8e8-4410-b3a4-059cbae52318\") " pod="openshift-kube-apiserver/installer-1-retry-1-master-0"
Mar 18 09:02:51.910910 master-0 kubenswrapper[7620]: I0318 09:02:51.905954 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/bf6416cc-c8e8-4410-b3a4-059cbae52318-var-lock\") pod \"installer-1-retry-1-master-0\" (UID: \"bf6416cc-c8e8-4410-b3a4-059cbae52318\") " pod="openshift-kube-apiserver/installer-1-retry-1-master-0"
Mar 18 09:02:51.926011 master-0 kubenswrapper[7620]: I0318 09:02:51.925624 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/413623e6-3a24-40ab-a29e-50d81460ac59-kube-api-access\") pod \"installer-5-master-0\" (UID: \"413623e6-3a24-40ab-a29e-50d81460ac59\") " pod="openshift-kube-scheduler/installer-5-master-0"
Mar 18 09:02:51.930094 master-0 kubenswrapper[7620]: I0318 09:02:51.930045 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bf6416cc-c8e8-4410-b3a4-059cbae52318-kube-api-access\") pod \"installer-1-retry-1-master-0\" (UID: \"bf6416cc-c8e8-4410-b3a4-059cbae52318\") " pod="openshift-kube-apiserver/installer-1-retry-1-master-0"
Mar 18 09:02:52.117758 master-0 kubenswrapper[7620]: I0318 09:02:52.117607 7620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-1-retry-1-master-0"
Mar 18 09:02:52.141535 master-0 kubenswrapper[7620]: I0318 09:02:52.141445 7620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-5-master-0"
Mar 18 09:02:52.243885 master-0 kubenswrapper[7620]: I0318 09:02:52.242457 7620 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="68e6caf3-d855-483c-a37d-1010e522580e" path="/var/lib/kubelet/pods/68e6caf3-d855-483c-a37d-1010e522580e/volumes"
Mar 18 09:02:52.333686 master-0 kubenswrapper[7620]: I0318 09:02:52.333624 7620 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="c0c3df70-7d0a-4a46-a625-20553ab284d0"
Mar 18 09:02:52.333686 master-0 kubenswrapper[7620]: I0318 09:02:52.333674 7620 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="c0c3df70-7d0a-4a46-a625-20553ab284d0"
Mar 18 09:02:52.338001 master-0 kubenswrapper[7620]: I0318 09:02:52.337963 7620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 18 09:02:52.505885 master-0 kubenswrapper[7620]: I0318 09:02:52.505130 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 09:02:52.505885 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld
Mar 18 09:02:52.505885 master-0 kubenswrapper[7620]: [+]process-running ok
Mar 18 09:02:52.505885 master-0 kubenswrapper[7620]: healthz check failed
Mar 18 09:02:52.505885 master-0 kubenswrapper[7620]: I0318 09:02:52.505202 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 09:02:52.581940 master-0 kubenswrapper[7620]: I0318 09:02:52.581899 7620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-1-retry-1-master-0"]
Mar 18 09:02:52.638252 master-0 kubenswrapper[7620]: I0318 09:02:52.638204 7620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-5-master-0"]
Mar 18 09:02:52.645707 master-0 kubenswrapper[7620]: W0318 09:02:52.645654 7620 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod413623e6_3a24_40ab_a29e_50d81460ac59.slice/crio-5a6cbcd0fa2790b1e8ee2482363ca9ea6eb143ed0218070ab5add49f6480f124 WatchSource:0}: Error finding container 5a6cbcd0fa2790b1e8ee2482363ca9ea6eb143ed0218070ab5add49f6480f124: Status 404 returned error can't find the container with id 5a6cbcd0fa2790b1e8ee2482363ca9ea6eb143ed0218070ab5add49f6480f124
Mar 18 09:02:53.342916 master-0 kubenswrapper[7620]: I0318 09:02:53.342800 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-5-master-0" event={"ID":"413623e6-3a24-40ab-a29e-50d81460ac59","Type":"ContainerStarted","Data":"5ba86e745cd745b1e728d992504dc4a54d5125fe186e32984eafba07ece2c051"}
Mar 18 09:02:53.342916 master-0 kubenswrapper[7620]: I0318 09:02:53.342896 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-5-master-0" event={"ID":"413623e6-3a24-40ab-a29e-50d81460ac59","Type":"ContainerStarted","Data":"5a6cbcd0fa2790b1e8ee2482363ca9ea6eb143ed0218070ab5add49f6480f124"}
Mar 18 09:02:53.345761 master-0 kubenswrapper[7620]: I0318 09:02:53.345693 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-1-retry-1-master-0" event={"ID":"bf6416cc-c8e8-4410-b3a4-059cbae52318","Type":"ContainerStarted","Data":"a5d3e293f0e07f0caa47f3f9d63b14bb2abfe09f9120da1a6bb52790dd03eff7"}
Mar 18 09:02:53.345949 master-0 kubenswrapper[7620]: I0318 09:02:53.345772 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-1-retry-1-master-0" event={"ID":"bf6416cc-c8e8-4410-b3a4-059cbae52318","Type":"ContainerStarted","Data":"8dfccb5df026a842d6730584e021a0974d4c32cdf41d8385c44ddfa1757664a3"}
Mar 18 09:02:53.365434 master-0 kubenswrapper[7620]: I0318 09:02:53.365339 7620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/installer-5-master-0" podStartSLOduration=2.365316405 podStartE2EDuration="2.365316405s" podCreationTimestamp="2026-03-18 09:02:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 09:02:53.362695505 +0000 UTC m=+837.357477287" watchObservedRunningTime="2026-03-18 09:02:53.365316405 +0000 UTC m=+837.360098187"
Mar 18 09:02:53.395322 master-0 kubenswrapper[7620]: I0318 09:02:53.395210 7620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-1-retry-1-master-0" podStartSLOduration=2.395183371 podStartE2EDuration="2.395183371s" podCreationTimestamp="2026-03-18 09:02:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 09:02:53.392105109 +0000 UTC m=+837.386886871" watchObservedRunningTime="2026-03-18 09:02:53.395183371 +0000 UTC m=+837.389965133"
Mar 18 09:02:53.481520 master-0 kubenswrapper[7620]: I0318 09:02:53.481444 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 09:02:53.481520 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld
Mar 18 09:02:53.481520 master-0 kubenswrapper[7620]: [+]process-running ok
Mar 18 09:02:53.481520 master-0 kubenswrapper[7620]: healthz check failed
Mar 18 09:02:53.481520 master-0 kubenswrapper[7620]: I0318 09:02:53.481514 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 09:02:54.481182 master-0 kubenswrapper[7620]: I0318 09:02:54.481119 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 09:02:54.481182 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld
Mar 18 09:02:54.481182 master-0 kubenswrapper[7620]: [+]process-running ok
Mar 18 09:02:54.481182 master-0 kubenswrapper[7620]: healthz check failed
Mar 18 09:02:54.481880 master-0 kubenswrapper[7620]: I0318 09:02:54.481207 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 09:02:54.686943 master-0 kubenswrapper[7620]: I0318 09:02:54.684107 7620 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/installer-1-retry-1-master-0"]
Mar 18 09:02:55.367206 master-0 kubenswrapper[7620]: I0318 09:02:55.367046 7620 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/installer-1-retry-1-master-0" podUID="bf6416cc-c8e8-4410-b3a4-059cbae52318" containerName="installer" containerID="cri-o://a5d3e293f0e07f0caa47f3f9d63b14bb2abfe09f9120da1a6bb52790dd03eff7" gracePeriod=30
Mar 18 09:02:55.481933 master-0 kubenswrapper[7620]: I0318 09:02:55.481807 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 09:02:55.481933 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld
Mar 18 09:02:55.481933 master-0 kubenswrapper[7620]: [+]process-running ok
Mar 18 09:02:55.481933 master-0 kubenswrapper[7620]: healthz check failed
Mar 18 09:02:55.483168 master-0 kubenswrapper[7620]: I0318 09:02:55.481963 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 09:02:56.481758 master-0 kubenswrapper[7620]: I0318 09:02:56.481683 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 09:02:56.481758 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld
Mar 18 09:02:56.481758 master-0 kubenswrapper[7620]: [+]process-running ok
Mar 18 09:02:56.481758 master-0 kubenswrapper[7620]: healthz check failed
Mar 18 09:02:56.482467 master-0 kubenswrapper[7620]: I0318 09:02:56.481773 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 09:02:57.482014 master-0 kubenswrapper[7620]: I0318 09:02:57.481953 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 09:02:57.482014 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld
Mar 18 09:02:57.482014 master-0 kubenswrapper[7620]: [+]process-running ok
Mar 18 09:02:57.482014 master-0 kubenswrapper[7620]: healthz check failed
Mar 18 09:02:57.483025 master-0 kubenswrapper[7620]: I0318 09:02:57.482025 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 09:02:58.481563 master-0 kubenswrapper[7620]: I0318 09:02:58.481491 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 09:02:58.481563 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld
Mar 18 09:02:58.481563 master-0 kubenswrapper[7620]: [+]process-running ok
Mar 18 09:02:58.481563 master-0 kubenswrapper[7620]: healthz check failed
Mar 18 09:02:58.482068 master-0 kubenswrapper[7620]: I0318 09:02:58.481570 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 09:02:58.689045 master-0 kubenswrapper[7620]: I0318 09:02:58.688951 7620 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-2-master-0"]
Mar 18 09:02:58.691062 master-0 kubenswrapper[7620]: I0318 09:02:58.691009 7620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-2-master-0"
Mar 18 09:02:58.703369 master-0 kubenswrapper[7620]: I0318 09:02:58.703279 7620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-2-master-0"]
Mar 18 09:02:58.848210 master-0 kubenswrapper[7620]: I0318 09:02:58.848133 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/aaa22a87-c335-44b4-9ac7-ca3950b73051-var-lock\") pod \"installer-2-master-0\" (UID: \"aaa22a87-c335-44b4-9ac7-ca3950b73051\") " pod="openshift-kube-apiserver/installer-2-master-0"
Mar 18 09:02:58.848210 master-0 kubenswrapper[7620]: I0318 09:02:58.848195 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/aaa22a87-c335-44b4-9ac7-ca3950b73051-kube-api-access\") pod \"installer-2-master-0\" (UID: \"aaa22a87-c335-44b4-9ac7-ca3950b73051\") " pod="openshift-kube-apiserver/installer-2-master-0"
Mar 18 09:02:58.848210 master-0 kubenswrapper[7620]: I0318 09:02:58.848228 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/aaa22a87-c335-44b4-9ac7-ca3950b73051-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"aaa22a87-c335-44b4-9ac7-ca3950b73051\") " pod="openshift-kube-apiserver/installer-2-master-0"
Mar 18 09:02:58.950547 master-0 kubenswrapper[7620]: I0318 09:02:58.950452 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/aaa22a87-c335-44b4-9ac7-ca3950b73051-var-lock\") pod \"installer-2-master-0\" (UID: \"aaa22a87-c335-44b4-9ac7-ca3950b73051\") " pod="openshift-kube-apiserver/installer-2-master-0"
Mar 18 09:02:58.950547 master-0 kubenswrapper[7620]: I0318 09:02:58.950554 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/aaa22a87-c335-44b4-9ac7-ca3950b73051-kube-api-access\") pod \"installer-2-master-0\" (UID: \"aaa22a87-c335-44b4-9ac7-ca3950b73051\") " pod="openshift-kube-apiserver/installer-2-master-0"
Mar 18 09:02:58.951018 master-0 kubenswrapper[7620]: I0318 09:02:58.950623 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/aaa22a87-c335-44b4-9ac7-ca3950b73051-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"aaa22a87-c335-44b4-9ac7-ca3950b73051\") " pod="openshift-kube-apiserver/installer-2-master-0"
Mar 18 09:02:58.951018 master-0 kubenswrapper[7620]: I0318 09:02:58.950689 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/aaa22a87-c335-44b4-9ac7-ca3950b73051-var-lock\") pod \"installer-2-master-0\" (UID: \"aaa22a87-c335-44b4-9ac7-ca3950b73051\") " pod="openshift-kube-apiserver/installer-2-master-0"
Mar 18 09:02:58.951018 master-0 kubenswrapper[7620]: I0318 09:02:58.950820 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/aaa22a87-c335-44b4-9ac7-ca3950b73051-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"aaa22a87-c335-44b4-9ac7-ca3950b73051\") " pod="openshift-kube-apiserver/installer-2-master-0"
Mar 18 09:02:58.984131 master-0 kubenswrapper[7620]: I0318 09:02:58.984057 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/aaa22a87-c335-44b4-9ac7-ca3950b73051-kube-api-access\") pod \"installer-2-master-0\" (UID: \"aaa22a87-c335-44b4-9ac7-ca3950b73051\") " pod="openshift-kube-apiserver/installer-2-master-0"
Mar 18 09:02:59.044415 master-0 kubenswrapper[7620]: I0318 09:02:59.044305 7620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-2-master-0"
Mar 18 09:02:59.505023 master-0 kubenswrapper[7620]: I0318 09:02:59.504937 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 09:02:59.505023 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld
Mar 18 09:02:59.505023 master-0 kubenswrapper[7620]: [+]process-running ok
Mar 18 09:02:59.505023 master-0 kubenswrapper[7620]: healthz check failed
Mar 18 09:02:59.507593 master-0 kubenswrapper[7620]: I0318 09:02:59.505070 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 09:02:59.569749 master-0 kubenswrapper[7620]: I0318 09:02:59.569690 7620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-2-master-0"]
Mar 18 09:03:00.418510 master-0 kubenswrapper[7620]: I0318 09:03:00.418312 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-2-master-0" event={"ID":"aaa22a87-c335-44b4-9ac7-ca3950b73051","Type":"ContainerStarted","Data":"3a499d029033082ed55907fa1ab183ba0ca048dfcf81859f7cf8f3841abe4c84"}
Mar 18 09:03:00.418510 master-0 kubenswrapper[7620]: I0318 09:03:00.418406 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-2-master-0" event={"ID":"aaa22a87-c335-44b4-9ac7-ca3950b73051","Type":"ContainerStarted","Data":"bf60e231f9820cef3fddc5cfff553adf8aba71848c7e1fa5ad63253a445667eb"}
Mar 18 09:03:00.450737 master-0 kubenswrapper[7620]: I0318 09:03:00.450625 7620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-2-master-0" podStartSLOduration=2.450590938 podStartE2EDuration="2.450590938s" podCreationTimestamp="2026-03-18 09:02:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 09:03:00.439972575 +0000 UTC m=+844.434754417" watchObservedRunningTime="2026-03-18 09:03:00.450590938 +0000 UTC m=+844.445372730"
Mar 18 09:03:00.482320 master-0 kubenswrapper[7620]: I0318 09:03:00.482203 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 09:03:00.482320 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld
Mar 18 09:03:00.482320 master-0 kubenswrapper[7620]: [+]process-running ok
Mar 18 09:03:00.482320 master-0 kubenswrapper[7620]: healthz check failed
Mar 18 09:03:00.482791 master-0 kubenswrapper[7620]: I0318 09:03:00.482459 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 09:03:01.484733 master-0 kubenswrapper[7620]: I0318 09:03:01.484680 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 09:03:01.484733 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld
Mar 18 09:03:01.484733 master-0 kubenswrapper[7620]: [+]process-running ok
Mar 18 09:03:01.484733 master-0 kubenswrapper[7620]: healthz check failed
Mar 18 09:03:01.485414 master-0 kubenswrapper[7620]: I0318 09:03:01.484747 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 09:03:02.481242 master-0 kubenswrapper[7620]: I0318 09:03:02.481174 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 09:03:02.481242 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld
Mar 18 09:03:02.481242 master-0 kubenswrapper[7620]: [+]process-running ok
Mar 18 09:03:02.481242 master-0 kubenswrapper[7620]: healthz check failed
Mar 18 09:03:02.481779 master-0 kubenswrapper[7620]: I0318 09:03:02.481275 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 09:03:03.482196 master-0 kubenswrapper[7620]: I0318 09:03:03.482129 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 09:03:03.482196 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld
Mar 18 09:03:03.482196 master-0 kubenswrapper[7620]: [+]process-running ok
Mar 18 09:03:03.482196 master-0 kubenswrapper[7620]: healthz check failed
Mar 18 09:03:03.483243 master-0 kubenswrapper[7620]: I0318 09:03:03.482209 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 09:03:04.481963 master-0 kubenswrapper[7620]: I0318 09:03:04.481882 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 09:03:04.481963 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld
Mar 18 09:03:04.481963 master-0 kubenswrapper[7620]: [+]process-running ok
Mar 18 09:03:04.481963 master-0 kubenswrapper[7620]: healthz check failed
Mar 18 09:03:04.482576 master-0 kubenswrapper[7620]: I0318 09:03:04.481981 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 09:03:05.481329 master-0 kubenswrapper[7620]: I0318 09:03:05.481263 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 09:03:05.481329 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld
Mar 18 09:03:05.481329 master-0 kubenswrapper[7620]: [+]process-running ok
Mar 18 09:03:05.481329 master-0 kubenswrapper[7620]: healthz check failed
Mar 18 09:03:05.481329 master-0 kubenswrapper[7620]: I0318 09:03:05.481348 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 09:03:06.480896 master-0 kubenswrapper[7620]: I0318 09:03:06.480803 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 09:03:06.480896 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld
Mar 18 09:03:06.480896 master-0 kubenswrapper[7620]: [+]process-running ok
Mar 18 09:03:06.480896 master-0 kubenswrapper[7620]: healthz check failed
Mar 18 09:03:06.481880 master-0 kubenswrapper[7620]: I0318 09:03:06.480920 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 09:03:07.482560 master-0 kubenswrapper[7620]: I0318 09:03:07.482438 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 09:03:07.482560 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld
Mar 18 09:03:07.482560 master-0 kubenswrapper[7620]: [+]process-running ok
Mar 18 09:03:07.482560 master-0 kubenswrapper[7620]: healthz check failed
Mar 18 09:03:07.482560 master-0 kubenswrapper[7620]: I0318 09:03:07.482534 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 09:03:07.826556 master-0 kubenswrapper[7620]: I0318 09:03:07.825518 7620 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-scheduler/installer-5-master-0"]
Mar 18 09:03:07.826556 master-0 kubenswrapper[7620]: I0318 09:03:07.826103 7620 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-scheduler/installer-5-master-0" podUID="413623e6-3a24-40ab-a29e-50d81460ac59" containerName="installer" containerID="cri-o://5ba86e745cd745b1e728d992504dc4a54d5125fe186e32984eafba07ece2c051" gracePeriod=30
Mar 18 09:03:08.483774 master-0 kubenswrapper[7620]: I0318 09:03:08.483598 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 09:03:08.483774 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld
Mar 18 09:03:08.483774 master-0 kubenswrapper[7620]: [+]process-running ok
Mar 18 09:03:08.483774 master-0 kubenswrapper[7620]: healthz check failed
Mar 18 09:03:08.485080 master-0 kubenswrapper[7620]: I0318 09:03:08.483799 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 09:03:08.860093 master-0 kubenswrapper[7620]: I0318 09:03:08.859883 7620 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/installer-4-master-0"]
Mar 18 09:03:08.861451 master-0 kubenswrapper[7620]: I0318 09:03:08.861411 7620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-4-master-0"
Mar 18 09:03:08.864087 master-0 kubenswrapper[7620]: I0318 09:03:08.864018 7620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt"
Mar 18 09:03:08.864593 master-0 kubenswrapper[7620]: I0318 09:03:08.864566 7620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-v6k2v"
Mar 18 09:03:08.877467 master-0 kubenswrapper[7620]: I0318 09:03:08.877392 7620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-4-master-0"]
Mar 18 09:03:08.962263 master-0 kubenswrapper[7620]: I0318 09:03:08.962156 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3068e569-5a4e-4fc3-88f4-5684d093c8e6-kubelet-dir\") pod \"installer-4-master-0\" (UID: \"3068e569-5a4e-4fc3-88f4-5684d093c8e6\") " pod="openshift-kube-controller-manager/installer-4-master-0"
Mar 18 09:03:08.962263 master-0 kubenswrapper[7620]: I0318 09:03:08.962260 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3068e569-5a4e-4fc3-88f4-5684d093c8e6-kube-api-access\") pod \"installer-4-master-0\" (UID: \"3068e569-5a4e-4fc3-88f4-5684d093c8e6\") " pod="openshift-kube-controller-manager/installer-4-master-0"
Mar 18 09:03:08.962732 master-0 kubenswrapper[7620]: I0318 09:03:08.962656 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/3068e569-5a4e-4fc3-88f4-5684d093c8e6-var-lock\") pod \"installer-4-master-0\" (UID: \"3068e569-5a4e-4fc3-88f4-5684d093c8e6\") " pod="openshift-kube-controller-manager/installer-4-master-0"
Mar 18 09:03:09.063696 master-0
kubenswrapper[7620]: I0318 09:03:09.063605 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3068e569-5a4e-4fc3-88f4-5684d093c8e6-kube-api-access\") pod \"installer-4-master-0\" (UID: \"3068e569-5a4e-4fc3-88f4-5684d093c8e6\") " pod="openshift-kube-controller-manager/installer-4-master-0" Mar 18 09:03:09.064052 master-0 kubenswrapper[7620]: I0318 09:03:09.063808 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/3068e569-5a4e-4fc3-88f4-5684d093c8e6-var-lock\") pod \"installer-4-master-0\" (UID: \"3068e569-5a4e-4fc3-88f4-5684d093c8e6\") " pod="openshift-kube-controller-manager/installer-4-master-0" Mar 18 09:03:09.064113 master-0 kubenswrapper[7620]: I0318 09:03:09.064071 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3068e569-5a4e-4fc3-88f4-5684d093c8e6-kubelet-dir\") pod \"installer-4-master-0\" (UID: \"3068e569-5a4e-4fc3-88f4-5684d093c8e6\") " pod="openshift-kube-controller-manager/installer-4-master-0" Mar 18 09:03:09.064160 master-0 kubenswrapper[7620]: I0318 09:03:09.064120 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/3068e569-5a4e-4fc3-88f4-5684d093c8e6-var-lock\") pod \"installer-4-master-0\" (UID: \"3068e569-5a4e-4fc3-88f4-5684d093c8e6\") " pod="openshift-kube-controller-manager/installer-4-master-0" Mar 18 09:03:09.064249 master-0 kubenswrapper[7620]: I0318 09:03:09.064193 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3068e569-5a4e-4fc3-88f4-5684d093c8e6-kubelet-dir\") pod \"installer-4-master-0\" (UID: \"3068e569-5a4e-4fc3-88f4-5684d093c8e6\") " pod="openshift-kube-controller-manager/installer-4-master-0" Mar 18 09:03:09.097709 master-0 
kubenswrapper[7620]: I0318 09:03:09.097608 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3068e569-5a4e-4fc3-88f4-5684d093c8e6-kube-api-access\") pod \"installer-4-master-0\" (UID: \"3068e569-5a4e-4fc3-88f4-5684d093c8e6\") " pod="openshift-kube-controller-manager/installer-4-master-0" Mar 18 09:03:09.202880 master-0 kubenswrapper[7620]: I0318 09:03:09.202585 7620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-4-master-0" Mar 18 09:03:09.487593 master-0 kubenswrapper[7620]: I0318 09:03:09.487463 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 09:03:09.487593 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld Mar 18 09:03:09.487593 master-0 kubenswrapper[7620]: [+]process-running ok Mar 18 09:03:09.487593 master-0 kubenswrapper[7620]: healthz check failed Mar 18 09:03:09.488895 master-0 kubenswrapper[7620]: I0318 09:03:09.488716 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 09:03:09.752915 master-0 kubenswrapper[7620]: I0318 09:03:09.752846 7620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-4-master-0"] Mar 18 09:03:09.757467 master-0 kubenswrapper[7620]: W0318 09:03:09.757354 7620 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod3068e569_5a4e_4fc3_88f4_5684d093c8e6.slice/crio-564cb8426369721ba7067b6ba1d2db58be0d2b7219cd8ee2b9c066b14b29b589 WatchSource:0}: Error finding container 
564cb8426369721ba7067b6ba1d2db58be0d2b7219cd8ee2b9c066b14b29b589: Status 404 returned error can't find the container with id 564cb8426369721ba7067b6ba1d2db58be0d2b7219cd8ee2b9c066b14b29b589 Mar 18 09:03:10.038170 master-0 kubenswrapper[7620]: I0318 09:03:10.034966 7620 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/revision-pruner-6-master-0"] Mar 18 09:03:10.038170 master-0 kubenswrapper[7620]: I0318 09:03:10.035939 7620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/revision-pruner-6-master-0" Mar 18 09:03:10.045333 master-0 kubenswrapper[7620]: I0318 09:03:10.045263 7620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/revision-pruner-6-master-0"] Mar 18 09:03:10.095683 master-0 kubenswrapper[7620]: I0318 09:03:10.095594 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0fcf6eb0-f4dd-41dd-86ee-9bcb9996546d-kube-api-access\") pod \"revision-pruner-6-master-0\" (UID: \"0fcf6eb0-f4dd-41dd-86ee-9bcb9996546d\") " pod="openshift-kube-scheduler/revision-pruner-6-master-0" Mar 18 09:03:10.096066 master-0 kubenswrapper[7620]: I0318 09:03:10.095752 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0fcf6eb0-f4dd-41dd-86ee-9bcb9996546d-kubelet-dir\") pod \"revision-pruner-6-master-0\" (UID: \"0fcf6eb0-f4dd-41dd-86ee-9bcb9996546d\") " pod="openshift-kube-scheduler/revision-pruner-6-master-0" Mar 18 09:03:10.197139 master-0 kubenswrapper[7620]: I0318 09:03:10.197073 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0fcf6eb0-f4dd-41dd-86ee-9bcb9996546d-kube-api-access\") pod \"revision-pruner-6-master-0\" (UID: \"0fcf6eb0-f4dd-41dd-86ee-9bcb9996546d\") " 
pod="openshift-kube-scheduler/revision-pruner-6-master-0" Mar 18 09:03:10.197487 master-0 kubenswrapper[7620]: I0318 09:03:10.197395 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0fcf6eb0-f4dd-41dd-86ee-9bcb9996546d-kubelet-dir\") pod \"revision-pruner-6-master-0\" (UID: \"0fcf6eb0-f4dd-41dd-86ee-9bcb9996546d\") " pod="openshift-kube-scheduler/revision-pruner-6-master-0" Mar 18 09:03:10.197562 master-0 kubenswrapper[7620]: I0318 09:03:10.197508 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0fcf6eb0-f4dd-41dd-86ee-9bcb9996546d-kubelet-dir\") pod \"revision-pruner-6-master-0\" (UID: \"0fcf6eb0-f4dd-41dd-86ee-9bcb9996546d\") " pod="openshift-kube-scheduler/revision-pruner-6-master-0" Mar 18 09:03:10.213298 master-0 kubenswrapper[7620]: I0318 09:03:10.213204 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0fcf6eb0-f4dd-41dd-86ee-9bcb9996546d-kube-api-access\") pod \"revision-pruner-6-master-0\" (UID: \"0fcf6eb0-f4dd-41dd-86ee-9bcb9996546d\") " pod="openshift-kube-scheduler/revision-pruner-6-master-0" Mar 18 09:03:10.250546 master-0 kubenswrapper[7620]: I0318 09:03:10.249971 7620 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/installer-6-master-0"] Mar 18 09:03:10.256824 master-0 kubenswrapper[7620]: I0318 09:03:10.256773 7620 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/installer-6-master-0" Mar 18 09:03:10.274256 master-0 kubenswrapper[7620]: I0318 09:03:10.274154 7620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-6-master-0"] Mar 18 09:03:10.300775 master-0 kubenswrapper[7620]: I0318 09:03:10.300127 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/bfb95119-ed96-428c-8a9b-7e29f48b5d4b-var-lock\") pod \"installer-6-master-0\" (UID: \"bfb95119-ed96-428c-8a9b-7e29f48b5d4b\") " pod="openshift-kube-scheduler/installer-6-master-0" Mar 18 09:03:10.300775 master-0 kubenswrapper[7620]: I0318 09:03:10.300227 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/bfb95119-ed96-428c-8a9b-7e29f48b5d4b-kubelet-dir\") pod \"installer-6-master-0\" (UID: \"bfb95119-ed96-428c-8a9b-7e29f48b5d4b\") " pod="openshift-kube-scheduler/installer-6-master-0" Mar 18 09:03:10.300775 master-0 kubenswrapper[7620]: I0318 09:03:10.300291 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bfb95119-ed96-428c-8a9b-7e29f48b5d4b-kube-api-access\") pod \"installer-6-master-0\" (UID: \"bfb95119-ed96-428c-8a9b-7e29f48b5d4b\") " pod="openshift-kube-scheduler/installer-6-master-0" Mar 18 09:03:10.402460 master-0 kubenswrapper[7620]: I0318 09:03:10.402400 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bfb95119-ed96-428c-8a9b-7e29f48b5d4b-kube-api-access\") pod \"installer-6-master-0\" (UID: \"bfb95119-ed96-428c-8a9b-7e29f48b5d4b\") " pod="openshift-kube-scheduler/installer-6-master-0" Mar 18 09:03:10.402992 master-0 kubenswrapper[7620]: I0318 09:03:10.402968 7620 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/bfb95119-ed96-428c-8a9b-7e29f48b5d4b-var-lock\") pod \"installer-6-master-0\" (UID: \"bfb95119-ed96-428c-8a9b-7e29f48b5d4b\") " pod="openshift-kube-scheduler/installer-6-master-0" Mar 18 09:03:10.403113 master-0 kubenswrapper[7620]: I0318 09:03:10.403080 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/bfb95119-ed96-428c-8a9b-7e29f48b5d4b-var-lock\") pod \"installer-6-master-0\" (UID: \"bfb95119-ed96-428c-8a9b-7e29f48b5d4b\") " pod="openshift-kube-scheduler/installer-6-master-0" Mar 18 09:03:10.403220 master-0 kubenswrapper[7620]: I0318 09:03:10.403202 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/bfb95119-ed96-428c-8a9b-7e29f48b5d4b-kubelet-dir\") pod \"installer-6-master-0\" (UID: \"bfb95119-ed96-428c-8a9b-7e29f48b5d4b\") " pod="openshift-kube-scheduler/installer-6-master-0" Mar 18 09:03:10.403314 master-0 kubenswrapper[7620]: I0318 09:03:10.403289 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/bfb95119-ed96-428c-8a9b-7e29f48b5d4b-kubelet-dir\") pod \"installer-6-master-0\" (UID: \"bfb95119-ed96-428c-8a9b-7e29f48b5d4b\") " pod="openshift-kube-scheduler/installer-6-master-0" Mar 18 09:03:10.424454 master-0 kubenswrapper[7620]: I0318 09:03:10.424422 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bfb95119-ed96-428c-8a9b-7e29f48b5d4b-kube-api-access\") pod \"installer-6-master-0\" (UID: \"bfb95119-ed96-428c-8a9b-7e29f48b5d4b\") " pod="openshift-kube-scheduler/installer-6-master-0" Mar 18 09:03:10.451806 master-0 kubenswrapper[7620]: I0318 09:03:10.451749 7620 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/revision-pruner-6-master-0" Mar 18 09:03:10.484457 master-0 kubenswrapper[7620]: I0318 09:03:10.483383 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 09:03:10.484457 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld Mar 18 09:03:10.484457 master-0 kubenswrapper[7620]: [+]process-running ok Mar 18 09:03:10.484457 master-0 kubenswrapper[7620]: healthz check failed Mar 18 09:03:10.484457 master-0 kubenswrapper[7620]: I0318 09:03:10.483488 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 09:03:10.484457 master-0 kubenswrapper[7620]: I0318 09:03:10.483575 7620 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" Mar 18 09:03:10.485012 master-0 kubenswrapper[7620]: I0318 09:03:10.484556 7620 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="router" containerStatusID={"Type":"cri-o","ID":"a4436209a1c80a403c36e67bb8b4310cdae3c04ffc3d3675bb5372419c24b948"} pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" containerMessage="Container router failed startup probe, will be restarted" Mar 18 09:03:10.485012 master-0 kubenswrapper[7620]: I0318 09:03:10.484624 7620 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" containerID="cri-o://a4436209a1c80a403c36e67bb8b4310cdae3c04ffc3d3675bb5372419c24b948" gracePeriod=3600 Mar 18 09:03:10.508807 master-0 
kubenswrapper[7620]: I0318 09:03:10.508732 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-4-master-0" event={"ID":"3068e569-5a4e-4fc3-88f4-5684d093c8e6","Type":"ContainerStarted","Data":"54302cdad4a743df0858f296cab89bada38f903f22c51e9048d06d7146e16775"} Mar 18 09:03:10.508807 master-0 kubenswrapper[7620]: I0318 09:03:10.508819 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-4-master-0" event={"ID":"3068e569-5a4e-4fc3-88f4-5684d093c8e6","Type":"ContainerStarted","Data":"564cb8426369721ba7067b6ba1d2db58be0d2b7219cd8ee2b9c066b14b29b589"} Mar 18 09:03:10.541845 master-0 kubenswrapper[7620]: I0318 09:03:10.541702 7620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/installer-4-master-0" podStartSLOduration=2.54167343 podStartE2EDuration="2.54167343s" podCreationTimestamp="2026-03-18 09:03:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 09:03:10.533577384 +0000 UTC m=+854.528359176" watchObservedRunningTime="2026-03-18 09:03:10.54167343 +0000 UTC m=+854.536455222" Mar 18 09:03:10.589883 master-0 kubenswrapper[7620]: I0318 09:03:10.589095 7620 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/installer-6-master-0" Mar 18 09:03:10.907352 master-0 kubenswrapper[7620]: I0318 09:03:10.907295 7620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/revision-pruner-6-master-0"] Mar 18 09:03:10.913689 master-0 kubenswrapper[7620]: W0318 09:03:10.913414 7620 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod0fcf6eb0_f4dd_41dd_86ee_9bcb9996546d.slice/crio-441a9e7e4c0388348d1f8c78cfabb9e80774ef9142ffdc40381f1188cdfe4527 WatchSource:0}: Error finding container 441a9e7e4c0388348d1f8c78cfabb9e80774ef9142ffdc40381f1188cdfe4527: Status 404 returned error can't find the container with id 441a9e7e4c0388348d1f8c78cfabb9e80774ef9142ffdc40381f1188cdfe4527 Mar 18 09:03:11.039433 master-0 kubenswrapper[7620]: I0318 09:03:11.039357 7620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-6-master-0"] Mar 18 09:03:11.039965 master-0 kubenswrapper[7620]: W0318 09:03:11.039935 7620 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podbfb95119_ed96_428c_8a9b_7e29f48b5d4b.slice/crio-898f7ad0780d754bd2a9eb084988e2a8df18f477faf934c2f22dfd1716e45de9 WatchSource:0}: Error finding container 898f7ad0780d754bd2a9eb084988e2a8df18f477faf934c2f22dfd1716e45de9: Status 404 returned error can't find the container with id 898f7ad0780d754bd2a9eb084988e2a8df18f477faf934c2f22dfd1716e45de9 Mar 18 09:03:11.520952 master-0 kubenswrapper[7620]: I0318 09:03:11.520798 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/revision-pruner-6-master-0" event={"ID":"0fcf6eb0-f4dd-41dd-86ee-9bcb9996546d","Type":"ContainerStarted","Data":"4831da6b8225ffa3b61ecb0f1ce7047144ac489e1e26b31e6165fbfd478f3144"} Mar 18 09:03:11.522024 master-0 kubenswrapper[7620]: I0318 09:03:11.521988 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-scheduler/revision-pruner-6-master-0" event={"ID":"0fcf6eb0-f4dd-41dd-86ee-9bcb9996546d","Type":"ContainerStarted","Data":"441a9e7e4c0388348d1f8c78cfabb9e80774ef9142ffdc40381f1188cdfe4527"} Mar 18 09:03:11.522659 master-0 kubenswrapper[7620]: I0318 09:03:11.522628 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-6-master-0" event={"ID":"bfb95119-ed96-428c-8a9b-7e29f48b5d4b","Type":"ContainerStarted","Data":"44961de8599bb63e15f17ececbcbbdf128ff00606cbb65189b93cdcbe9f41ba2"} Mar 18 09:03:11.522827 master-0 kubenswrapper[7620]: I0318 09:03:11.522801 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-6-master-0" event={"ID":"bfb95119-ed96-428c-8a9b-7e29f48b5d4b","Type":"ContainerStarted","Data":"898f7ad0780d754bd2a9eb084988e2a8df18f477faf934c2f22dfd1716e45de9"} Mar 18 09:03:11.571230 master-0 kubenswrapper[7620]: I0318 09:03:11.570943 7620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/revision-pruner-6-master-0" podStartSLOduration=1.5708995799999999 podStartE2EDuration="1.57089958s" podCreationTimestamp="2026-03-18 09:03:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 09:03:11.546297394 +0000 UTC m=+855.541079156" watchObservedRunningTime="2026-03-18 09:03:11.57089958 +0000 UTC m=+855.565681382" Mar 18 09:03:12.534608 master-0 kubenswrapper[7620]: I0318 09:03:12.534351 7620 generic.go:334] "Generic (PLEG): container finished" podID="0fcf6eb0-f4dd-41dd-86ee-9bcb9996546d" containerID="4831da6b8225ffa3b61ecb0f1ce7047144ac489e1e26b31e6165fbfd478f3144" exitCode=0 Mar 18 09:03:12.534608 master-0 kubenswrapper[7620]: I0318 09:03:12.534422 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/revision-pruner-6-master-0" 
event={"ID":"0fcf6eb0-f4dd-41dd-86ee-9bcb9996546d","Type":"ContainerDied","Data":"4831da6b8225ffa3b61ecb0f1ce7047144ac489e1e26b31e6165fbfd478f3144"} Mar 18 09:03:12.561437 master-0 kubenswrapper[7620]: I0318 09:03:12.561288 7620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/installer-6-master-0" podStartSLOduration=2.561260644 podStartE2EDuration="2.561260644s" podCreationTimestamp="2026-03-18 09:03:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 09:03:11.568059264 +0000 UTC m=+855.562841096" watchObservedRunningTime="2026-03-18 09:03:12.561260644 +0000 UTC m=+856.556042426" Mar 18 09:03:13.921552 master-0 kubenswrapper[7620]: I0318 09:03:13.921464 7620 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/revision-pruner-6-master-0" Mar 18 09:03:13.967023 master-0 kubenswrapper[7620]: I0318 09:03:13.966794 7620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0fcf6eb0-f4dd-41dd-86ee-9bcb9996546d-kube-api-access\") pod \"0fcf6eb0-f4dd-41dd-86ee-9bcb9996546d\" (UID: \"0fcf6eb0-f4dd-41dd-86ee-9bcb9996546d\") " Mar 18 09:03:13.967023 master-0 kubenswrapper[7620]: I0318 09:03:13.966954 7620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0fcf6eb0-f4dd-41dd-86ee-9bcb9996546d-kubelet-dir\") pod \"0fcf6eb0-f4dd-41dd-86ee-9bcb9996546d\" (UID: \"0fcf6eb0-f4dd-41dd-86ee-9bcb9996546d\") " Mar 18 09:03:13.967459 master-0 kubenswrapper[7620]: I0318 09:03:13.967069 7620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0fcf6eb0-f4dd-41dd-86ee-9bcb9996546d-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "0fcf6eb0-f4dd-41dd-86ee-9bcb9996546d" (UID: 
"0fcf6eb0-f4dd-41dd-86ee-9bcb9996546d"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 09:03:13.967630 master-0 kubenswrapper[7620]: I0318 09:03:13.967588 7620 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0fcf6eb0-f4dd-41dd-86ee-9bcb9996546d-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Mar 18 09:03:13.969861 master-0 kubenswrapper[7620]: I0318 09:03:13.969781 7620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0fcf6eb0-f4dd-41dd-86ee-9bcb9996546d-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0fcf6eb0-f4dd-41dd-86ee-9bcb9996546d" (UID: "0fcf6eb0-f4dd-41dd-86ee-9bcb9996546d"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 09:03:14.069431 master-0 kubenswrapper[7620]: I0318 09:03:14.069313 7620 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0fcf6eb0-f4dd-41dd-86ee-9bcb9996546d-kube-api-access\") on node \"master-0\" DevicePath \"\"" Mar 18 09:03:14.551725 master-0 kubenswrapper[7620]: I0318 09:03:14.550194 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/revision-pruner-6-master-0" event={"ID":"0fcf6eb0-f4dd-41dd-86ee-9bcb9996546d","Type":"ContainerDied","Data":"441a9e7e4c0388348d1f8c78cfabb9e80774ef9142ffdc40381f1188cdfe4527"} Mar 18 09:03:14.551725 master-0 kubenswrapper[7620]: I0318 09:03:14.550245 7620 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="441a9e7e4c0388348d1f8c78cfabb9e80774ef9142ffdc40381f1188cdfe4527" Mar 18 09:03:14.551725 master-0 kubenswrapper[7620]: I0318 09:03:14.550311 7620 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/revision-pruner-6-master-0" Mar 18 09:03:22.878286 master-0 kubenswrapper[7620]: I0318 09:03:22.878205 7620 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/installer-2-master-0"] Mar 18 09:03:22.879908 master-0 kubenswrapper[7620]: I0318 09:03:22.878460 7620 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/installer-2-master-0" podUID="aaa22a87-c335-44b4-9ac7-ca3950b73051" containerName="installer" containerID="cri-o://3a499d029033082ed55907fa1ab183ba0ca048dfcf81859f7cf8f3841abe4c84" gracePeriod=30 Mar 18 09:03:23.308381 master-0 kubenswrapper[7620]: I0318 09:03:23.308295 7620 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-2-master-0_aaa22a87-c335-44b4-9ac7-ca3950b73051/installer/0.log" Mar 18 09:03:23.308717 master-0 kubenswrapper[7620]: I0318 09:03:23.308434 7620 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-2-master-0" Mar 18 09:03:23.321949 master-0 kubenswrapper[7620]: I0318 09:03:23.319582 7620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/aaa22a87-c335-44b4-9ac7-ca3950b73051-var-lock\") pod \"aaa22a87-c335-44b4-9ac7-ca3950b73051\" (UID: \"aaa22a87-c335-44b4-9ac7-ca3950b73051\") " Mar 18 09:03:23.321949 master-0 kubenswrapper[7620]: I0318 09:03:23.319699 7620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aaa22a87-c335-44b4-9ac7-ca3950b73051-var-lock" (OuterVolumeSpecName: "var-lock") pod "aaa22a87-c335-44b4-9ac7-ca3950b73051" (UID: "aaa22a87-c335-44b4-9ac7-ca3950b73051"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 09:03:23.321949 master-0 kubenswrapper[7620]: I0318 09:03:23.319871 7620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/aaa22a87-c335-44b4-9ac7-ca3950b73051-kubelet-dir\") pod \"aaa22a87-c335-44b4-9ac7-ca3950b73051\" (UID: \"aaa22a87-c335-44b4-9ac7-ca3950b73051\") " Mar 18 09:03:23.321949 master-0 kubenswrapper[7620]: I0318 09:03:23.319962 7620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/aaa22a87-c335-44b4-9ac7-ca3950b73051-kube-api-access\") pod \"aaa22a87-c335-44b4-9ac7-ca3950b73051\" (UID: \"aaa22a87-c335-44b4-9ac7-ca3950b73051\") " Mar 18 09:03:23.321949 master-0 kubenswrapper[7620]: I0318 09:03:23.320046 7620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aaa22a87-c335-44b4-9ac7-ca3950b73051-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "aaa22a87-c335-44b4-9ac7-ca3950b73051" (UID: "aaa22a87-c335-44b4-9ac7-ca3950b73051"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 09:03:23.321949 master-0 kubenswrapper[7620]: I0318 09:03:23.320815 7620 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/aaa22a87-c335-44b4-9ac7-ca3950b73051-var-lock\") on node \"master-0\" DevicePath \"\"" Mar 18 09:03:23.321949 master-0 kubenswrapper[7620]: I0318 09:03:23.320875 7620 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/aaa22a87-c335-44b4-9ac7-ca3950b73051-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Mar 18 09:03:23.327227 master-0 kubenswrapper[7620]: I0318 09:03:23.327163 7620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aaa22a87-c335-44b4-9ac7-ca3950b73051-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "aaa22a87-c335-44b4-9ac7-ca3950b73051" (UID: "aaa22a87-c335-44b4-9ac7-ca3950b73051"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 18 09:03:23.422687 master-0 kubenswrapper[7620]: I0318 09:03:23.422607 7620 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/aaa22a87-c335-44b4-9ac7-ca3950b73051-kube-api-access\") on node \"master-0\" DevicePath \"\""
Mar 18 09:03:23.632657 master-0 kubenswrapper[7620]: I0318 09:03:23.632542 7620 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-2-master-0_aaa22a87-c335-44b4-9ac7-ca3950b73051/installer/0.log"
Mar 18 09:03:23.632657 master-0 kubenswrapper[7620]: I0318 09:03:23.632598 7620 generic.go:334] "Generic (PLEG): container finished" podID="aaa22a87-c335-44b4-9ac7-ca3950b73051" containerID="3a499d029033082ed55907fa1ab183ba0ca048dfcf81859f7cf8f3841abe4c84" exitCode=1
Mar 18 09:03:23.632657 master-0 kubenswrapper[7620]: I0318 09:03:23.632630 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-2-master-0" event={"ID":"aaa22a87-c335-44b4-9ac7-ca3950b73051","Type":"ContainerDied","Data":"3a499d029033082ed55907fa1ab183ba0ca048dfcf81859f7cf8f3841abe4c84"}
Mar 18 09:03:23.632657 master-0 kubenswrapper[7620]: I0318 09:03:23.632658 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-2-master-0" event={"ID":"aaa22a87-c335-44b4-9ac7-ca3950b73051","Type":"ContainerDied","Data":"bf60e231f9820cef3fddc5cfff553adf8aba71848c7e1fa5ad63253a445667eb"}
Mar 18 09:03:23.632997 master-0 kubenswrapper[7620]: I0318 09:03:23.632679 7620 scope.go:117] "RemoveContainer" containerID="3a499d029033082ed55907fa1ab183ba0ca048dfcf81859f7cf8f3841abe4c84"
Mar 18 09:03:23.632997 master-0 kubenswrapper[7620]: I0318 09:03:23.632707 7620 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-2-master-0"
Mar 18 09:03:23.649068 master-0 kubenswrapper[7620]: I0318 09:03:23.649033 7620 scope.go:117] "RemoveContainer" containerID="3a499d029033082ed55907fa1ab183ba0ca048dfcf81859f7cf8f3841abe4c84"
Mar 18 09:03:23.649525 master-0 kubenswrapper[7620]: E0318 09:03:23.649500 7620 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3a499d029033082ed55907fa1ab183ba0ca048dfcf81859f7cf8f3841abe4c84\": container with ID starting with 3a499d029033082ed55907fa1ab183ba0ca048dfcf81859f7cf8f3841abe4c84 not found: ID does not exist" containerID="3a499d029033082ed55907fa1ab183ba0ca048dfcf81859f7cf8f3841abe4c84"
Mar 18 09:03:23.649575 master-0 kubenswrapper[7620]: I0318 09:03:23.649535 7620 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3a499d029033082ed55907fa1ab183ba0ca048dfcf81859f7cf8f3841abe4c84"} err="failed to get container status \"3a499d029033082ed55907fa1ab183ba0ca048dfcf81859f7cf8f3841abe4c84\": rpc error: code = NotFound desc = could not find container \"3a499d029033082ed55907fa1ab183ba0ca048dfcf81859f7cf8f3841abe4c84\": container with ID starting with 3a499d029033082ed55907fa1ab183ba0ca048dfcf81859f7cf8f3841abe4c84 not found: ID does not exist"
Mar 18 09:03:23.687922 master-0 kubenswrapper[7620]: I0318 09:03:23.687844 7620 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/installer-2-master-0"]
Mar 18 09:03:23.696908 master-0 kubenswrapper[7620]: I0318 09:03:23.696824 7620 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/installer-2-master-0"]
Mar 18 09:03:24.218976 master-0 kubenswrapper[7620]: I0318 09:03:24.218915 7620 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-5-master-0_413623e6-3a24-40ab-a29e-50d81460ac59/installer/0.log"
Mar 18 09:03:24.219556 master-0 kubenswrapper[7620]: I0318 09:03:24.219023 7620 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-5-master-0"
Mar 18 09:03:24.235933 master-0 kubenswrapper[7620]: I0318 09:03:24.235833 7620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/413623e6-3a24-40ab-a29e-50d81460ac59-kubelet-dir\") pod \"413623e6-3a24-40ab-a29e-50d81460ac59\" (UID: \"413623e6-3a24-40ab-a29e-50d81460ac59\") "
Mar 18 09:03:24.235933 master-0 kubenswrapper[7620]: I0318 09:03:24.235932 7620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/413623e6-3a24-40ab-a29e-50d81460ac59-kube-api-access\") pod \"413623e6-3a24-40ab-a29e-50d81460ac59\" (UID: \"413623e6-3a24-40ab-a29e-50d81460ac59\") "
Mar 18 09:03:24.238651 master-0 kubenswrapper[7620]: I0318 09:03:24.236150 7620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/413623e6-3a24-40ab-a29e-50d81460ac59-var-lock\") pod \"413623e6-3a24-40ab-a29e-50d81460ac59\" (UID: \"413623e6-3a24-40ab-a29e-50d81460ac59\") "
Mar 18 09:03:24.240468 master-0 kubenswrapper[7620]: I0318 09:03:24.240378 7620 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aaa22a87-c335-44b4-9ac7-ca3950b73051" path="/var/lib/kubelet/pods/aaa22a87-c335-44b4-9ac7-ca3950b73051/volumes"
Mar 18 09:03:24.242006 master-0 kubenswrapper[7620]: I0318 09:03:24.241164 7620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/413623e6-3a24-40ab-a29e-50d81460ac59-var-lock" (OuterVolumeSpecName: "var-lock") pod "413623e6-3a24-40ab-a29e-50d81460ac59" (UID: "413623e6-3a24-40ab-a29e-50d81460ac59"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 18 09:03:24.242006 master-0 kubenswrapper[7620]: I0318 09:03:24.241353 7620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/413623e6-3a24-40ab-a29e-50d81460ac59-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "413623e6-3a24-40ab-a29e-50d81460ac59" (UID: "413623e6-3a24-40ab-a29e-50d81460ac59"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 18 09:03:24.244907 master-0 kubenswrapper[7620]: I0318 09:03:24.244812 7620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/413623e6-3a24-40ab-a29e-50d81460ac59-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "413623e6-3a24-40ab-a29e-50d81460ac59" (UID: "413623e6-3a24-40ab-a29e-50d81460ac59"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 18 09:03:24.310817 master-0 kubenswrapper[7620]: I0318 09:03:24.310734 7620 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-1-retry-1-master-0_bf6416cc-c8e8-4410-b3a4-059cbae52318/installer/0.log"
Mar 18 09:03:24.311059 master-0 kubenswrapper[7620]: I0318 09:03:24.310925 7620 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-1-retry-1-master-0"
Mar 18 09:03:24.337487 master-0 kubenswrapper[7620]: I0318 09:03:24.337430 7620 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/413623e6-3a24-40ab-a29e-50d81460ac59-kubelet-dir\") on node \"master-0\" DevicePath \"\""
Mar 18 09:03:24.337487 master-0 kubenswrapper[7620]: I0318 09:03:24.337478 7620 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/413623e6-3a24-40ab-a29e-50d81460ac59-kube-api-access\") on node \"master-0\" DevicePath \"\""
Mar 18 09:03:24.337487 master-0 kubenswrapper[7620]: I0318 09:03:24.337496 7620 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/413623e6-3a24-40ab-a29e-50d81460ac59-var-lock\") on node \"master-0\" DevicePath \"\""
Mar 18 09:03:24.439112 master-0 kubenswrapper[7620]: I0318 09:03:24.438948 7620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/bf6416cc-c8e8-4410-b3a4-059cbae52318-var-lock\") pod \"bf6416cc-c8e8-4410-b3a4-059cbae52318\" (UID: \"bf6416cc-c8e8-4410-b3a4-059cbae52318\") "
Mar 18 09:03:24.439112 master-0 kubenswrapper[7620]: I0318 09:03:24.439055 7620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bf6416cc-c8e8-4410-b3a4-059cbae52318-kube-api-access\") pod \"bf6416cc-c8e8-4410-b3a4-059cbae52318\" (UID: \"bf6416cc-c8e8-4410-b3a4-059cbae52318\") "
Mar 18 09:03:24.439112 master-0 kubenswrapper[7620]: I0318 09:03:24.439076 7620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/bf6416cc-c8e8-4410-b3a4-059cbae52318-kubelet-dir\") pod \"bf6416cc-c8e8-4410-b3a4-059cbae52318\" (UID: \"bf6416cc-c8e8-4410-b3a4-059cbae52318\") "
Mar 18 09:03:24.439112 master-0 kubenswrapper[7620]: I0318 09:03:24.439081 7620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bf6416cc-c8e8-4410-b3a4-059cbae52318-var-lock" (OuterVolumeSpecName: "var-lock") pod "bf6416cc-c8e8-4410-b3a4-059cbae52318" (UID: "bf6416cc-c8e8-4410-b3a4-059cbae52318"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 18 09:03:24.439462 master-0 kubenswrapper[7620]: I0318 09:03:24.439263 7620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bf6416cc-c8e8-4410-b3a4-059cbae52318-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "bf6416cc-c8e8-4410-b3a4-059cbae52318" (UID: "bf6416cc-c8e8-4410-b3a4-059cbae52318"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 18 09:03:24.439462 master-0 kubenswrapper[7620]: I0318 09:03:24.439396 7620 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/bf6416cc-c8e8-4410-b3a4-059cbae52318-kubelet-dir\") on node \"master-0\" DevicePath \"\""
Mar 18 09:03:24.439462 master-0 kubenswrapper[7620]: I0318 09:03:24.439413 7620 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/bf6416cc-c8e8-4410-b3a4-059cbae52318-var-lock\") on node \"master-0\" DevicePath \"\""
Mar 18 09:03:24.444141 master-0 kubenswrapper[7620]: I0318 09:03:24.443919 7620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf6416cc-c8e8-4410-b3a4-059cbae52318-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "bf6416cc-c8e8-4410-b3a4-059cbae52318" (UID: "bf6416cc-c8e8-4410-b3a4-059cbae52318"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 18 09:03:24.541130 master-0 kubenswrapper[7620]: I0318 09:03:24.541050 7620 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bf6416cc-c8e8-4410-b3a4-059cbae52318-kube-api-access\") on node \"master-0\" DevicePath \"\""
Mar 18 09:03:24.648939 master-0 kubenswrapper[7620]: I0318 09:03:24.648201 7620 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-5-master-0_413623e6-3a24-40ab-a29e-50d81460ac59/installer/0.log"
Mar 18 09:03:24.648939 master-0 kubenswrapper[7620]: I0318 09:03:24.648292 7620 generic.go:334] "Generic (PLEG): container finished" podID="413623e6-3a24-40ab-a29e-50d81460ac59" containerID="5ba86e745cd745b1e728d992504dc4a54d5125fe186e32984eafba07ece2c051" exitCode=1
Mar 18 09:03:24.648939 master-0 kubenswrapper[7620]: I0318 09:03:24.648381 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-5-master-0" event={"ID":"413623e6-3a24-40ab-a29e-50d81460ac59","Type":"ContainerDied","Data":"5ba86e745cd745b1e728d992504dc4a54d5125fe186e32984eafba07ece2c051"}
Mar 18 09:03:24.648939 master-0 kubenswrapper[7620]: I0318 09:03:24.648473 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-5-master-0" event={"ID":"413623e6-3a24-40ab-a29e-50d81460ac59","Type":"ContainerDied","Data":"5a6cbcd0fa2790b1e8ee2482363ca9ea6eb143ed0218070ab5add49f6480f124"}
Mar 18 09:03:24.648939 master-0 kubenswrapper[7620]: I0318 09:03:24.648506 7620 scope.go:117] "RemoveContainer" containerID="5ba86e745cd745b1e728d992504dc4a54d5125fe186e32984eafba07ece2c051"
Mar 18 09:03:24.648939 master-0 kubenswrapper[7620]: I0318 09:03:24.648643 7620 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-5-master-0"
Mar 18 09:03:24.655192 master-0 kubenswrapper[7620]: I0318 09:03:24.655144 7620 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-1-retry-1-master-0_bf6416cc-c8e8-4410-b3a4-059cbae52318/installer/0.log"
Mar 18 09:03:24.655286 master-0 kubenswrapper[7620]: I0318 09:03:24.655220 7620 generic.go:334] "Generic (PLEG): container finished" podID="bf6416cc-c8e8-4410-b3a4-059cbae52318" containerID="a5d3e293f0e07f0caa47f3f9d63b14bb2abfe09f9120da1a6bb52790dd03eff7" exitCode=1
Mar 18 09:03:24.655340 master-0 kubenswrapper[7620]: I0318 09:03:24.655308 7620 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-1-retry-1-master-0"
Mar 18 09:03:24.655442 master-0 kubenswrapper[7620]: I0318 09:03:24.655284 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-1-retry-1-master-0" event={"ID":"bf6416cc-c8e8-4410-b3a4-059cbae52318","Type":"ContainerDied","Data":"a5d3e293f0e07f0caa47f3f9d63b14bb2abfe09f9120da1a6bb52790dd03eff7"}
Mar 18 09:03:24.655511 master-0 kubenswrapper[7620]: I0318 09:03:24.655459 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-1-retry-1-master-0" event={"ID":"bf6416cc-c8e8-4410-b3a4-059cbae52318","Type":"ContainerDied","Data":"8dfccb5df026a842d6730584e021a0974d4c32cdf41d8385c44ddfa1757664a3"}
Mar 18 09:03:24.674371 master-0 kubenswrapper[7620]: I0318 09:03:24.674333 7620 scope.go:117] "RemoveContainer" containerID="5ba86e745cd745b1e728d992504dc4a54d5125fe186e32984eafba07ece2c051"
Mar 18 09:03:24.674905 master-0 kubenswrapper[7620]: E0318 09:03:24.674845 7620 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5ba86e745cd745b1e728d992504dc4a54d5125fe186e32984eafba07ece2c051\": container with ID starting with 5ba86e745cd745b1e728d992504dc4a54d5125fe186e32984eafba07ece2c051 not found: ID does not exist" containerID="5ba86e745cd745b1e728d992504dc4a54d5125fe186e32984eafba07ece2c051"
Mar 18 09:03:24.674976 master-0 kubenswrapper[7620]: I0318 09:03:24.674914 7620 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5ba86e745cd745b1e728d992504dc4a54d5125fe186e32984eafba07ece2c051"} err="failed to get container status \"5ba86e745cd745b1e728d992504dc4a54d5125fe186e32984eafba07ece2c051\": rpc error: code = NotFound desc = could not find container \"5ba86e745cd745b1e728d992504dc4a54d5125fe186e32984eafba07ece2c051\": container with ID starting with 5ba86e745cd745b1e728d992504dc4a54d5125fe186e32984eafba07ece2c051 not found: ID does not exist"
Mar 18 09:03:24.674976 master-0 kubenswrapper[7620]: I0318 09:03:24.674948 7620 scope.go:117] "RemoveContainer" containerID="a5d3e293f0e07f0caa47f3f9d63b14bb2abfe09f9120da1a6bb52790dd03eff7"
Mar 18 09:03:24.713017 master-0 kubenswrapper[7620]: I0318 09:03:24.711402 7620 scope.go:117] "RemoveContainer" containerID="a5d3e293f0e07f0caa47f3f9d63b14bb2abfe09f9120da1a6bb52790dd03eff7"
Mar 18 09:03:24.714747 master-0 kubenswrapper[7620]: E0318 09:03:24.714673 7620 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a5d3e293f0e07f0caa47f3f9d63b14bb2abfe09f9120da1a6bb52790dd03eff7\": container with ID starting with a5d3e293f0e07f0caa47f3f9d63b14bb2abfe09f9120da1a6bb52790dd03eff7 not found: ID does not exist" containerID="a5d3e293f0e07f0caa47f3f9d63b14bb2abfe09f9120da1a6bb52790dd03eff7"
Mar 18 09:03:24.714905 master-0 kubenswrapper[7620]: I0318 09:03:24.714773 7620 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a5d3e293f0e07f0caa47f3f9d63b14bb2abfe09f9120da1a6bb52790dd03eff7"} err="failed to get container status \"a5d3e293f0e07f0caa47f3f9d63b14bb2abfe09f9120da1a6bb52790dd03eff7\": rpc error: code = NotFound desc = could not find container \"a5d3e293f0e07f0caa47f3f9d63b14bb2abfe09f9120da1a6bb52790dd03eff7\": container with ID starting with a5d3e293f0e07f0caa47f3f9d63b14bb2abfe09f9120da1a6bb52790dd03eff7 not found: ID does not exist"
Mar 18 09:03:24.716580 master-0 kubenswrapper[7620]: I0318 09:03:24.716507 7620 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/installer-1-retry-1-master-0"]
Mar 18 09:03:24.720057 master-0 kubenswrapper[7620]: I0318 09:03:24.719994 7620 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/installer-1-retry-1-master-0"]
Mar 18 09:03:24.735453 master-0 kubenswrapper[7620]: I0318 09:03:24.735330 7620 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-scheduler/installer-5-master-0"]
Mar 18 09:03:24.739404 master-0 kubenswrapper[7620]: I0318 09:03:24.739347 7620 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-scheduler/installer-5-master-0"]
Mar 18 09:03:26.232048 master-0 kubenswrapper[7620]: I0318 09:03:26.231939 7620 kubelet.go:1909] "Trying to delete pod" pod="openshift-etcd/etcd-master-0" podUID="ffe89c95-d4e9-4b8d-ae76-37d7bef448df"
Mar 18 09:03:26.232048 master-0 kubenswrapper[7620]: I0318 09:03:26.232017 7620 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0" podUID="ffe89c95-d4e9-4b8d-ae76-37d7bef448df"
Mar 18 09:03:26.238613 master-0 kubenswrapper[7620]: I0318 09:03:26.238516 7620 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="413623e6-3a24-40ab-a29e-50d81460ac59" path="/var/lib/kubelet/pods/413623e6-3a24-40ab-a29e-50d81460ac59/volumes"
Mar 18 09:03:26.239650 master-0 kubenswrapper[7620]: I0318 09:03:26.239603 7620 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf6416cc-c8e8-4410-b3a4-059cbae52318" path="/var/lib/kubelet/pods/bf6416cc-c8e8-4410-b3a4-059cbae52318/volumes"
Mar 18 09:03:26.262901 master-0 kubenswrapper[7620]: I0318 09:03:26.260513 7620 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-etcd/etcd-master-0"]
Mar 18 09:03:26.265926 master-0 kubenswrapper[7620]: I0318 09:03:26.264304 7620 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-etcd/etcd-master-0"
Mar 18 09:03:26.273841 master-0 kubenswrapper[7620]: I0318 09:03:26.273139 7620 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-etcd/etcd-master-0"]
Mar 18 09:03:26.307496 master-0 kubenswrapper[7620]: I0318 09:03:26.307410 7620 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-master-0"]
Mar 18 09:03:26.678443 master-0 kubenswrapper[7620]: I0318 09:03:26.678381 7620 kubelet.go:1909] "Trying to delete pod" pod="openshift-etcd/etcd-master-0" podUID="ffe89c95-d4e9-4b8d-ae76-37d7bef448df"
Mar 18 09:03:26.678443 master-0 kubenswrapper[7620]: I0318 09:03:26.678432 7620 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0" podUID="ffe89c95-d4e9-4b8d-ae76-37d7bef448df"
Mar 18 09:03:27.091900 master-0 kubenswrapper[7620]: I0318 09:03:27.086591 7620 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-3-master-0"]
Mar 18 09:03:27.091900 master-0 kubenswrapper[7620]: E0318 09:03:27.087054 7620 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0fcf6eb0-f4dd-41dd-86ee-9bcb9996546d" containerName="pruner"
Mar 18 09:03:27.091900 master-0 kubenswrapper[7620]: I0318 09:03:27.087079 7620 state_mem.go:107] "Deleted CPUSet assignment" podUID="0fcf6eb0-f4dd-41dd-86ee-9bcb9996546d" containerName="pruner"
Mar 18 09:03:27.091900 master-0 kubenswrapper[7620]: E0318 09:03:27.087135 7620 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="413623e6-3a24-40ab-a29e-50d81460ac59" containerName="installer"
Mar 18 09:03:27.091900 master-0 kubenswrapper[7620]: I0318 09:03:27.087148 7620 state_mem.go:107] "Deleted CPUSet assignment" podUID="413623e6-3a24-40ab-a29e-50d81460ac59" containerName="installer"
Mar 18 09:03:27.091900 master-0 kubenswrapper[7620]: E0318 09:03:27.087206 7620 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf6416cc-c8e8-4410-b3a4-059cbae52318" containerName="installer"
Mar 18 09:03:27.091900 master-0 kubenswrapper[7620]: I0318 09:03:27.087219 7620 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf6416cc-c8e8-4410-b3a4-059cbae52318" containerName="installer"
Mar 18 09:03:27.091900 master-0 kubenswrapper[7620]: E0318 09:03:27.087246 7620 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aaa22a87-c335-44b4-9ac7-ca3950b73051" containerName="installer"
Mar 18 09:03:27.091900 master-0 kubenswrapper[7620]: I0318 09:03:27.087258 7620 state_mem.go:107] "Deleted CPUSet assignment" podUID="aaa22a87-c335-44b4-9ac7-ca3950b73051" containerName="installer"
Mar 18 09:03:27.091900 master-0 kubenswrapper[7620]: I0318 09:03:27.087528 7620 memory_manager.go:354] "RemoveStaleState removing state" podUID="bf6416cc-c8e8-4410-b3a4-059cbae52318" containerName="installer"
Mar 18 09:03:27.091900 master-0 kubenswrapper[7620]: I0318 09:03:27.087610 7620 memory_manager.go:354] "RemoveStaleState removing state" podUID="0fcf6eb0-f4dd-41dd-86ee-9bcb9996546d" containerName="pruner"
Mar 18 09:03:27.091900 master-0 kubenswrapper[7620]: I0318 09:03:27.087957 7620 memory_manager.go:354] "RemoveStaleState removing state" podUID="aaa22a87-c335-44b4-9ac7-ca3950b73051" containerName="installer"
Mar 18 09:03:27.091900 master-0 kubenswrapper[7620]: I0318 09:03:27.088002 7620 memory_manager.go:354] "RemoveStaleState removing state" podUID="413623e6-3a24-40ab-a29e-50d81460ac59" containerName="installer"
Mar 18 09:03:27.091900 master-0 kubenswrapper[7620]: I0318 09:03:27.089809 7620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-3-master-0"
Mar 18 09:03:27.096217 master-0 kubenswrapper[7620]: I0318 09:03:27.094278 7620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-6hjtj"
Mar 18 09:03:27.098096 master-0 kubenswrapper[7620]: I0318 09:03:27.098033 7620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-3-master-0"]
Mar 18 09:03:27.099092 master-0 kubenswrapper[7620]: I0318 09:03:27.099050 7620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt"
Mar 18 09:03:27.162903 master-0 kubenswrapper[7620]: I0318 09:03:27.160924 7620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-master-0" podStartSLOduration=1.160906348 podStartE2EDuration="1.160906348s" podCreationTimestamp="2026-03-18 09:03:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 09:03:27.159581192 +0000 UTC m=+871.154362964" watchObservedRunningTime="2026-03-18 09:03:27.160906348 +0000 UTC m=+871.155688100"
Mar 18 09:03:27.187979 master-0 kubenswrapper[7620]: I0318 09:03:27.187876 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/e0d127be-2d13-449b-915b-2d49052baf02-var-lock\") pod \"installer-3-master-0\" (UID: \"e0d127be-2d13-449b-915b-2d49052baf02\") " pod="openshift-kube-apiserver/installer-3-master-0"
Mar 18 09:03:27.188324 master-0 kubenswrapper[7620]: I0318 09:03:27.188021 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e0d127be-2d13-449b-915b-2d49052baf02-kube-api-access\") pod \"installer-3-master-0\" (UID: \"e0d127be-2d13-449b-915b-2d49052baf02\") " pod="openshift-kube-apiserver/installer-3-master-0"
Mar 18 09:03:27.188324 master-0 kubenswrapper[7620]: I0318 09:03:27.188064 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e0d127be-2d13-449b-915b-2d49052baf02-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"e0d127be-2d13-449b-915b-2d49052baf02\") " pod="openshift-kube-apiserver/installer-3-master-0"
Mar 18 09:03:27.290271 master-0 kubenswrapper[7620]: I0318 09:03:27.290140 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e0d127be-2d13-449b-915b-2d49052baf02-kube-api-access\") pod \"installer-3-master-0\" (UID: \"e0d127be-2d13-449b-915b-2d49052baf02\") " pod="openshift-kube-apiserver/installer-3-master-0"
Mar 18 09:03:27.290271 master-0 kubenswrapper[7620]: I0318 09:03:27.290272 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e0d127be-2d13-449b-915b-2d49052baf02-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"e0d127be-2d13-449b-915b-2d49052baf02\") " pod="openshift-kube-apiserver/installer-3-master-0"
Mar 18 09:03:27.291203 master-0 kubenswrapper[7620]: I0318 09:03:27.290318 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/e0d127be-2d13-449b-915b-2d49052baf02-var-lock\") pod \"installer-3-master-0\" (UID: \"e0d127be-2d13-449b-915b-2d49052baf02\") " pod="openshift-kube-apiserver/installer-3-master-0"
Mar 18 09:03:27.291831 master-0 kubenswrapper[7620]: I0318 09:03:27.291524 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/e0d127be-2d13-449b-915b-2d49052baf02-var-lock\") pod \"installer-3-master-0\" (UID: \"e0d127be-2d13-449b-915b-2d49052baf02\") " pod="openshift-kube-apiserver/installer-3-master-0"
Mar 18 09:03:27.292024 master-0 kubenswrapper[7620]: I0318 09:03:27.291920 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e0d127be-2d13-449b-915b-2d49052baf02-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"e0d127be-2d13-449b-915b-2d49052baf02\") " pod="openshift-kube-apiserver/installer-3-master-0"
Mar 18 09:03:27.307803 master-0 kubenswrapper[7620]: I0318 09:03:27.307723 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e0d127be-2d13-449b-915b-2d49052baf02-kube-api-access\") pod \"installer-3-master-0\" (UID: \"e0d127be-2d13-449b-915b-2d49052baf02\") " pod="openshift-kube-apiserver/installer-3-master-0"
Mar 18 09:03:27.414621 master-0 kubenswrapper[7620]: I0318 09:03:27.414461 7620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-3-master-0"
Mar 18 09:03:27.860915 master-0 kubenswrapper[7620]: I0318 09:03:27.860810 7620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-3-master-0"]
Mar 18 09:03:27.872448 master-0 kubenswrapper[7620]: W0318 09:03:27.872345 7620 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pode0d127be_2d13_449b_915b_2d49052baf02.slice/crio-548400f1bcdf7de3d454a40cdac983932202fdf4d758178348c7545ba7209bcb WatchSource:0}: Error finding container 548400f1bcdf7de3d454a40cdac983932202fdf4d758178348c7545ba7209bcb: Status 404 returned error can't find the container with id 548400f1bcdf7de3d454a40cdac983932202fdf4d758178348c7545ba7209bcb
Mar 18 09:03:28.694784 master-0 kubenswrapper[7620]: I0318 09:03:28.694726 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-3-master-0" event={"ID":"e0d127be-2d13-449b-915b-2d49052baf02","Type":"ContainerStarted","Data":"d6df90fd64794ccde6d9875bd568053d6569144302c72ab9173cf35f762dfd22"}
Mar 18 09:03:28.695405 master-0 kubenswrapper[7620]: I0318 09:03:28.694814 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-3-master-0" event={"ID":"e0d127be-2d13-449b-915b-2d49052baf02","Type":"ContainerStarted","Data":"548400f1bcdf7de3d454a40cdac983932202fdf4d758178348c7545ba7209bcb"}
Mar 18 09:03:28.713167 master-0 kubenswrapper[7620]: I0318 09:03:28.713047 7620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-3-master-0" podStartSLOduration=1.71300842 podStartE2EDuration="1.71300842s" podCreationTimestamp="2026-03-18 09:03:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 09:03:28.712771004 +0000 UTC m=+872.707552766" watchObservedRunningTime="2026-03-18 09:03:28.71300842 +0000 UTC m=+872.707790222"
Mar 18 09:03:42.978077 master-0 kubenswrapper[7620]: I0318 09:03:42.977931 7620 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"]
Mar 18 09:03:42.979046 master-0 kubenswrapper[7620]: I0318 09:03:42.978441 7620 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="c229b92d307e46237f6273edcc98d387" containerName="kube-controller-manager-cert-syncer" containerID="cri-o://5c751dbb03b0e78f3ed7a9a2441228c32321443d29de48b1bf17ef0e83072bd3" gracePeriod=30
Mar 18 09:03:42.979046 master-0 kubenswrapper[7620]: I0318 09:03:42.978603 7620 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="c229b92d307e46237f6273edcc98d387" containerName="kube-controller-manager-recovery-controller" containerID="cri-o://9e36a51bcf12ae7db2a94f2fd56063ee6085dd854239e6802000e5e8cda9a85b" gracePeriod=30
Mar 18 09:03:42.979046 master-0 kubenswrapper[7620]: I0318 09:03:42.978628 7620 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="c229b92d307e46237f6273edcc98d387" containerName="cluster-policy-controller" containerID="cri-o://20f67081f1a83df8fa8825fe68b2011f445e7f6dd6a012bd23cbd198b1272dee" gracePeriod=30
Mar 18 09:03:42.979046 master-0 kubenswrapper[7620]: I0318 09:03:42.978632 7620 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="c229b92d307e46237f6273edcc98d387" containerName="kube-controller-manager" containerID="cri-o://83c47aaabc2b561d44e630d0889d72720d976ad68c17142beae85f320c2926a1" gracePeriod=30
Mar 18 09:03:42.980743 master-0 kubenswrapper[7620]: I0318 09:03:42.980704 7620 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"]
Mar 18 09:03:42.981232 master-0 kubenswrapper[7620]: E0318 09:03:42.981200 7620 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c229b92d307e46237f6273edcc98d387" containerName="cluster-policy-controller"
Mar 18 09:03:42.981302 master-0 kubenswrapper[7620]: I0318 09:03:42.981235 7620 state_mem.go:107] "Deleted CPUSet assignment" podUID="c229b92d307e46237f6273edcc98d387" containerName="cluster-policy-controller"
Mar 18 09:03:42.981302 master-0 kubenswrapper[7620]: E0318 09:03:42.981266 7620 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c229b92d307e46237f6273edcc98d387" containerName="cluster-policy-controller"
Mar 18 09:03:42.981302 master-0 kubenswrapper[7620]: I0318 09:03:42.981279 7620 state_mem.go:107] "Deleted CPUSet assignment" podUID="c229b92d307e46237f6273edcc98d387" containerName="cluster-policy-controller"
Mar 18 09:03:42.981390 master-0 kubenswrapper[7620]: E0318 09:03:42.981319 7620 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c229b92d307e46237f6273edcc98d387" containerName="cluster-policy-controller"
Mar 18 09:03:42.981390 master-0 kubenswrapper[7620]: I0318 09:03:42.981333 7620 state_mem.go:107] "Deleted CPUSet assignment" podUID="c229b92d307e46237f6273edcc98d387" containerName="cluster-policy-controller"
Mar 18 09:03:42.981390 master-0 kubenswrapper[7620]: E0318 09:03:42.981356 7620 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c229b92d307e46237f6273edcc98d387" containerName="cluster-policy-controller"
Mar 18 09:03:42.981390 master-0 kubenswrapper[7620]: I0318 09:03:42.981369 7620 state_mem.go:107] "Deleted CPUSet assignment" podUID="c229b92d307e46237f6273edcc98d387" containerName="cluster-policy-controller"
Mar 18 09:03:42.981501 master-0 kubenswrapper[7620]: E0318 09:03:42.981419 7620 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c229b92d307e46237f6273edcc98d387" containerName="kube-controller-manager-cert-syncer"
Mar 18 09:03:42.981501 master-0 kubenswrapper[7620]: I0318 09:03:42.981433 7620 state_mem.go:107] "Deleted CPUSet assignment" podUID="c229b92d307e46237f6273edcc98d387" containerName="kube-controller-manager-cert-syncer"
Mar 18 09:03:42.981501 master-0 kubenswrapper[7620]: E0318 09:03:42.981462 7620 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c229b92d307e46237f6273edcc98d387" containerName="kube-controller-manager-recovery-controller"
Mar 18 09:03:42.981501 master-0 kubenswrapper[7620]: I0318 09:03:42.981475 7620 state_mem.go:107] "Deleted CPUSet assignment" podUID="c229b92d307e46237f6273edcc98d387" containerName="kube-controller-manager-recovery-controller"
Mar 18 09:03:42.981501 master-0 kubenswrapper[7620]: E0318 09:03:42.981500 7620 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c229b92d307e46237f6273edcc98d387" containerName="kube-controller-manager"
Mar 18 09:03:42.981637 master-0 kubenswrapper[7620]: I0318 09:03:42.981516 7620 state_mem.go:107] "Deleted CPUSet assignment" podUID="c229b92d307e46237f6273edcc98d387" containerName="kube-controller-manager"
Mar 18 09:03:42.981637 master-0 kubenswrapper[7620]: E0318 09:03:42.981544 7620 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c229b92d307e46237f6273edcc98d387" containerName="kube-controller-manager"
Mar 18 09:03:42.981637 master-0 kubenswrapper[7620]: I0318 09:03:42.981556 7620 state_mem.go:107] "Deleted CPUSet assignment" podUID="c229b92d307e46237f6273edcc98d387" containerName="kube-controller-manager"
Mar 18 09:03:42.981789 master-0 kubenswrapper[7620]: I0318 09:03:42.981760 7620 memory_manager.go:354] "RemoveStaleState removing state" podUID="c229b92d307e46237f6273edcc98d387" containerName="cluster-policy-controller"
Mar 18 09:03:42.981834 master-0 kubenswrapper[7620]: I0318 09:03:42.981802 7620 memory_manager.go:354] "RemoveStaleState removing state" podUID="c229b92d307e46237f6273edcc98d387" containerName="kube-controller-manager"
Mar 18 09:03:42.981834 master-0 kubenswrapper[7620]: I0318 09:03:42.981822 7620 memory_manager.go:354] "RemoveStaleState removing state" podUID="c229b92d307e46237f6273edcc98d387" containerName="kube-controller-manager-cert-syncer"
Mar 18 09:03:42.981929 master-0 kubenswrapper[7620]: I0318 09:03:42.981835 7620 memory_manager.go:354] "RemoveStaleState removing state" podUID="c229b92d307e46237f6273edcc98d387" containerName="cluster-policy-controller"
Mar 18 09:03:42.981929 master-0 kubenswrapper[7620]: I0318 09:03:42.981891 7620 memory_manager.go:354] "RemoveStaleState removing state" podUID="c229b92d307e46237f6273edcc98d387" containerName="cluster-policy-controller"
Mar 18 09:03:42.981929 master-0 kubenswrapper[7620]: I0318 09:03:42.981912 7620 memory_manager.go:354] "RemoveStaleState removing state" podUID="c229b92d307e46237f6273edcc98d387" containerName="kube-controller-manager-recovery-controller"
Mar 18 09:03:42.981929 master-0 kubenswrapper[7620]: I0318 09:03:42.981929 7620 memory_manager.go:354] "RemoveStaleState removing state" podUID="c229b92d307e46237f6273edcc98d387" containerName="cluster-policy-controller"
Mar 18 09:03:42.982322 master-0 kubenswrapper[7620]: E0318 09:03:42.982297 7620 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c229b92d307e46237f6273edcc98d387" containerName="cluster-policy-controller"
Mar 18 09:03:42.982322 master-0 kubenswrapper[7620]: I0318 09:03:42.982320 7620 state_mem.go:107] "Deleted CPUSet assignment" podUID="c229b92d307e46237f6273edcc98d387" containerName="cluster-policy-controller"
Mar 18 09:03:42.982604 master-0 kubenswrapper[7620]: I0318 09:03:42.982572 7620 memory_manager.go:354] "RemoveStaleState removing state" podUID="c229b92d307e46237f6273edcc98d387" containerName="cluster-policy-controller"
Mar 18 09:03:42.982679 master-0 kubenswrapper[7620]: I0318 09:03:42.982613 7620 memory_manager.go:354] "RemoveStaleState removing state" podUID="c229b92d307e46237f6273edcc98d387" containerName="kube-controller-manager"
Mar 18 09:03:43.147235 master-0 kubenswrapper[7620]: I0318 09:03:43.147180 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/221b44bcdfcd6cb77b8e2c3e2f0f2d4d-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"221b44bcdfcd6cb77b8e2c3e2f0f2d4d\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 18 09:03:43.147354 master-0 kubenswrapper[7620]: I0318 09:03:43.147316 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/221b44bcdfcd6cb77b8e2c3e2f0f2d4d-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"221b44bcdfcd6cb77b8e2c3e2f0f2d4d\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 18 09:03:43.166400 master-0 kubenswrapper[7620]: I0318 09:03:43.166322 7620 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_c229b92d307e46237f6273edcc98d387/cluster-policy-controller/3.log"
Mar 18 09:03:43.167947 master-0 kubenswrapper[7620]: I0318 09:03:43.167919 7620 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_c229b92d307e46237f6273edcc98d387/kube-controller-manager-cert-syncer/0.log"
Mar 18 09:03:43.168413 master-0 kubenswrapper[7620]: I0318 09:03:43.168399 7620 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_c229b92d307e46237f6273edcc98d387/kube-controller-manager/0.log"
Mar 18 09:03:43.168552 master-0 kubenswrapper[7620]: I0318 09:03:43.168539 7620 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 09:03:43.172780 master-0 kubenswrapper[7620]: I0318 09:03:43.172713 7620 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" oldPodUID="c229b92d307e46237f6273edcc98d387" podUID="221b44bcdfcd6cb77b8e2c3e2f0f2d4d" Mar 18 09:03:43.249411 master-0 kubenswrapper[7620]: I0318 09:03:43.249113 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/221b44bcdfcd6cb77b8e2c3e2f0f2d4d-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"221b44bcdfcd6cb77b8e2c3e2f0f2d4d\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 09:03:43.249650 master-0 kubenswrapper[7620]: I0318 09:03:43.249414 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/221b44bcdfcd6cb77b8e2c3e2f0f2d4d-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"221b44bcdfcd6cb77b8e2c3e2f0f2d4d\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 09:03:43.250153 master-0 kubenswrapper[7620]: I0318 09:03:43.249783 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/221b44bcdfcd6cb77b8e2c3e2f0f2d4d-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"221b44bcdfcd6cb77b8e2c3e2f0f2d4d\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 09:03:43.250153 master-0 kubenswrapper[7620]: I0318 09:03:43.249923 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/221b44bcdfcd6cb77b8e2c3e2f0f2d4d-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"221b44bcdfcd6cb77b8e2c3e2f0f2d4d\") 
" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 09:03:43.351741 master-0 kubenswrapper[7620]: I0318 09:03:43.351620 7620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/c229b92d307e46237f6273edcc98d387-cert-dir\") pod \"c229b92d307e46237f6273edcc98d387\" (UID: \"c229b92d307e46237f6273edcc98d387\") " Mar 18 09:03:43.351741 master-0 kubenswrapper[7620]: I0318 09:03:43.351765 7620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/c229b92d307e46237f6273edcc98d387-resource-dir\") pod \"c229b92d307e46237f6273edcc98d387\" (UID: \"c229b92d307e46237f6273edcc98d387\") " Mar 18 09:03:43.352252 master-0 kubenswrapper[7620]: I0318 09:03:43.352141 7620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c229b92d307e46237f6273edcc98d387-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "c229b92d307e46237f6273edcc98d387" (UID: "c229b92d307e46237f6273edcc98d387"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 09:03:43.352409 master-0 kubenswrapper[7620]: I0318 09:03:43.352316 7620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c229b92d307e46237f6273edcc98d387-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "c229b92d307e46237f6273edcc98d387" (UID: "c229b92d307e46237f6273edcc98d387"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 09:03:43.352561 master-0 kubenswrapper[7620]: I0318 09:03:43.352474 7620 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/c229b92d307e46237f6273edcc98d387-cert-dir\") on node \"master-0\" DevicePath \"\"" Mar 18 09:03:43.352561 master-0 kubenswrapper[7620]: I0318 09:03:43.352495 7620 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/c229b92d307e46237f6273edcc98d387-resource-dir\") on node \"master-0\" DevicePath \"\"" Mar 18 09:03:43.826665 master-0 kubenswrapper[7620]: I0318 09:03:43.826600 7620 generic.go:334] "Generic (PLEG): container finished" podID="3068e569-5a4e-4fc3-88f4-5684d093c8e6" containerID="54302cdad4a743df0858f296cab89bada38f903f22c51e9048d06d7146e16775" exitCode=0 Mar 18 09:03:43.826665 master-0 kubenswrapper[7620]: I0318 09:03:43.826675 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-4-master-0" event={"ID":"3068e569-5a4e-4fc3-88f4-5684d093c8e6","Type":"ContainerDied","Data":"54302cdad4a743df0858f296cab89bada38f903f22c51e9048d06d7146e16775"} Mar 18 09:03:43.831343 master-0 kubenswrapper[7620]: I0318 09:03:43.831278 7620 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_c229b92d307e46237f6273edcc98d387/cluster-policy-controller/3.log" Mar 18 09:03:43.833134 master-0 kubenswrapper[7620]: I0318 09:03:43.833081 7620 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_c229b92d307e46237f6273edcc98d387/kube-controller-manager-cert-syncer/0.log" Mar 18 09:03:43.834484 master-0 kubenswrapper[7620]: I0318 09:03:43.834445 7620 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_c229b92d307e46237f6273edcc98d387/kube-controller-manager/0.log" Mar 18 09:03:43.834722 master-0 kubenswrapper[7620]: I0318 09:03:43.834690 7620 generic.go:334] "Generic (PLEG): container finished" podID="c229b92d307e46237f6273edcc98d387" containerID="83c47aaabc2b561d44e630d0889d72720d976ad68c17142beae85f320c2926a1" exitCode=0 Mar 18 09:03:43.834904 master-0 kubenswrapper[7620]: I0318 09:03:43.834754 7620 scope.go:117] "RemoveContainer" containerID="25aa8e7a5fe1cd4cb308d45095cfc8ec891476603ff1037e70498c15fb355808" Mar 18 09:03:43.835021 master-0 kubenswrapper[7620]: I0318 09:03:43.834904 7620 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 09:03:43.835021 master-0 kubenswrapper[7620]: I0318 09:03:43.834845 7620 generic.go:334] "Generic (PLEG): container finished" podID="c229b92d307e46237f6273edcc98d387" containerID="20f67081f1a83df8fa8825fe68b2011f445e7f6dd6a012bd23cbd198b1272dee" exitCode=0 Mar 18 09:03:43.835021 master-0 kubenswrapper[7620]: I0318 09:03:43.834982 7620 generic.go:334] "Generic (PLEG): container finished" podID="c229b92d307e46237f6273edcc98d387" containerID="9e36a51bcf12ae7db2a94f2fd56063ee6085dd854239e6802000e5e8cda9a85b" exitCode=0 Mar 18 09:03:43.835021 master-0 kubenswrapper[7620]: I0318 09:03:43.834999 7620 generic.go:334] "Generic (PLEG): container finished" podID="c229b92d307e46237f6273edcc98d387" containerID="5c751dbb03b0e78f3ed7a9a2441228c32321443d29de48b1bf17ef0e83072bd3" exitCode=2 Mar 18 09:03:43.835334 master-0 kubenswrapper[7620]: I0318 09:03:43.835049 7620 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="86fa4125270c3c49a4a19e870a994342691ddd1c81df5fef0113e7b2940e9561" Mar 18 09:03:43.862444 master-0 kubenswrapper[7620]: I0318 09:03:43.862380 7620 scope.go:117] "RemoveContainer" 
containerID="d3073dc46ac31370e3b380a38f0a5624ea2c98824ecd27b578b4114468b40e36" Mar 18 09:03:43.863439 master-0 kubenswrapper[7620]: I0318 09:03:43.863363 7620 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" oldPodUID="c229b92d307e46237f6273edcc98d387" podUID="221b44bcdfcd6cb77b8e2c3e2f0f2d4d" Mar 18 09:03:43.881112 master-0 kubenswrapper[7620]: I0318 09:03:43.881036 7620 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" oldPodUID="c229b92d307e46237f6273edcc98d387" podUID="221b44bcdfcd6cb77b8e2c3e2f0f2d4d" Mar 18 09:03:44.240592 master-0 kubenswrapper[7620]: I0318 09:03:44.240516 7620 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c229b92d307e46237f6273edcc98d387" path="/var/lib/kubelet/pods/c229b92d307e46237f6273edcc98d387/volumes" Mar 18 09:03:44.844896 master-0 kubenswrapper[7620]: I0318 09:03:44.844822 7620 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_c229b92d307e46237f6273edcc98d387/kube-controller-manager-cert-syncer/0.log" Mar 18 09:03:45.140324 master-0 kubenswrapper[7620]: I0318 09:03:45.140286 7620 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/installer-4-master-0" Mar 18 09:03:45.280172 master-0 kubenswrapper[7620]: I0318 09:03:45.280110 7620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3068e569-5a4e-4fc3-88f4-5684d093c8e6-kube-api-access\") pod \"3068e569-5a4e-4fc3-88f4-5684d093c8e6\" (UID: \"3068e569-5a4e-4fc3-88f4-5684d093c8e6\") " Mar 18 09:03:45.280665 master-0 kubenswrapper[7620]: I0318 09:03:45.280204 7620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3068e569-5a4e-4fc3-88f4-5684d093c8e6-kubelet-dir\") pod \"3068e569-5a4e-4fc3-88f4-5684d093c8e6\" (UID: \"3068e569-5a4e-4fc3-88f4-5684d093c8e6\") " Mar 18 09:03:45.280665 master-0 kubenswrapper[7620]: I0318 09:03:45.280279 7620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/3068e569-5a4e-4fc3-88f4-5684d093c8e6-var-lock\") pod \"3068e569-5a4e-4fc3-88f4-5684d093c8e6\" (UID: \"3068e569-5a4e-4fc3-88f4-5684d093c8e6\") " Mar 18 09:03:45.280665 master-0 kubenswrapper[7620]: I0318 09:03:45.280392 7620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3068e569-5a4e-4fc3-88f4-5684d093c8e6-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "3068e569-5a4e-4fc3-88f4-5684d093c8e6" (UID: "3068e569-5a4e-4fc3-88f4-5684d093c8e6"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 09:03:45.280665 master-0 kubenswrapper[7620]: I0318 09:03:45.280521 7620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3068e569-5a4e-4fc3-88f4-5684d093c8e6-var-lock" (OuterVolumeSpecName: "var-lock") pod "3068e569-5a4e-4fc3-88f4-5684d093c8e6" (UID: "3068e569-5a4e-4fc3-88f4-5684d093c8e6"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 09:03:45.280843 master-0 kubenswrapper[7620]: I0318 09:03:45.280820 7620 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3068e569-5a4e-4fc3-88f4-5684d093c8e6-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Mar 18 09:03:45.280843 master-0 kubenswrapper[7620]: I0318 09:03:45.280839 7620 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/3068e569-5a4e-4fc3-88f4-5684d093c8e6-var-lock\") on node \"master-0\" DevicePath \"\"" Mar 18 09:03:45.283160 master-0 kubenswrapper[7620]: I0318 09:03:45.283106 7620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3068e569-5a4e-4fc3-88f4-5684d093c8e6-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "3068e569-5a4e-4fc3-88f4-5684d093c8e6" (UID: "3068e569-5a4e-4fc3-88f4-5684d093c8e6"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 09:03:45.383725 master-0 kubenswrapper[7620]: I0318 09:03:45.383526 7620 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3068e569-5a4e-4fc3-88f4-5684d093c8e6-kube-api-access\") on node \"master-0\" DevicePath \"\"" Mar 18 09:03:45.857992 master-0 kubenswrapper[7620]: I0318 09:03:45.857918 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-4-master-0" event={"ID":"3068e569-5a4e-4fc3-88f4-5684d093c8e6","Type":"ContainerDied","Data":"564cb8426369721ba7067b6ba1d2db58be0d2b7219cd8ee2b9c066b14b29b589"} Mar 18 09:03:45.859695 master-0 kubenswrapper[7620]: I0318 09:03:45.859658 7620 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="564cb8426369721ba7067b6ba1d2db58be0d2b7219cd8ee2b9c066b14b29b589" Mar 18 09:03:45.859905 master-0 kubenswrapper[7620]: I0318 09:03:45.858087 7620 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/installer-4-master-0" Mar 18 09:03:56.955551 master-0 kubenswrapper[7620]: I0318 09:03:56.955455 7620 generic.go:334] "Generic (PLEG): container finished" podID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerID="a4436209a1c80a403c36e67bb8b4310cdae3c04ffc3d3675bb5372419c24b948" exitCode=0 Mar 18 09:03:56.955551 master-0 kubenswrapper[7620]: I0318 09:03:56.955508 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" event={"ID":"ad4cf9b2-4e66-4921-a30c-7b659bff06ab","Type":"ContainerDied","Data":"a4436209a1c80a403c36e67bb8b4310cdae3c04ffc3d3675bb5372419c24b948"} Mar 18 09:03:56.955551 master-0 kubenswrapper[7620]: I0318 09:03:56.955564 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" event={"ID":"ad4cf9b2-4e66-4921-a30c-7b659bff06ab","Type":"ContainerStarted","Data":"0128c1ea9b1d4950ffa5f6752eab918a7e46d3902fc3a54d21a7e581b72d5af7"} Mar 18 09:03:56.956689 master-0 kubenswrapper[7620]: I0318 09:03:56.955603 7620 scope.go:117] "RemoveContainer" containerID="4a7dbd9949adb4dd8d63e9de3470c7186002c65ba78caccdd813c4fb43556282" Mar 18 09:03:57.479431 master-0 kubenswrapper[7620]: I0318 09:03:57.479370 7620 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" Mar 18 09:03:57.482508 master-0 kubenswrapper[7620]: I0318 09:03:57.482429 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 09:03:57.482508 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld Mar 18 09:03:57.482508 master-0 kubenswrapper[7620]: [+]process-running ok Mar 18 09:03:57.482508 master-0 kubenswrapper[7620]: healthz check failed Mar 18 
09:03:57.483112 master-0 kubenswrapper[7620]: I0318 09:03:57.482524 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 09:03:57.542094 master-0 kubenswrapper[7620]: I0318 09:03:57.541999 7620 scope.go:117] "RemoveContainer" containerID="f16aa514802c2b1e949ae0cfb51e228ea684c95d020ba4b520a18da905fe2dcf" Mar 18 09:03:58.223679 master-0 kubenswrapper[7620]: I0318 09:03:58.223603 7620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 09:03:58.264509 master-0 kubenswrapper[7620]: I0318 09:03:58.264423 7620 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="1fb3ef25-9cec-4754-9bab-41963fc5d31d" Mar 18 09:03:58.264509 master-0 kubenswrapper[7620]: I0318 09:03:58.264480 7620 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="1fb3ef25-9cec-4754-9bab-41963fc5d31d" Mar 18 09:03:58.285024 master-0 kubenswrapper[7620]: I0318 09:03:58.284936 7620 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 09:03:58.295564 master-0 kubenswrapper[7620]: I0318 09:03:58.295485 7620 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"] Mar 18 09:03:58.304820 master-0 kubenswrapper[7620]: I0318 09:03:58.304745 7620 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 09:03:58.310463 master-0 kubenswrapper[7620]: I0318 09:03:58.310390 7620 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"] Mar 18 09:03:58.315232 master-0 kubenswrapper[7620]: I0318 09:03:58.315065 7620 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"] Mar 18 09:03:58.323842 master-0 kubenswrapper[7620]: W0318 09:03:58.323780 7620 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod221b44bcdfcd6cb77b8e2c3e2f0f2d4d.slice/crio-78bf827b88ee656669c068d855b66ac1c4ec3fa61f0cd2ad36e3510f8a53aa74 WatchSource:0}: Error finding container 78bf827b88ee656669c068d855b66ac1c4ec3fa61f0cd2ad36e3510f8a53aa74: Status 404 returned error can't find the container with id 78bf827b88ee656669c068d855b66ac1c4ec3fa61f0cd2ad36e3510f8a53aa74 Mar 18 09:03:58.482019 master-0 kubenswrapper[7620]: I0318 09:03:58.481965 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 09:03:58.482019 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld Mar 18 09:03:58.482019 master-0 kubenswrapper[7620]: [+]process-running ok Mar 18 09:03:58.482019 master-0 kubenswrapper[7620]: healthz check failed Mar 18 09:03:58.482246 master-0 kubenswrapper[7620]: I0318 09:03:58.482047 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 09:03:58.976909 master-0 kubenswrapper[7620]: I0318 
09:03:58.976862 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"221b44bcdfcd6cb77b8e2c3e2f0f2d4d","Type":"ContainerStarted","Data":"746628ca0f199826dfd3bc06cf87dbdd87e54d430ca10bd219daa6085cbf8a4d"} Mar 18 09:03:58.977055 master-0 kubenswrapper[7620]: I0318 09:03:58.976919 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"221b44bcdfcd6cb77b8e2c3e2f0f2d4d","Type":"ContainerStarted","Data":"d97db5d026fe92a56cfbca9758234c41757acb8dc1abb2bb2c234db07d8dfdc2"} Mar 18 09:03:58.977055 master-0 kubenswrapper[7620]: I0318 09:03:58.976933 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"221b44bcdfcd6cb77b8e2c3e2f0f2d4d","Type":"ContainerStarted","Data":"78bf827b88ee656669c068d855b66ac1c4ec3fa61f0cd2ad36e3510f8a53aa74"} Mar 18 09:03:59.482075 master-0 kubenswrapper[7620]: I0318 09:03:59.482013 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 09:03:59.482075 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld Mar 18 09:03:59.482075 master-0 kubenswrapper[7620]: [+]process-running ok Mar 18 09:03:59.482075 master-0 kubenswrapper[7620]: healthz check failed Mar 18 09:03:59.482724 master-0 kubenswrapper[7620]: I0318 09:03:59.482108 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 09:03:59.986632 master-0 kubenswrapper[7620]: I0318 09:03:59.986585 7620 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"221b44bcdfcd6cb77b8e2c3e2f0f2d4d","Type":"ContainerStarted","Data":"4af328de55b37c7b433e572f442808a77f2decb59e8d9510022e17913ed81d1a"} Mar 18 09:03:59.986969 master-0 kubenswrapper[7620]: I0318 09:03:59.986944 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"221b44bcdfcd6cb77b8e2c3e2f0f2d4d","Type":"ContainerStarted","Data":"05af700b075e3edb59ffbfea5410d94b5accce0f9865867a900b8da32596bc13"} Mar 18 09:04:00.013402 master-0 kubenswrapper[7620]: I0318 09:04:00.013314 7620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podStartSLOduration=2.013295783 podStartE2EDuration="2.013295783s" podCreationTimestamp="2026-03-18 09:03:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 09:04:00.010972011 +0000 UTC m=+904.005753803" watchObservedRunningTime="2026-03-18 09:04:00.013295783 +0000 UTC m=+904.008077535" Mar 18 09:04:00.481933 master-0 kubenswrapper[7620]: I0318 09:04:00.481430 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 09:04:00.481933 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld Mar 18 09:04:00.481933 master-0 kubenswrapper[7620]: [+]process-running ok Mar 18 09:04:00.481933 master-0 kubenswrapper[7620]: healthz check failed Mar 18 09:04:00.481933 master-0 kubenswrapper[7620]: I0318 09:04:00.481511 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" 
containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 09:04:01.483438 master-0 kubenswrapper[7620]: I0318 09:04:01.483324 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 09:04:01.483438 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld Mar 18 09:04:01.483438 master-0 kubenswrapper[7620]: [+]process-running ok Mar 18 09:04:01.483438 master-0 kubenswrapper[7620]: healthz check failed Mar 18 09:04:01.483438 master-0 kubenswrapper[7620]: I0318 09:04:01.483437 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 09:04:02.483123 master-0 kubenswrapper[7620]: I0318 09:04:02.483020 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 09:04:02.483123 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld Mar 18 09:04:02.483123 master-0 kubenswrapper[7620]: [+]process-running ok Mar 18 09:04:02.483123 master-0 kubenswrapper[7620]: healthz check failed Mar 18 09:04:02.484833 master-0 kubenswrapper[7620]: I0318 09:04:02.483147 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 09:04:02.787199 master-0 kubenswrapper[7620]: I0318 09:04:02.787115 7620 kubelet.go:2431] "SyncLoop REMOVE" 
source="file" pods=["kube-system/bootstrap-kube-scheduler-master-0"] Mar 18 09:04:02.787531 master-0 kubenswrapper[7620]: I0318 09:04:02.787368 7620 kuberuntime_container.go:808] "Killing container with a grace period" pod="kube-system/bootstrap-kube-scheduler-master-0" podUID="c83737980b9ee109184b1d78e942cf36" containerName="kube-scheduler" containerID="cri-o://965c96bceffdf0d2dfe6811ad54d4d08d2afc86948c8800b709c2385cc93d84e" gracePeriod=30 Mar 18 09:04:02.789377 master-0 kubenswrapper[7620]: I0318 09:04:02.788237 7620 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-scheduler/openshift-kube-scheduler-master-0"] Mar 18 09:04:02.789377 master-0 kubenswrapper[7620]: E0318 09:04:02.788540 7620 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3068e569-5a4e-4fc3-88f4-5684d093c8e6" containerName="installer" Mar 18 09:04:02.789377 master-0 kubenswrapper[7620]: I0318 09:04:02.788557 7620 state_mem.go:107] "Deleted CPUSet assignment" podUID="3068e569-5a4e-4fc3-88f4-5684d093c8e6" containerName="installer" Mar 18 09:04:02.789377 master-0 kubenswrapper[7620]: E0318 09:04:02.788574 7620 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c83737980b9ee109184b1d78e942cf36" containerName="kube-scheduler" Mar 18 09:04:02.789377 master-0 kubenswrapper[7620]: I0318 09:04:02.788581 7620 state_mem.go:107] "Deleted CPUSet assignment" podUID="c83737980b9ee109184b1d78e942cf36" containerName="kube-scheduler" Mar 18 09:04:02.789377 master-0 kubenswrapper[7620]: E0318 09:04:02.788615 7620 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c83737980b9ee109184b1d78e942cf36" containerName="kube-scheduler" Mar 18 09:04:02.789377 master-0 kubenswrapper[7620]: I0318 09:04:02.788623 7620 state_mem.go:107] "Deleted CPUSet assignment" podUID="c83737980b9ee109184b1d78e942cf36" containerName="kube-scheduler" Mar 18 09:04:02.789377 master-0 kubenswrapper[7620]: I0318 09:04:02.788799 7620 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="c83737980b9ee109184b1d78e942cf36" containerName="kube-scheduler" Mar 18 09:04:02.789377 master-0 kubenswrapper[7620]: I0318 09:04:02.788816 7620 memory_manager.go:354] "RemoveStaleState removing state" podUID="c83737980b9ee109184b1d78e942cf36" containerName="kube-scheduler" Mar 18 09:04:02.789377 master-0 kubenswrapper[7620]: I0318 09:04:02.788826 7620 memory_manager.go:354] "RemoveStaleState removing state" podUID="c83737980b9ee109184b1d78e942cf36" containerName="kube-scheduler" Mar 18 09:04:02.789377 master-0 kubenswrapper[7620]: I0318 09:04:02.788837 7620 memory_manager.go:354] "RemoveStaleState removing state" podUID="3068e569-5a4e-4fc3-88f4-5684d093c8e6" containerName="installer" Mar 18 09:04:02.789377 master-0 kubenswrapper[7620]: E0318 09:04:02.789194 7620 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c83737980b9ee109184b1d78e942cf36" containerName="kube-scheduler" Mar 18 09:04:02.789377 master-0 kubenswrapper[7620]: I0318 09:04:02.789205 7620 state_mem.go:107] "Deleted CPUSet assignment" podUID="c83737980b9ee109184b1d78e942cf36" containerName="kube-scheduler" Mar 18 09:04:02.790438 master-0 kubenswrapper[7620]: I0318 09:04:02.790394 7620 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 18 09:04:02.875513 master-0 kubenswrapper[7620]: I0318 09:04:02.875440 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/11a2f93448b9d54da9854663936e2b73-cert-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"11a2f93448b9d54da9854663936e2b73\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 18 09:04:02.875831 master-0 kubenswrapper[7620]: I0318 09:04:02.875538 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/11a2f93448b9d54da9854663936e2b73-resource-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"11a2f93448b9d54da9854663936e2b73\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 18 09:04:02.976826 master-0 kubenswrapper[7620]: I0318 09:04:02.976730 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/11a2f93448b9d54da9854663936e2b73-cert-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"11a2f93448b9d54da9854663936e2b73\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 18 09:04:02.977178 master-0 kubenswrapper[7620]: I0318 09:04:02.976848 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/11a2f93448b9d54da9854663936e2b73-resource-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"11a2f93448b9d54da9854663936e2b73\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 18 09:04:02.977178 master-0 kubenswrapper[7620]: I0318 09:04:02.976900 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: 
\"kubernetes.io/host-path/11a2f93448b9d54da9854663936e2b73-cert-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"11a2f93448b9d54da9854663936e2b73\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 18 09:04:02.977178 master-0 kubenswrapper[7620]: I0318 09:04:02.977026 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/11a2f93448b9d54da9854663936e2b73-resource-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"11a2f93448b9d54da9854663936e2b73\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 18 09:04:02.987954 master-0 kubenswrapper[7620]: I0318 09:04:02.982195 7620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 18 09:04:02.989260 master-0 kubenswrapper[7620]: I0318 09:04:02.989204 7620 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kube-system/bootstrap-kube-scheduler-master-0" Mar 18 09:04:02.997170 master-0 kubenswrapper[7620]: I0318 09:04:02.997095 7620 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-master-0"] Mar 18 09:04:03.044663 master-0 kubenswrapper[7620]: W0318 09:04:03.040108 7620 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod11a2f93448b9d54da9854663936e2b73.slice/crio-f2c2ecd78b0b095cca6d610f53e1ff83eedc17b6a054e2d1a3484b11ec8181f6 WatchSource:0}: Error finding container f2c2ecd78b0b095cca6d610f53e1ff83eedc17b6a054e2d1a3484b11ec8181f6: Status 404 returned error can't find the container with id f2c2ecd78b0b095cca6d610f53e1ff83eedc17b6a054e2d1a3484b11ec8181f6 Mar 18 09:04:03.044663 master-0 kubenswrapper[7620]: I0318 09:04:03.040413 7620 kubelet.go:2706] "Unable to find pod for mirror pod, skipping" mirrorPod="kube-system/bootstrap-kube-scheduler-master-0" 
mirrorPodUID="a5fe57c6-bc68-4dbb-9ebf-25b6830fc04c" Mar 18 09:04:03.047177 master-0 kubenswrapper[7620]: I0318 09:04:03.046839 7620 generic.go:334] "Generic (PLEG): container finished" podID="c83737980b9ee109184b1d78e942cf36" containerID="965c96bceffdf0d2dfe6811ad54d4d08d2afc86948c8800b709c2385cc93d84e" exitCode=0 Mar 18 09:04:03.047177 master-0 kubenswrapper[7620]: I0318 09:04:03.046936 7620 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dda73eca8049d85d927941d52bde4240cdb56ba2b8f10407c2247ac72190f9f1" Mar 18 09:04:03.047177 master-0 kubenswrapper[7620]: I0318 09:04:03.046956 7620 scope.go:117] "RemoveContainer" containerID="db516bae26a48292c2104c2ecfafa39292fbbc58aaf43ed786161ac8d6801cb8" Mar 18 09:04:03.047177 master-0 kubenswrapper[7620]: I0318 09:04:03.047091 7620 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kube-system/bootstrap-kube-scheduler-master-0" Mar 18 09:04:03.077699 master-0 kubenswrapper[7620]: I0318 09:04:03.077632 7620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/c83737980b9ee109184b1d78e942cf36-logs\") pod \"c83737980b9ee109184b1d78e942cf36\" (UID: \"c83737980b9ee109184b1d78e942cf36\") " Mar 18 09:04:03.077809 master-0 kubenswrapper[7620]: I0318 09:04:03.077754 7620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/c83737980b9ee109184b1d78e942cf36-secrets\") pod \"c83737980b9ee109184b1d78e942cf36\" (UID: \"c83737980b9ee109184b1d78e942cf36\") " Mar 18 09:04:03.078292 master-0 kubenswrapper[7620]: I0318 09:04:03.078256 7620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c83737980b9ee109184b1d78e942cf36-secrets" (OuterVolumeSpecName: "secrets") pod "c83737980b9ee109184b1d78e942cf36" (UID: "c83737980b9ee109184b1d78e942cf36"). InnerVolumeSpecName "secrets". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 09:04:03.078415 master-0 kubenswrapper[7620]: I0318 09:04:03.078357 7620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c83737980b9ee109184b1d78e942cf36-logs" (OuterVolumeSpecName: "logs") pod "c83737980b9ee109184b1d78e942cf36" (UID: "c83737980b9ee109184b1d78e942cf36"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 09:04:03.180508 master-0 kubenswrapper[7620]: I0318 09:04:03.180427 7620 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/c83737980b9ee109184b1d78e942cf36-logs\") on node \"master-0\" DevicePath \"\"" Mar 18 09:04:03.180508 master-0 kubenswrapper[7620]: I0318 09:04:03.180496 7620 reconciler_common.go:293] "Volume detached for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/c83737980b9ee109184b1d78e942cf36-secrets\") on node \"master-0\" DevicePath \"\"" Mar 18 09:04:03.481865 master-0 kubenswrapper[7620]: I0318 09:04:03.481785 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 09:04:03.481865 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld Mar 18 09:04:03.481865 master-0 kubenswrapper[7620]: [+]process-running ok Mar 18 09:04:03.481865 master-0 kubenswrapper[7620]: healthz check failed Mar 18 09:04:03.482191 master-0 kubenswrapper[7620]: I0318 09:04:03.481879 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 09:04:04.057525 master-0 kubenswrapper[7620]: I0318 09:04:04.057438 7620 generic.go:334] "Generic (PLEG): 
container finished" podID="bfb95119-ed96-428c-8a9b-7e29f48b5d4b" containerID="44961de8599bb63e15f17ececbcbbdf128ff00606cbb65189b93cdcbe9f41ba2" exitCode=0 Mar 18 09:04:04.058230 master-0 kubenswrapper[7620]: I0318 09:04:04.057568 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-6-master-0" event={"ID":"bfb95119-ed96-428c-8a9b-7e29f48b5d4b","Type":"ContainerDied","Data":"44961de8599bb63e15f17ececbcbbdf128ff00606cbb65189b93cdcbe9f41ba2"} Mar 18 09:04:04.064396 master-0 kubenswrapper[7620]: I0318 09:04:04.064326 7620 generic.go:334] "Generic (PLEG): container finished" podID="11a2f93448b9d54da9854663936e2b73" containerID="8518fd5fa5f57002df2dc9e0199a7271feebc95e929446acfa8563e63e176f72" exitCode=0 Mar 18 09:04:04.064396 master-0 kubenswrapper[7620]: I0318 09:04:04.064394 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"11a2f93448b9d54da9854663936e2b73","Type":"ContainerDied","Data":"8518fd5fa5f57002df2dc9e0199a7271feebc95e929446acfa8563e63e176f72"} Mar 18 09:04:04.064756 master-0 kubenswrapper[7620]: I0318 09:04:04.064436 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"11a2f93448b9d54da9854663936e2b73","Type":"ContainerStarted","Data":"f2c2ecd78b0b095cca6d610f53e1ff83eedc17b6a054e2d1a3484b11ec8181f6"} Mar 18 09:04:04.238098 master-0 kubenswrapper[7620]: I0318 09:04:04.238026 7620 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c83737980b9ee109184b1d78e942cf36" path="/var/lib/kubelet/pods/c83737980b9ee109184b1d78e942cf36/volumes" Mar 18 09:04:04.238544 master-0 kubenswrapper[7620]: I0318 09:04:04.238515 7620 mirror_client.go:130] "Deleting a mirror pod" pod="kube-system/bootstrap-kube-scheduler-master-0" podUID="" Mar 18 09:04:04.252714 master-0 kubenswrapper[7620]: I0318 09:04:04.252655 7620 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["kube-system/bootstrap-kube-scheduler-master-0"] Mar 18 09:04:04.252714 master-0 kubenswrapper[7620]: I0318 09:04:04.252702 7620 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="kube-system/bootstrap-kube-scheduler-master-0" mirrorPodUID="a5fe57c6-bc68-4dbb-9ebf-25b6830fc04c" Mar 18 09:04:04.255750 master-0 kubenswrapper[7620]: I0318 09:04:04.255708 7620 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["kube-system/bootstrap-kube-scheduler-master-0"] Mar 18 09:04:04.256035 master-0 kubenswrapper[7620]: I0318 09:04:04.255993 7620 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="kube-system/bootstrap-kube-scheduler-master-0" mirrorPodUID="a5fe57c6-bc68-4dbb-9ebf-25b6830fc04c" Mar 18 09:04:04.483175 master-0 kubenswrapper[7620]: I0318 09:04:04.483092 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 09:04:04.483175 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld Mar 18 09:04:04.483175 master-0 kubenswrapper[7620]: [+]process-running ok Mar 18 09:04:04.483175 master-0 kubenswrapper[7620]: healthz check failed Mar 18 09:04:04.483618 master-0 kubenswrapper[7620]: I0318 09:04:04.483220 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 09:04:05.079196 master-0 kubenswrapper[7620]: I0318 09:04:05.079112 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"11a2f93448b9d54da9854663936e2b73","Type":"ContainerStarted","Data":"ad7502485ed3a449c63b6f15d39ff562ff07af0cd6bd752a9da1258223a6c65e"} Mar 18 
09:04:05.079196 master-0 kubenswrapper[7620]: I0318 09:04:05.079199 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"11a2f93448b9d54da9854663936e2b73","Type":"ContainerStarted","Data":"a635c83202bec4f55d992caba66fbdd97cd46b5946ceda72de4cf60ec6fe987d"} Mar 18 09:04:05.079992 master-0 kubenswrapper[7620]: I0318 09:04:05.079224 7620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 18 09:04:05.079992 master-0 kubenswrapper[7620]: I0318 09:04:05.079249 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"11a2f93448b9d54da9854663936e2b73","Type":"ContainerStarted","Data":"3089c4545eabd68f6e478d7cb774f2b5eb5ad211b79b829bdc1706a3ac242a99"} Mar 18 09:04:05.114750 master-0 kubenswrapper[7620]: I0318 09:04:05.114642 7620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" podStartSLOduration=3.114616071 podStartE2EDuration="3.114616071s" podCreationTimestamp="2026-03-18 09:04:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 09:04:05.109580196 +0000 UTC m=+909.104362008" watchObservedRunningTime="2026-03-18 09:04:05.114616071 +0000 UTC m=+909.109397833" Mar 18 09:04:05.388813 master-0 kubenswrapper[7620]: I0318 09:04:05.388748 7620 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/installer-6-master-0" Mar 18 09:04:05.481656 master-0 kubenswrapper[7620]: I0318 09:04:05.481579 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 09:04:05.481656 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld Mar 18 09:04:05.481656 master-0 kubenswrapper[7620]: [+]process-running ok Mar 18 09:04:05.481656 master-0 kubenswrapper[7620]: healthz check failed Mar 18 09:04:05.482131 master-0 kubenswrapper[7620]: I0318 09:04:05.482102 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 09:04:05.520740 master-0 kubenswrapper[7620]: I0318 09:04:05.520668 7620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/bfb95119-ed96-428c-8a9b-7e29f48b5d4b-var-lock\") pod \"bfb95119-ed96-428c-8a9b-7e29f48b5d4b\" (UID: \"bfb95119-ed96-428c-8a9b-7e29f48b5d4b\") " Mar 18 09:04:05.521006 master-0 kubenswrapper[7620]: I0318 09:04:05.520786 7620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bfb95119-ed96-428c-8a9b-7e29f48b5d4b-kube-api-access\") pod \"bfb95119-ed96-428c-8a9b-7e29f48b5d4b\" (UID: \"bfb95119-ed96-428c-8a9b-7e29f48b5d4b\") " Mar 18 09:04:05.521006 master-0 kubenswrapper[7620]: I0318 09:04:05.520808 7620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bfb95119-ed96-428c-8a9b-7e29f48b5d4b-var-lock" (OuterVolumeSpecName: "var-lock") pod "bfb95119-ed96-428c-8a9b-7e29f48b5d4b" (UID: 
"bfb95119-ed96-428c-8a9b-7e29f48b5d4b"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 09:04:05.521006 master-0 kubenswrapper[7620]: I0318 09:04:05.520897 7620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/bfb95119-ed96-428c-8a9b-7e29f48b5d4b-kubelet-dir\") pod \"bfb95119-ed96-428c-8a9b-7e29f48b5d4b\" (UID: \"bfb95119-ed96-428c-8a9b-7e29f48b5d4b\") " Mar 18 09:04:05.521101 master-0 kubenswrapper[7620]: I0318 09:04:05.521022 7620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bfb95119-ed96-428c-8a9b-7e29f48b5d4b-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "bfb95119-ed96-428c-8a9b-7e29f48b5d4b" (UID: "bfb95119-ed96-428c-8a9b-7e29f48b5d4b"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 09:04:05.521384 master-0 kubenswrapper[7620]: I0318 09:04:05.521345 7620 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/bfb95119-ed96-428c-8a9b-7e29f48b5d4b-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Mar 18 09:04:05.521429 master-0 kubenswrapper[7620]: I0318 09:04:05.521383 7620 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/bfb95119-ed96-428c-8a9b-7e29f48b5d4b-var-lock\") on node \"master-0\" DevicePath \"\"" Mar 18 09:04:05.523962 master-0 kubenswrapper[7620]: I0318 09:04:05.523912 7620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bfb95119-ed96-428c-8a9b-7e29f48b5d4b-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "bfb95119-ed96-428c-8a9b-7e29f48b5d4b" (UID: "bfb95119-ed96-428c-8a9b-7e29f48b5d4b"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 09:04:05.623488 master-0 kubenswrapper[7620]: I0318 09:04:05.623329 7620 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bfb95119-ed96-428c-8a9b-7e29f48b5d4b-kube-api-access\") on node \"master-0\" DevicePath \"\"" Mar 18 09:04:06.013210 master-0 kubenswrapper[7620]: I0318 09:04:06.013120 7620 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"] Mar 18 09:04:06.013578 master-0 kubenswrapper[7620]: E0318 09:04:06.013539 7620 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bfb95119-ed96-428c-8a9b-7e29f48b5d4b" containerName="installer" Mar 18 09:04:06.013578 master-0 kubenswrapper[7620]: I0318 09:04:06.013567 7620 state_mem.go:107] "Deleted CPUSet assignment" podUID="bfb95119-ed96-428c-8a9b-7e29f48b5d4b" containerName="installer" Mar 18 09:04:06.013832 master-0 kubenswrapper[7620]: I0318 09:04:06.013725 7620 memory_manager.go:354] "RemoveStaleState removing state" podUID="bfb95119-ed96-428c-8a9b-7e29f48b5d4b" containerName="installer" Mar 18 09:04:06.014373 master-0 kubenswrapper[7620]: I0318 09:04:06.014341 7620 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 09:04:06.014494 master-0 kubenswrapper[7620]: I0318 09:04:06.014360 7620 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"] Mar 18 09:04:06.015028 master-0 kubenswrapper[7620]: I0318 09:04:06.014914 7620 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" podUID="49fac1b46a11e49501805e891baae4a9" containerName="kube-apiserver" containerID="cri-o://5ec3e7108eee8c08ca66f6f618d1955dea098f10f4832f7e925bd7f46bce001f" gracePeriod=15 Mar 18 09:04:06.015179 master-0 kubenswrapper[7620]: I0318 09:04:06.014954 7620 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" podUID="49fac1b46a11e49501805e891baae4a9" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://b0564925d47f5840821e3c795a9cfcae45b42d4975ada3f3aedc3639ab59cfb5" gracePeriod=15 Mar 18 09:04:06.017586 master-0 kubenswrapper[7620]: I0318 09:04:06.017511 7620 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-master-0"] Mar 18 09:04:06.018132 master-0 kubenswrapper[7620]: E0318 09:04:06.018075 7620 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49fac1b46a11e49501805e891baae4a9" containerName="kube-apiserver" Mar 18 09:04:06.018132 master-0 kubenswrapper[7620]: I0318 09:04:06.018129 7620 state_mem.go:107] "Deleted CPUSet assignment" podUID="49fac1b46a11e49501805e891baae4a9" containerName="kube-apiserver" Mar 18 09:04:06.018404 master-0 kubenswrapper[7620]: E0318 09:04:06.018159 7620 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49fac1b46a11e49501805e891baae4a9" containerName="kube-apiserver-insecure-readyz" Mar 18 09:04:06.018404 master-0 kubenswrapper[7620]: I0318 09:04:06.018170 7620 
state_mem.go:107] "Deleted CPUSet assignment" podUID="49fac1b46a11e49501805e891baae4a9" containerName="kube-apiserver-insecure-readyz" Mar 18 09:04:06.018404 master-0 kubenswrapper[7620]: E0318 09:04:06.018209 7620 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49fac1b46a11e49501805e891baae4a9" containerName="setup" Mar 18 09:04:06.018404 master-0 kubenswrapper[7620]: I0318 09:04:06.018219 7620 state_mem.go:107] "Deleted CPUSet assignment" podUID="49fac1b46a11e49501805e891baae4a9" containerName="setup" Mar 18 09:04:06.018810 master-0 kubenswrapper[7620]: I0318 09:04:06.018468 7620 memory_manager.go:354] "RemoveStaleState removing state" podUID="49fac1b46a11e49501805e891baae4a9" containerName="setup" Mar 18 09:04:06.018810 master-0 kubenswrapper[7620]: I0318 09:04:06.018495 7620 memory_manager.go:354] "RemoveStaleState removing state" podUID="49fac1b46a11e49501805e891baae4a9" containerName="kube-apiserver" Mar 18 09:04:06.018810 master-0 kubenswrapper[7620]: I0318 09:04:06.018535 7620 memory_manager.go:354] "RemoveStaleState removing state" podUID="49fac1b46a11e49501805e891baae4a9" containerName="kube-apiserver-insecure-readyz" Mar 18 09:04:06.021213 master-0 kubenswrapper[7620]: I0318 09:04:06.021153 7620 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 18 09:04:06.091820 master-0 kubenswrapper[7620]: E0318 09:04:06.091123 7620 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 192.168.32.10:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 09:04:06.099379 master-0 kubenswrapper[7620]: I0318 09:04:06.099307 7620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-6-master-0" event={"ID":"bfb95119-ed96-428c-8a9b-7e29f48b5d4b","Type":"ContainerDied","Data":"898f7ad0780d754bd2a9eb084988e2a8df18f477faf934c2f22dfd1716e45de9"} Mar 18 09:04:06.099551 master-0 kubenswrapper[7620]: I0318 09:04:06.099376 7620 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-6-master-0" Mar 18 09:04:06.099551 master-0 kubenswrapper[7620]: I0318 09:04:06.099383 7620 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="898f7ad0780d754bd2a9eb084988e2a8df18f477faf934c2f22dfd1716e45de9" Mar 18 09:04:06.106635 master-0 kubenswrapper[7620]: E0318 09:04:06.106513 7620 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 192.168.32.10:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 18 09:04:06.116029 master-0 kubenswrapper[7620]: I0318 09:04:06.115923 7620 status_manager.go:851] "Failed to get status for pod" podUID="bfb95119-ed96-428c-8a9b-7e29f48b5d4b" pod="openshift-kube-scheduler/installer-6-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-scheduler/pods/installer-6-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 18 09:04:06.131116 
master-0 kubenswrapper[7620]: I0318 09:04:06.130961 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/b45ea2ef1cf2bc9d1d994d6538ae0a64-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"b45ea2ef1cf2bc9d1d994d6538ae0a64\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 18 09:04:06.131116 master-0 kubenswrapper[7620]: I0318 09:04:06.131030 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/8e7a82869988463543d3d8dd1f0b5fe3-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"8e7a82869988463543d3d8dd1f0b5fe3\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 09:04:06.131116 master-0 kubenswrapper[7620]: I0318 09:04:06.131066 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/8e7a82869988463543d3d8dd1f0b5fe3-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"8e7a82869988463543d3d8dd1f0b5fe3\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 09:04:06.131491 master-0 kubenswrapper[7620]: I0318 09:04:06.131252 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b45ea2ef1cf2bc9d1d994d6538ae0a64-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"b45ea2ef1cf2bc9d1d994d6538ae0a64\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 18 09:04:06.131634 master-0 kubenswrapper[7620]: I0318 09:04:06.131554 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/8e7a82869988463543d3d8dd1f0b5fe3-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" 
(UID: \"8e7a82869988463543d3d8dd1f0b5fe3\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 09:04:06.132305 master-0 kubenswrapper[7620]: I0318 09:04:06.132065 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/8e7a82869988463543d3d8dd1f0b5fe3-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"8e7a82869988463543d3d8dd1f0b5fe3\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 09:04:06.132625 master-0 kubenswrapper[7620]: I0318 09:04:06.132405 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/8e7a82869988463543d3d8dd1f0b5fe3-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"8e7a82869988463543d3d8dd1f0b5fe3\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 09:04:06.132836 master-0 kubenswrapper[7620]: I0318 09:04:06.132781 7620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/b45ea2ef1cf2bc9d1d994d6538ae0a64-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"b45ea2ef1cf2bc9d1d994d6538ae0a64\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 18 09:04:06.232792 master-0 kubenswrapper[7620]: I0318 09:04:06.232683 7620 status_manager.go:851] "Failed to get status for pod" podUID="bfb95119-ed96-428c-8a9b-7e29f48b5d4b" pod="openshift-kube-scheduler/installer-6-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-scheduler/pods/installer-6-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 18 09:04:06.234935 master-0 kubenswrapper[7620]: I0318 09:04:06.234836 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" 
(UniqueName: \"kubernetes.io/host-path/8e7a82869988463543d3d8dd1f0b5fe3-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"8e7a82869988463543d3d8dd1f0b5fe3\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 09:04:06.234935 master-0 kubenswrapper[7620]: I0318 09:04:06.234919 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/8e7a82869988463543d3d8dd1f0b5fe3-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"8e7a82869988463543d3d8dd1f0b5fe3\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 09:04:06.235322 master-0 kubenswrapper[7620]: I0318 09:04:06.235113 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/8e7a82869988463543d3d8dd1f0b5fe3-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"8e7a82869988463543d3d8dd1f0b5fe3\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 09:04:06.235322 master-0 kubenswrapper[7620]: I0318 09:04:06.235178 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/b45ea2ef1cf2bc9d1d994d6538ae0a64-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"b45ea2ef1cf2bc9d1d994d6538ae0a64\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 18 09:04:06.235322 master-0 kubenswrapper[7620]: I0318 09:04:06.235269 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/b45ea2ef1cf2bc9d1d994d6538ae0a64-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"b45ea2ef1cf2bc9d1d994d6538ae0a64\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 18 09:04:06.235322 master-0 kubenswrapper[7620]: I0318 09:04:06.235315 7620 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/8e7a82869988463543d3d8dd1f0b5fe3-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"8e7a82869988463543d3d8dd1f0b5fe3\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 09:04:06.235610 master-0 kubenswrapper[7620]: I0318 09:04:06.235412 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/b45ea2ef1cf2bc9d1d994d6538ae0a64-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"b45ea2ef1cf2bc9d1d994d6538ae0a64\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 18 09:04:06.235610 master-0 kubenswrapper[7620]: I0318 09:04:06.235509 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/b45ea2ef1cf2bc9d1d994d6538ae0a64-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"b45ea2ef1cf2bc9d1d994d6538ae0a64\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 18 09:04:06.235610 master-0 kubenswrapper[7620]: I0318 09:04:06.235578 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/8e7a82869988463543d3d8dd1f0b5fe3-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"8e7a82869988463543d3d8dd1f0b5fe3\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 09:04:06.235838 master-0 kubenswrapper[7620]: I0318 09:04:06.235656 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/8e7a82869988463543d3d8dd1f0b5fe3-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"8e7a82869988463543d3d8dd1f0b5fe3\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 09:04:06.235838 master-0 kubenswrapper[7620]: I0318 09:04:06.235722 7620 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/8e7a82869988463543d3d8dd1f0b5fe3-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"8e7a82869988463543d3d8dd1f0b5fe3\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 18 09:04:06.235838 master-0 kubenswrapper[7620]: I0318 09:04:06.235773 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/8e7a82869988463543d3d8dd1f0b5fe3-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"8e7a82869988463543d3d8dd1f0b5fe3\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 18 09:04:06.236261 master-0 kubenswrapper[7620]: I0318 09:04:06.235902 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b45ea2ef1cf2bc9d1d994d6538ae0a64-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"b45ea2ef1cf2bc9d1d994d6538ae0a64\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 18 09:04:06.236261 master-0 kubenswrapper[7620]: I0318 09:04:06.235930 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b45ea2ef1cf2bc9d1d994d6538ae0a64-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"b45ea2ef1cf2bc9d1d994d6538ae0a64\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 18 09:04:06.236261 master-0 kubenswrapper[7620]: I0318 09:04:06.236012 7620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/8e7a82869988463543d3d8dd1f0b5fe3-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"8e7a82869988463543d3d8dd1f0b5fe3\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 18 09:04:06.236261 master-0 kubenswrapper[7620]: I0318 09:04:06.236092 7620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/8e7a82869988463543d3d8dd1f0b5fe3-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"8e7a82869988463543d3d8dd1f0b5fe3\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 18 09:04:06.393227 master-0 kubenswrapper[7620]: I0318 09:04:06.393035 7620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 18 09:04:06.408047 master-0 kubenswrapper[7620]: I0318 09:04:06.407959 7620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 18 09:04:06.436475 master-0 kubenswrapper[7620]: W0318 09:04:06.436381 7620 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8e7a82869988463543d3d8dd1f0b5fe3.slice/crio-32c5cad9d5ce7a6a9868e1321b49281ebb4f7769c90afec706cbbbe9a7cdbdd6 WatchSource:0}: Error finding container 32c5cad9d5ce7a6a9868e1321b49281ebb4f7769c90afec706cbbbe9a7cdbdd6: Status 404 returned error can't find the container with id 32c5cad9d5ce7a6a9868e1321b49281ebb4f7769c90afec706cbbbe9a7cdbdd6
Mar 18 09:04:06.441251 master-0 kubenswrapper[7620]: E0318 09:04:06.441005 7620 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 192.168.32.10:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-master-0.189de41e28db9fa8 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-master-0,UID:8e7a82869988463543d3d8dd1f0b5fe3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c5ce3d1134d6500e2b8528516c1889d7bbc6259aba4981c6983395b0e9eeff65\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 09:04:06.43947716 +0000 UTC m=+910.434258912,LastTimestamp:2026-03-18 09:04:06.43947716 +0000 UTC m=+910.434258912,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 18 09:04:06.454654 master-0 kubenswrapper[7620]: W0318 09:04:06.454580 7620 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb45ea2ef1cf2bc9d1d994d6538ae0a64.slice/crio-4dced598bcd2040f1c605c245256a2161b2f459ac4faa81c6af5275d4099b859 WatchSource:0}: Error finding container 4dced598bcd2040f1c605c245256a2161b2f459ac4faa81c6af5275d4099b859: Status 404 returned error can't find the container with id 4dced598bcd2040f1c605c245256a2161b2f459ac4faa81c6af5275d4099b859
Mar 18 09:04:06.480417 master-0 kubenswrapper[7620]: I0318 09:04:06.480304 7620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd"
Mar 18 09:04:06.483493 master-0 kubenswrapper[7620]: I0318 09:04:06.483387 7620 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 09:04:06.483493 master-0 kubenswrapper[7620]: [-]has-synced failed: reason withheld
Mar 18 09:04:06.483493 master-0 kubenswrapper[7620]: [+]process-running ok
Mar 18 09:04:06.483493 master-0 kubenswrapper[7620]: healthz check failed
Mar 18 09:04:06.483493 master-0 kubenswrapper[7620]: I0318 09:04:06.483457 7620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 09:04:06.774575 master-0 systemd[1]: Stopping Kubernetes Kubelet...
Mar 18 09:04:06.800901 master-0 systemd[1]: kubelet.service: Deactivated successfully.
Mar 18 09:04:06.801247 master-0 systemd[1]: Stopped Kubernetes Kubelet.
Mar 18 09:04:06.805618 master-0 systemd[1]: kubelet.service: Consumed 2min 31.150s CPU time.
Mar 18 09:04:06.928564 master-0 systemd[1]: Starting Kubernetes Kubelet...
Mar 18 09:04:07.033515 master-0 kubenswrapper[28766]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 18 09:04:07.033515 master-0 kubenswrapper[28766]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version.
Mar 18 09:04:07.033515 master-0 kubenswrapper[28766]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 18 09:04:07.033515 master-0 kubenswrapper[28766]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 18 09:04:07.033515 master-0 kubenswrapper[28766]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Mar 18 09:04:07.034244 master-0 kubenswrapper[28766]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 18 09:04:07.034244 master-0 kubenswrapper[28766]: I0318 09:04:07.033641 28766 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 18 09:04:07.036500 master-0 kubenswrapper[28766]: W0318 09:04:07.036473 28766 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Mar 18 09:04:07.036500 master-0 kubenswrapper[28766]: W0318 09:04:07.036492 28766 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Mar 18 09:04:07.036500 master-0 kubenswrapper[28766]: W0318 09:04:07.036498 28766 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Mar 18 09:04:07.036500 master-0 kubenswrapper[28766]: W0318 09:04:07.036505 28766 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Mar 18 09:04:07.036704 master-0 kubenswrapper[28766]: W0318 09:04:07.036530 28766 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Mar 18 09:04:07.036704 master-0 kubenswrapper[28766]: W0318 09:04:07.036536 28766 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Mar 18 09:04:07.036704 master-0 kubenswrapper[28766]: W0318 09:04:07.036541 28766 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Mar 18 09:04:07.036704 master-0 kubenswrapper[28766]: W0318 09:04:07.036546 28766 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Mar 18 09:04:07.036704 master-0 kubenswrapper[28766]: W0318 09:04:07.036551 28766 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Mar 18 09:04:07.036704 master-0 kubenswrapper[28766]: W0318 09:04:07.036557 28766 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Mar 18 09:04:07.036704 master-0 kubenswrapper[28766]: W0318 09:04:07.036563 28766 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Mar 18 09:04:07.036704 master-0 kubenswrapper[28766]: W0318 09:04:07.036568 28766 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Mar 18 09:04:07.036704 master-0 kubenswrapper[28766]: W0318 09:04:07.036573 28766 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Mar 18 09:04:07.036704 master-0 kubenswrapper[28766]: W0318 09:04:07.036579 28766 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Mar 18 09:04:07.036704 master-0 kubenswrapper[28766]: W0318 09:04:07.036584 28766 feature_gate.go:330] unrecognized feature gate: OVNObservability Mar 18 09:04:07.036704 master-0 kubenswrapper[28766]: W0318 09:04:07.036589 28766 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Mar 18 09:04:07.036704 master-0 kubenswrapper[28766]: W0318 09:04:07.036594 28766 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Mar 18 09:04:07.036704 master-0 kubenswrapper[28766]: W0318 09:04:07.036599 28766 feature_gate.go:330] unrecognized feature 
gate: NetworkLiveMigration Mar 18 09:04:07.036704 master-0 kubenswrapper[28766]: W0318 09:04:07.036603 28766 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Mar 18 09:04:07.036704 master-0 kubenswrapper[28766]: W0318 09:04:07.036608 28766 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Mar 18 09:04:07.036704 master-0 kubenswrapper[28766]: W0318 09:04:07.036619 28766 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Mar 18 09:04:07.036704 master-0 kubenswrapper[28766]: W0318 09:04:07.036624 28766 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Mar 18 09:04:07.036704 master-0 kubenswrapper[28766]: W0318 09:04:07.036629 28766 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Mar 18 09:04:07.037400 master-0 kubenswrapper[28766]: W0318 09:04:07.036633 28766 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Mar 18 09:04:07.037400 master-0 kubenswrapper[28766]: W0318 09:04:07.036639 28766 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Mar 18 09:04:07.037400 master-0 kubenswrapper[28766]: W0318 09:04:07.036646 28766 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Mar 18 09:04:07.037400 master-0 kubenswrapper[28766]: W0318 09:04:07.036653 28766 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Mar 18 09:04:07.037400 master-0 kubenswrapper[28766]: W0318 09:04:07.036659 28766 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Mar 18 09:04:07.037400 master-0 kubenswrapper[28766]: W0318 09:04:07.036665 28766 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Mar 18 09:04:07.037400 master-0 kubenswrapper[28766]: W0318 09:04:07.036671 28766 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Mar 18 09:04:07.037400 master-0 kubenswrapper[28766]: W0318 09:04:07.036676 28766 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Mar 18 09:04:07.037400 master-0 kubenswrapper[28766]: W0318 09:04:07.036681 28766 feature_gate.go:330] unrecognized feature gate: PinnedImages Mar 18 09:04:07.037400 master-0 kubenswrapper[28766]: W0318 09:04:07.036686 28766 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Mar 18 09:04:07.037400 master-0 kubenswrapper[28766]: W0318 09:04:07.036692 28766 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Mar 18 09:04:07.037400 master-0 kubenswrapper[28766]: W0318 09:04:07.036697 28766 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Mar 18 09:04:07.037400 master-0 kubenswrapper[28766]: W0318 09:04:07.036704 28766 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Mar 18 09:04:07.037400 master-0 kubenswrapper[28766]: W0318 09:04:07.036709 28766 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Mar 18 09:04:07.037400 master-0 kubenswrapper[28766]: W0318 09:04:07.036713 28766 feature_gate.go:330] unrecognized feature gate: 
AdditionalRoutingCapabilities Mar 18 09:04:07.037400 master-0 kubenswrapper[28766]: W0318 09:04:07.036718 28766 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Mar 18 09:04:07.037400 master-0 kubenswrapper[28766]: W0318 09:04:07.036723 28766 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Mar 18 09:04:07.037400 master-0 kubenswrapper[28766]: W0318 09:04:07.036728 28766 feature_gate.go:330] unrecognized feature gate: InsightsConfig Mar 18 09:04:07.037400 master-0 kubenswrapper[28766]: W0318 09:04:07.036733 28766 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Mar 18 09:04:07.038222 master-0 kubenswrapper[28766]: W0318 09:04:07.036738 28766 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Mar 18 09:04:07.038222 master-0 kubenswrapper[28766]: W0318 09:04:07.036743 28766 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Mar 18 09:04:07.038222 master-0 kubenswrapper[28766]: W0318 09:04:07.036748 28766 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Mar 18 09:04:07.038222 master-0 kubenswrapper[28766]: W0318 09:04:07.036753 28766 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Mar 18 09:04:07.038222 master-0 kubenswrapper[28766]: W0318 09:04:07.036757 28766 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Mar 18 09:04:07.038222 master-0 kubenswrapper[28766]: W0318 09:04:07.036762 28766 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Mar 18 09:04:07.038222 master-0 kubenswrapper[28766]: W0318 09:04:07.036767 28766 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Mar 18 09:04:07.038222 master-0 kubenswrapper[28766]: W0318 09:04:07.036772 28766 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Mar 18 09:04:07.038222 master-0 kubenswrapper[28766]: W0318 09:04:07.036776 28766 feature_gate.go:330] unrecognized feature gate: 
SetEIPForNLBIngressController Mar 18 09:04:07.038222 master-0 kubenswrapper[28766]: W0318 09:04:07.036782 28766 feature_gate.go:330] unrecognized feature gate: PlatformOperators Mar 18 09:04:07.038222 master-0 kubenswrapper[28766]: W0318 09:04:07.036787 28766 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Mar 18 09:04:07.038222 master-0 kubenswrapper[28766]: W0318 09:04:07.036792 28766 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Mar 18 09:04:07.038222 master-0 kubenswrapper[28766]: W0318 09:04:07.036799 28766 feature_gate.go:330] unrecognized feature gate: NewOLM Mar 18 09:04:07.038222 master-0 kubenswrapper[28766]: W0318 09:04:07.036804 28766 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Mar 18 09:04:07.038222 master-0 kubenswrapper[28766]: W0318 09:04:07.036809 28766 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Mar 18 09:04:07.038222 master-0 kubenswrapper[28766]: W0318 09:04:07.036814 28766 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Mar 18 09:04:07.038222 master-0 kubenswrapper[28766]: W0318 09:04:07.036819 28766 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Mar 18 09:04:07.038222 master-0 kubenswrapper[28766]: W0318 09:04:07.036825 28766 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Mar 18 09:04:07.038222 master-0 kubenswrapper[28766]: W0318 09:04:07.036830 28766 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Mar 18 09:04:07.038222 master-0 kubenswrapper[28766]: W0318 09:04:07.036835 28766 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Mar 18 09:04:07.039270 master-0 kubenswrapper[28766]: W0318 09:04:07.036840 28766 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Mar 18 09:04:07.039270 master-0 kubenswrapper[28766]: W0318 09:04:07.036845 28766 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Mar 18 
09:04:07.039270 master-0 kubenswrapper[28766]: W0318 09:04:07.036867 28766 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Mar 18 09:04:07.039270 master-0 kubenswrapper[28766]: W0318 09:04:07.036875 28766 feature_gate.go:330] unrecognized feature gate: GatewayAPI Mar 18 09:04:07.039270 master-0 kubenswrapper[28766]: W0318 09:04:07.036880 28766 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Mar 18 09:04:07.039270 master-0 kubenswrapper[28766]: W0318 09:04:07.036885 28766 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Mar 18 09:04:07.039270 master-0 kubenswrapper[28766]: W0318 09:04:07.036890 28766 feature_gate.go:330] unrecognized feature gate: Example Mar 18 09:04:07.039270 master-0 kubenswrapper[28766]: W0318 09:04:07.036895 28766 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Mar 18 09:04:07.039270 master-0 kubenswrapper[28766]: W0318 09:04:07.036901 28766 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Mar 18 09:04:07.039270 master-0 kubenswrapper[28766]: W0318 09:04:07.036906 28766 feature_gate.go:330] unrecognized feature gate: SignatureStores Mar 18 09:04:07.039270 master-0 kubenswrapper[28766]: I0318 09:04:07.037046 28766 flags.go:64] FLAG: --address="0.0.0.0" Mar 18 09:04:07.039270 master-0 kubenswrapper[28766]: I0318 09:04:07.037061 28766 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Mar 18 09:04:07.039270 master-0 kubenswrapper[28766]: I0318 09:04:07.037072 28766 flags.go:64] FLAG: --anonymous-auth="true" Mar 18 09:04:07.039270 master-0 kubenswrapper[28766]: I0318 09:04:07.037081 28766 flags.go:64] FLAG: --application-metrics-count-limit="100" Mar 18 09:04:07.039270 master-0 kubenswrapper[28766]: I0318 09:04:07.037089 28766 flags.go:64] FLAG: --authentication-token-webhook="false" Mar 18 09:04:07.039270 master-0 kubenswrapper[28766]: I0318 09:04:07.037095 28766 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Mar 18 
09:04:07.039270 master-0 kubenswrapper[28766]: I0318 09:04:07.037104 28766 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Mar 18 09:04:07.039270 master-0 kubenswrapper[28766]: I0318 09:04:07.037112 28766 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Mar 18 09:04:07.039270 master-0 kubenswrapper[28766]: I0318 09:04:07.037120 28766 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Mar 18 09:04:07.039270 master-0 kubenswrapper[28766]: I0318 09:04:07.037127 28766 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Mar 18 09:04:07.039270 master-0 kubenswrapper[28766]: I0318 09:04:07.037134 28766 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Mar 18 09:04:07.039270 master-0 kubenswrapper[28766]: I0318 09:04:07.037140 28766 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Mar 18 09:04:07.040326 master-0 kubenswrapper[28766]: I0318 09:04:07.037147 28766 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Mar 18 09:04:07.040326 master-0 kubenswrapper[28766]: I0318 09:04:07.037153 28766 flags.go:64] FLAG: --cgroup-root="" Mar 18 09:04:07.040326 master-0 kubenswrapper[28766]: I0318 09:04:07.037159 28766 flags.go:64] FLAG: --cgroups-per-qos="true" Mar 18 09:04:07.040326 master-0 kubenswrapper[28766]: I0318 09:04:07.037165 28766 flags.go:64] FLAG: --client-ca-file="" Mar 18 09:04:07.040326 master-0 kubenswrapper[28766]: I0318 09:04:07.037171 28766 flags.go:64] FLAG: --cloud-config="" Mar 18 09:04:07.040326 master-0 kubenswrapper[28766]: I0318 09:04:07.037177 28766 flags.go:64] FLAG: --cloud-provider="" Mar 18 09:04:07.040326 master-0 kubenswrapper[28766]: I0318 09:04:07.037186 28766 flags.go:64] FLAG: --cluster-dns="[]" Mar 18 09:04:07.040326 master-0 kubenswrapper[28766]: I0318 09:04:07.037195 28766 flags.go:64] FLAG: --cluster-domain="" Mar 18 09:04:07.040326 master-0 kubenswrapper[28766]: I0318 09:04:07.037203 28766 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Mar 18 
09:04:07.040326 master-0 kubenswrapper[28766]: I0318 09:04:07.037209 28766 flags.go:64] FLAG: --config-dir="" Mar 18 09:04:07.040326 master-0 kubenswrapper[28766]: I0318 09:04:07.037215 28766 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Mar 18 09:04:07.040326 master-0 kubenswrapper[28766]: I0318 09:04:07.037222 28766 flags.go:64] FLAG: --container-log-max-files="5" Mar 18 09:04:07.040326 master-0 kubenswrapper[28766]: I0318 09:04:07.037231 28766 flags.go:64] FLAG: --container-log-max-size="10Mi" Mar 18 09:04:07.040326 master-0 kubenswrapper[28766]: I0318 09:04:07.037237 28766 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Mar 18 09:04:07.040326 master-0 kubenswrapper[28766]: I0318 09:04:07.037244 28766 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Mar 18 09:04:07.040326 master-0 kubenswrapper[28766]: I0318 09:04:07.037250 28766 flags.go:64] FLAG: --containerd-namespace="k8s.io" Mar 18 09:04:07.040326 master-0 kubenswrapper[28766]: I0318 09:04:07.037256 28766 flags.go:64] FLAG: --contention-profiling="false" Mar 18 09:04:07.040326 master-0 kubenswrapper[28766]: I0318 09:04:07.037262 28766 flags.go:64] FLAG: --cpu-cfs-quota="true" Mar 18 09:04:07.040326 master-0 kubenswrapper[28766]: I0318 09:04:07.037270 28766 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Mar 18 09:04:07.040326 master-0 kubenswrapper[28766]: I0318 09:04:07.037277 28766 flags.go:64] FLAG: --cpu-manager-policy="none" Mar 18 09:04:07.040326 master-0 kubenswrapper[28766]: I0318 09:04:07.037282 28766 flags.go:64] FLAG: --cpu-manager-policy-options="" Mar 18 09:04:07.040326 master-0 kubenswrapper[28766]: I0318 09:04:07.037291 28766 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Mar 18 09:04:07.040326 master-0 kubenswrapper[28766]: I0318 09:04:07.037297 28766 flags.go:64] FLAG: --enable-controller-attach-detach="true" Mar 18 09:04:07.040326 master-0 kubenswrapper[28766]: I0318 09:04:07.037303 28766 flags.go:64] FLAG: 
--enable-debugging-handlers="true" Mar 18 09:04:07.040326 master-0 kubenswrapper[28766]: I0318 09:04:07.037309 28766 flags.go:64] FLAG: --enable-load-reader="false" Mar 18 09:04:07.041057 master-0 kubenswrapper[28766]: I0318 09:04:07.037314 28766 flags.go:64] FLAG: --enable-server="true" Mar 18 09:04:07.041057 master-0 kubenswrapper[28766]: I0318 09:04:07.037320 28766 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Mar 18 09:04:07.041057 master-0 kubenswrapper[28766]: I0318 09:04:07.037328 28766 flags.go:64] FLAG: --event-burst="100" Mar 18 09:04:07.041057 master-0 kubenswrapper[28766]: I0318 09:04:07.037335 28766 flags.go:64] FLAG: --event-qps="50" Mar 18 09:04:07.041057 master-0 kubenswrapper[28766]: I0318 09:04:07.037341 28766 flags.go:64] FLAG: --event-storage-age-limit="default=0" Mar 18 09:04:07.041057 master-0 kubenswrapper[28766]: I0318 09:04:07.037347 28766 flags.go:64] FLAG: --event-storage-event-limit="default=0" Mar 18 09:04:07.041057 master-0 kubenswrapper[28766]: I0318 09:04:07.037353 28766 flags.go:64] FLAG: --eviction-hard="" Mar 18 09:04:07.041057 master-0 kubenswrapper[28766]: I0318 09:04:07.037361 28766 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Mar 18 09:04:07.041057 master-0 kubenswrapper[28766]: I0318 09:04:07.037367 28766 flags.go:64] FLAG: --eviction-minimum-reclaim="" Mar 18 09:04:07.041057 master-0 kubenswrapper[28766]: I0318 09:04:07.037373 28766 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Mar 18 09:04:07.041057 master-0 kubenswrapper[28766]: I0318 09:04:07.037379 28766 flags.go:64] FLAG: --eviction-soft="" Mar 18 09:04:07.041057 master-0 kubenswrapper[28766]: I0318 09:04:07.037384 28766 flags.go:64] FLAG: --eviction-soft-grace-period="" Mar 18 09:04:07.041057 master-0 kubenswrapper[28766]: I0318 09:04:07.037390 28766 flags.go:64] FLAG: --exit-on-lock-contention="false" Mar 18 09:04:07.041057 master-0 kubenswrapper[28766]: I0318 09:04:07.037396 28766 flags.go:64] FLAG: 
--experimental-allocatable-ignore-eviction="false" Mar 18 09:04:07.041057 master-0 kubenswrapper[28766]: I0318 09:04:07.037402 28766 flags.go:64] FLAG: --experimental-mounter-path="" Mar 18 09:04:07.041057 master-0 kubenswrapper[28766]: I0318 09:04:07.037408 28766 flags.go:64] FLAG: --fail-cgroupv1="false" Mar 18 09:04:07.041057 master-0 kubenswrapper[28766]: I0318 09:04:07.037413 28766 flags.go:64] FLAG: --fail-swap-on="true" Mar 18 09:04:07.041057 master-0 kubenswrapper[28766]: I0318 09:04:07.037420 28766 flags.go:64] FLAG: --feature-gates="" Mar 18 09:04:07.041057 master-0 kubenswrapper[28766]: I0318 09:04:07.037482 28766 flags.go:64] FLAG: --file-check-frequency="20s" Mar 18 09:04:07.041057 master-0 kubenswrapper[28766]: I0318 09:04:07.037492 28766 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Mar 18 09:04:07.041057 master-0 kubenswrapper[28766]: I0318 09:04:07.037499 28766 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Mar 18 09:04:07.041057 master-0 kubenswrapper[28766]: I0318 09:04:07.037506 28766 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Mar 18 09:04:07.041057 master-0 kubenswrapper[28766]: I0318 09:04:07.037512 28766 flags.go:64] FLAG: --healthz-port="10248" Mar 18 09:04:07.041057 master-0 kubenswrapper[28766]: I0318 09:04:07.037520 28766 flags.go:64] FLAG: --help="false" Mar 18 09:04:07.041057 master-0 kubenswrapper[28766]: I0318 09:04:07.037527 28766 flags.go:64] FLAG: --hostname-override="" Mar 18 09:04:07.041057 master-0 kubenswrapper[28766]: I0318 09:04:07.037532 28766 flags.go:64] FLAG: --housekeeping-interval="10s" Mar 18 09:04:07.041683 master-0 kubenswrapper[28766]: I0318 09:04:07.037538 28766 flags.go:64] FLAG: --http-check-frequency="20s" Mar 18 09:04:07.041683 master-0 kubenswrapper[28766]: I0318 09:04:07.037544 28766 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Mar 18 09:04:07.041683 master-0 kubenswrapper[28766]: I0318 09:04:07.037550 28766 flags.go:64] FLAG: --image-credential-provider-config="" 
Mar 18 09:04:07.041683 master-0 kubenswrapper[28766]: I0318 09:04:07.037556 28766 flags.go:64] FLAG: --image-gc-high-threshold="85" Mar 18 09:04:07.041683 master-0 kubenswrapper[28766]: I0318 09:04:07.037562 28766 flags.go:64] FLAG: --image-gc-low-threshold="80" Mar 18 09:04:07.041683 master-0 kubenswrapper[28766]: I0318 09:04:07.037568 28766 flags.go:64] FLAG: --image-service-endpoint="" Mar 18 09:04:07.041683 master-0 kubenswrapper[28766]: I0318 09:04:07.037573 28766 flags.go:64] FLAG: --kernel-memcg-notification="false" Mar 18 09:04:07.041683 master-0 kubenswrapper[28766]: I0318 09:04:07.037579 28766 flags.go:64] FLAG: --kube-api-burst="100" Mar 18 09:04:07.041683 master-0 kubenswrapper[28766]: I0318 09:04:07.037585 28766 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Mar 18 09:04:07.041683 master-0 kubenswrapper[28766]: I0318 09:04:07.037592 28766 flags.go:64] FLAG: --kube-api-qps="50" Mar 18 09:04:07.041683 master-0 kubenswrapper[28766]: I0318 09:04:07.037598 28766 flags.go:64] FLAG: --kube-reserved="" Mar 18 09:04:07.041683 master-0 kubenswrapper[28766]: I0318 09:04:07.037604 28766 flags.go:64] FLAG: --kube-reserved-cgroup="" Mar 18 09:04:07.041683 master-0 kubenswrapper[28766]: I0318 09:04:07.037609 28766 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Mar 18 09:04:07.041683 master-0 kubenswrapper[28766]: I0318 09:04:07.037615 28766 flags.go:64] FLAG: --kubelet-cgroups="" Mar 18 09:04:07.041683 master-0 kubenswrapper[28766]: I0318 09:04:07.037621 28766 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Mar 18 09:04:07.041683 master-0 kubenswrapper[28766]: I0318 09:04:07.037627 28766 flags.go:64] FLAG: --lock-file="" Mar 18 09:04:07.041683 master-0 kubenswrapper[28766]: I0318 09:04:07.037632 28766 flags.go:64] FLAG: --log-cadvisor-usage="false" Mar 18 09:04:07.041683 master-0 kubenswrapper[28766]: I0318 09:04:07.037638 28766 flags.go:64] FLAG: --log-flush-frequency="5s" Mar 18 09:04:07.041683 master-0 
kubenswrapper[28766]: I0318 09:04:07.037645 28766 flags.go:64] FLAG: --log-json-info-buffer-size="0" Mar 18 09:04:07.041683 master-0 kubenswrapper[28766]: I0318 09:04:07.037654 28766 flags.go:64] FLAG: --log-json-split-stream="false" Mar 18 09:04:07.041683 master-0 kubenswrapper[28766]: I0318 09:04:07.037660 28766 flags.go:64] FLAG: --log-text-info-buffer-size="0" Mar 18 09:04:07.041683 master-0 kubenswrapper[28766]: I0318 09:04:07.037667 28766 flags.go:64] FLAG: --log-text-split-stream="false" Mar 18 09:04:07.041683 master-0 kubenswrapper[28766]: I0318 09:04:07.037673 28766 flags.go:64] FLAG: --logging-format="text" Mar 18 09:04:07.041683 master-0 kubenswrapper[28766]: I0318 09:04:07.037679 28766 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Mar 18 09:04:07.041683 master-0 kubenswrapper[28766]: I0318 09:04:07.037686 28766 flags.go:64] FLAG: --make-iptables-util-chains="true" Mar 18 09:04:07.042321 master-0 kubenswrapper[28766]: I0318 09:04:07.037692 28766 flags.go:64] FLAG: --manifest-url="" Mar 18 09:04:07.042321 master-0 kubenswrapper[28766]: I0318 09:04:07.037698 28766 flags.go:64] FLAG: --manifest-url-header="" Mar 18 09:04:07.042321 master-0 kubenswrapper[28766]: I0318 09:04:07.037708 28766 flags.go:64] FLAG: --max-housekeeping-interval="15s" Mar 18 09:04:07.042321 master-0 kubenswrapper[28766]: I0318 09:04:07.037714 28766 flags.go:64] FLAG: --max-open-files="1000000" Mar 18 09:04:07.042321 master-0 kubenswrapper[28766]: I0318 09:04:07.037722 28766 flags.go:64] FLAG: --max-pods="110" Mar 18 09:04:07.042321 master-0 kubenswrapper[28766]: I0318 09:04:07.037728 28766 flags.go:64] FLAG: --maximum-dead-containers="-1" Mar 18 09:04:07.042321 master-0 kubenswrapper[28766]: I0318 09:04:07.037734 28766 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Mar 18 09:04:07.042321 master-0 kubenswrapper[28766]: I0318 09:04:07.037740 28766 flags.go:64] FLAG: --memory-manager-policy="None" Mar 18 09:04:07.042321 master-0 
kubenswrapper[28766]: I0318 09:04:07.037746 28766 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s"
Mar 18 09:04:07.042321 master-0 kubenswrapper[28766]: I0318 09:04:07.037752 28766 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s"
Mar 18 09:04:07.042321 master-0 kubenswrapper[28766]: I0318 09:04:07.037758 28766 flags.go:64] FLAG: --node-ip="192.168.32.10"
Mar 18 09:04:07.042321 master-0 kubenswrapper[28766]: I0318 09:04:07.037764 28766 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos"
Mar 18 09:04:07.042321 master-0 kubenswrapper[28766]: I0318 09:04:07.037782 28766 flags.go:64] FLAG: --node-status-max-images="50"
Mar 18 09:04:07.042321 master-0 kubenswrapper[28766]: I0318 09:04:07.037789 28766 flags.go:64] FLAG: --node-status-update-frequency="10s"
Mar 18 09:04:07.042321 master-0 kubenswrapper[28766]: I0318 09:04:07.037794 28766 flags.go:64] FLAG: --oom-score-adj="-999"
Mar 18 09:04:07.042321 master-0 kubenswrapper[28766]: I0318 09:04:07.037801 28766 flags.go:64] FLAG: --pod-cidr=""
Mar 18 09:04:07.042321 master-0 kubenswrapper[28766]: I0318 09:04:07.037807 28766 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:53d66d524ca3e787d8dbe30dbc4d9b8612c9cebd505ccb4375a8441814e85422"
Mar 18 09:04:07.042321 master-0 kubenswrapper[28766]: I0318 09:04:07.037816 28766 flags.go:64] FLAG: --pod-manifest-path=""
Mar 18 09:04:07.042321 master-0 kubenswrapper[28766]: I0318 09:04:07.037822 28766 flags.go:64] FLAG: --pod-max-pids="-1"
Mar 18 09:04:07.042321 master-0 kubenswrapper[28766]: I0318 09:04:07.037827 28766 flags.go:64] FLAG: --pods-per-core="0"
Mar 18 09:04:07.042321 master-0 kubenswrapper[28766]: I0318 09:04:07.037833 28766 flags.go:64] FLAG: --port="10250"
Mar 18 09:04:07.042321 master-0 kubenswrapper[28766]: I0318 09:04:07.037839 28766 flags.go:64] FLAG: --protect-kernel-defaults="false"
Mar 18 09:04:07.042321 master-0 kubenswrapper[28766]: I0318 09:04:07.037844 28766 flags.go:64] FLAG: --provider-id=""
Mar 18 09:04:07.042321 master-0 kubenswrapper[28766]: I0318 09:04:07.037867 28766 flags.go:64] FLAG: --qos-reserved=""
Mar 18 09:04:07.042936 master-0 kubenswrapper[28766]: I0318 09:04:07.037874 28766 flags.go:64] FLAG: --read-only-port="10255"
Mar 18 09:04:07.042936 master-0 kubenswrapper[28766]: I0318 09:04:07.037881 28766 flags.go:64] FLAG: --register-node="true"
Mar 18 09:04:07.042936 master-0 kubenswrapper[28766]: I0318 09:04:07.037887 28766 flags.go:64] FLAG: --register-schedulable="true"
Mar 18 09:04:07.042936 master-0 kubenswrapper[28766]: I0318 09:04:07.037892 28766 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule"
Mar 18 09:04:07.042936 master-0 kubenswrapper[28766]: I0318 09:04:07.037904 28766 flags.go:64] FLAG: --registry-burst="10"
Mar 18 09:04:07.042936 master-0 kubenswrapper[28766]: I0318 09:04:07.037910 28766 flags.go:64] FLAG: --registry-qps="5"
Mar 18 09:04:07.042936 master-0 kubenswrapper[28766]: I0318 09:04:07.037916 28766 flags.go:64] FLAG: --reserved-cpus=""
Mar 18 09:04:07.042936 master-0 kubenswrapper[28766]: I0318 09:04:07.037922 28766 flags.go:64] FLAG: --reserved-memory=""
Mar 18 09:04:07.042936 master-0 kubenswrapper[28766]: I0318 09:04:07.037930 28766 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf"
Mar 18 09:04:07.042936 master-0 kubenswrapper[28766]: I0318 09:04:07.037936 28766 flags.go:64] FLAG: --root-dir="/var/lib/kubelet"
Mar 18 09:04:07.042936 master-0 kubenswrapper[28766]: I0318 09:04:07.037942 28766 flags.go:64] FLAG: --rotate-certificates="false"
Mar 18 09:04:07.042936 master-0 kubenswrapper[28766]: I0318 09:04:07.037948 28766 flags.go:64] FLAG: --rotate-server-certificates="false"
Mar 18 09:04:07.042936 master-0 kubenswrapper[28766]: I0318 09:04:07.037954 28766 flags.go:64] FLAG: --runonce="false"
Mar 18 09:04:07.042936 master-0 kubenswrapper[28766]: I0318 09:04:07.037959 28766 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service"
Mar 18 09:04:07.042936 master-0 kubenswrapper[28766]: I0318 09:04:07.037965 28766 flags.go:64] FLAG: --runtime-request-timeout="2m0s"
Mar 18 09:04:07.042936 master-0 kubenswrapper[28766]: I0318 09:04:07.037971 28766 flags.go:64] FLAG: --seccomp-default="false"
Mar 18 09:04:07.042936 master-0 kubenswrapper[28766]: I0318 09:04:07.037976 28766 flags.go:64] FLAG: --serialize-image-pulls="true"
Mar 18 09:04:07.042936 master-0 kubenswrapper[28766]: I0318 09:04:07.037982 28766 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s"
Mar 18 09:04:07.042936 master-0 kubenswrapper[28766]: I0318 09:04:07.037989 28766 flags.go:64] FLAG: --storage-driver-db="cadvisor"
Mar 18 09:04:07.042936 master-0 kubenswrapper[28766]: I0318 09:04:07.037994 28766 flags.go:64] FLAG: --storage-driver-host="localhost:8086"
Mar 18 09:04:07.042936 master-0 kubenswrapper[28766]: I0318 09:04:07.038001 28766 flags.go:64] FLAG: --storage-driver-password="root"
Mar 18 09:04:07.042936 master-0 kubenswrapper[28766]: I0318 09:04:07.038007 28766 flags.go:64] FLAG: --storage-driver-secure="false"
Mar 18 09:04:07.042936 master-0 kubenswrapper[28766]: I0318 09:04:07.038012 28766 flags.go:64] FLAG: --storage-driver-table="stats"
Mar 18 09:04:07.042936 master-0 kubenswrapper[28766]: I0318 09:04:07.038019 28766 flags.go:64] FLAG: --storage-driver-user="root"
Mar 18 09:04:07.042936 master-0 kubenswrapper[28766]: I0318 09:04:07.038024 28766 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s"
Mar 18 09:04:07.043724 master-0 kubenswrapper[28766]: I0318 09:04:07.038030 28766 flags.go:64] FLAG: --sync-frequency="1m0s"
Mar 18 09:04:07.043724 master-0 kubenswrapper[28766]: I0318 09:04:07.038036 28766 flags.go:64] FLAG: --system-cgroups=""
Mar 18 09:04:07.043724 master-0 kubenswrapper[28766]: I0318 09:04:07.038042 28766 flags.go:64] FLAG: --system-reserved="cpu=500m,ephemeral-storage=1Gi,memory=1Gi"
Mar 18 09:04:07.043724 master-0 kubenswrapper[28766]: I0318 09:04:07.038051 28766 flags.go:64] FLAG: --system-reserved-cgroup=""
Mar 18 09:04:07.043724 master-0 kubenswrapper[28766]: I0318 09:04:07.038057 28766 flags.go:64] FLAG: --tls-cert-file=""
Mar 18 09:04:07.043724 master-0 kubenswrapper[28766]: I0318 09:04:07.038062 28766 flags.go:64] FLAG: --tls-cipher-suites="[]"
Mar 18 09:04:07.043724 master-0 kubenswrapper[28766]: I0318 09:04:07.038071 28766 flags.go:64] FLAG: --tls-min-version=""
Mar 18 09:04:07.043724 master-0 kubenswrapper[28766]: I0318 09:04:07.038077 28766 flags.go:64] FLAG: --tls-private-key-file=""
Mar 18 09:04:07.043724 master-0 kubenswrapper[28766]: I0318 09:04:07.038084 28766 flags.go:64] FLAG: --topology-manager-policy="none"
Mar 18 09:04:07.043724 master-0 kubenswrapper[28766]: I0318 09:04:07.038089 28766 flags.go:64] FLAG: --topology-manager-policy-options=""
Mar 18 09:04:07.043724 master-0 kubenswrapper[28766]: I0318 09:04:07.038095 28766 flags.go:64] FLAG: --topology-manager-scope="container"
Mar 18 09:04:07.043724 master-0 kubenswrapper[28766]: I0318 09:04:07.038100 28766 flags.go:64] FLAG: --v="2"
Mar 18 09:04:07.043724 master-0 kubenswrapper[28766]: I0318 09:04:07.038108 28766 flags.go:64] FLAG: --version="false"
Mar 18 09:04:07.043724 master-0 kubenswrapper[28766]: I0318 09:04:07.038116 28766 flags.go:64] FLAG: --vmodule=""
Mar 18 09:04:07.043724 master-0 kubenswrapper[28766]: I0318 09:04:07.038123 28766 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec"
Mar 18 09:04:07.043724 master-0 kubenswrapper[28766]: I0318 09:04:07.038129 28766 flags.go:64] FLAG: --volume-stats-agg-period="1m0s"
Mar 18 09:04:07.043724 master-0 kubenswrapper[28766]: W0318 09:04:07.038255 28766 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Mar 18 09:04:07.043724 master-0 kubenswrapper[28766]: W0318 09:04:07.038264 28766 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Mar 18 09:04:07.043724 master-0 kubenswrapper[28766]: W0318 09:04:07.038269 28766 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Mar 18 09:04:07.043724 master-0 kubenswrapper[28766]: W0318 09:04:07.038275 28766 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Mar 18 09:04:07.043724 master-0 kubenswrapper[28766]: W0318 09:04:07.038281 28766 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Mar 18 09:04:07.043724 master-0 kubenswrapper[28766]: W0318 09:04:07.038287 28766 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Mar 18 09:04:07.043724 master-0 kubenswrapper[28766]: W0318 09:04:07.038292 28766 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Mar 18 09:04:07.044341 master-0 kubenswrapper[28766]: W0318 09:04:07.038300 28766 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Mar 18 09:04:07.044341 master-0 kubenswrapper[28766]: W0318 09:04:07.038306 28766 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Mar 18 09:04:07.044341 master-0 kubenswrapper[28766]: W0318 09:04:07.038312 28766 feature_gate.go:330] unrecognized feature gate: Example
Mar 18 09:04:07.044341 master-0 kubenswrapper[28766]: W0318 09:04:07.038317 28766 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Mar 18 09:04:07.044341 master-0 kubenswrapper[28766]: W0318 09:04:07.038322 28766 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Mar 18 09:04:07.044341 master-0 kubenswrapper[28766]: W0318 09:04:07.038327 28766 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Mar 18 09:04:07.044341 master-0 kubenswrapper[28766]: W0318 09:04:07.038333 28766 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Mar 18 09:04:07.044341 master-0 kubenswrapper[28766]: W0318 09:04:07.038337 28766 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Mar 18 09:04:07.044341 master-0 kubenswrapper[28766]: W0318 09:04:07.038343 28766 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Mar 18 09:04:07.044341 master-0 kubenswrapper[28766]: W0318 09:04:07.038348 28766 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Mar 18 09:04:07.044341 master-0 kubenswrapper[28766]: W0318 09:04:07.038353 28766 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Mar 18 09:04:07.044341 master-0 kubenswrapper[28766]: W0318 09:04:07.038358 28766 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Mar 18 09:04:07.044341 master-0 kubenswrapper[28766]: W0318 09:04:07.038362 28766 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Mar 18 09:04:07.044341 master-0 kubenswrapper[28766]: W0318 09:04:07.038367 28766 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Mar 18 09:04:07.044341 master-0 kubenswrapper[28766]: W0318 09:04:07.038372 28766 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Mar 18 09:04:07.044341 master-0 kubenswrapper[28766]: W0318 09:04:07.038377 28766 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Mar 18 09:04:07.044341 master-0 kubenswrapper[28766]: W0318 09:04:07.038382 28766 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Mar 18 09:04:07.044341 master-0 kubenswrapper[28766]: W0318 09:04:07.038386 28766 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Mar 18 09:04:07.044341 master-0 kubenswrapper[28766]: W0318 09:04:07.038391 28766 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Mar 18 09:04:07.044341 master-0 kubenswrapper[28766]: W0318 09:04:07.038397 28766 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Mar 18 09:04:07.044843 master-0 kubenswrapper[28766]: W0318 09:04:07.038402 28766 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Mar 18 09:04:07.044843 master-0 kubenswrapper[28766]: W0318 09:04:07.038406 28766 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Mar 18 09:04:07.044843 master-0 kubenswrapper[28766]: W0318 09:04:07.038411 28766 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Mar 18 09:04:07.044843 master-0 kubenswrapper[28766]: W0318 09:04:07.038422 28766 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Mar 18 09:04:07.044843 master-0 kubenswrapper[28766]: W0318 09:04:07.038427 28766 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Mar 18 09:04:07.044843 master-0 kubenswrapper[28766]: W0318 09:04:07.038432 28766 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Mar 18 09:04:07.044843 master-0 kubenswrapper[28766]: W0318 09:04:07.038437 28766 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Mar 18 09:04:07.044843 master-0 kubenswrapper[28766]: W0318 09:04:07.038442 28766 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Mar 18 09:04:07.044843 master-0 kubenswrapper[28766]: W0318 09:04:07.038447 28766 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Mar 18 09:04:07.044843 master-0 kubenswrapper[28766]: W0318 09:04:07.038451 28766 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Mar 18 09:04:07.044843 master-0 kubenswrapper[28766]: W0318 09:04:07.038457 28766 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Mar 18 09:04:07.044843 master-0 kubenswrapper[28766]: W0318 09:04:07.038462 28766 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Mar 18 09:04:07.044843 master-0 kubenswrapper[28766]: W0318 09:04:07.038467 28766 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Mar 18 09:04:07.044843 master-0 kubenswrapper[28766]: W0318 09:04:07.038471 28766 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Mar 18 09:04:07.044843 master-0 kubenswrapper[28766]: W0318 09:04:07.038476 28766 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Mar 18 09:04:07.044843 master-0 kubenswrapper[28766]: W0318 09:04:07.038481 28766 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Mar 18 09:04:07.044843 master-0 kubenswrapper[28766]: W0318 09:04:07.038486 28766 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Mar 18 09:04:07.044843 master-0 kubenswrapper[28766]: W0318 09:04:07.038491 28766 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Mar 18 09:04:07.044843 master-0 kubenswrapper[28766]: W0318 09:04:07.038497 28766 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Mar 18 09:04:07.044843 master-0 kubenswrapper[28766]: W0318 09:04:07.038503 28766 feature_gate.go:330] unrecognized feature gate: SignatureStores
Mar 18 09:04:07.045378 master-0 kubenswrapper[28766]: W0318 09:04:07.038508 28766 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Mar 18 09:04:07.045378 master-0 kubenswrapper[28766]: W0318 09:04:07.038513 28766 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Mar 18 09:04:07.045378 master-0 kubenswrapper[28766]: W0318 09:04:07.038871 28766 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Mar 18 09:04:07.045378 master-0 kubenswrapper[28766]: W0318 09:04:07.038880 28766 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Mar 18 09:04:07.045378 master-0 kubenswrapper[28766]: W0318 09:04:07.038886 28766 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Mar 18 09:04:07.045378 master-0 kubenswrapper[28766]: W0318 09:04:07.038891 28766 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Mar 18 09:04:07.045378 master-0 kubenswrapper[28766]: W0318 09:04:07.038896 28766 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Mar 18 09:04:07.045378 master-0 kubenswrapper[28766]: W0318 09:04:07.038900 28766 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Mar 18 09:04:07.045378 master-0 kubenswrapper[28766]: W0318 09:04:07.039116 28766 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Mar 18 09:04:07.045378 master-0 kubenswrapper[28766]: W0318 09:04:07.039122 28766 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Mar 18 09:04:07.045378 master-0 kubenswrapper[28766]: W0318 09:04:07.039127 28766 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Mar 18 09:04:07.045378 master-0 kubenswrapper[28766]: W0318 09:04:07.039132 28766 feature_gate.go:330] unrecognized feature gate: PinnedImages
Mar 18 09:04:07.045378 master-0 kubenswrapper[28766]: W0318 09:04:07.039138 28766 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Mar 18 09:04:07.045378 master-0 kubenswrapper[28766]: W0318 09:04:07.039145 28766 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Mar 18 09:04:07.045378 master-0 kubenswrapper[28766]: W0318 09:04:07.039150 28766 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Mar 18 09:04:07.045378 master-0 kubenswrapper[28766]: W0318 09:04:07.039163 28766 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Mar 18 09:04:07.045378 master-0 kubenswrapper[28766]: W0318 09:04:07.039170 28766 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Mar 18 09:04:07.045378 master-0 kubenswrapper[28766]: W0318 09:04:07.039176 28766 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Mar 18 09:04:07.045378 master-0 kubenswrapper[28766]: W0318 09:04:07.039182 28766 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Mar 18 09:04:07.045871 master-0 kubenswrapper[28766]: W0318 09:04:07.039187 28766 feature_gate.go:330] unrecognized feature gate: OVNObservability
Mar 18 09:04:07.045871 master-0 kubenswrapper[28766]: W0318 09:04:07.039193 28766 feature_gate.go:330] unrecognized feature gate: NewOLM
Mar 18 09:04:07.045871 master-0 kubenswrapper[28766]: W0318 09:04:07.039199 28766 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Mar 18 09:04:07.045871 master-0 kubenswrapper[28766]: W0318 09:04:07.039204 28766 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Mar 18 09:04:07.045871 master-0 kubenswrapper[28766]: W0318 09:04:07.039209 28766 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Mar 18 09:04:07.045871 master-0 kubenswrapper[28766]: W0318 09:04:07.039214 28766 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Mar 18 09:04:07.045871 master-0 kubenswrapper[28766]: I0318 09:04:07.039221 28766 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:true StreamingCollectionEncodingToProtobuf:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Mar 18 09:04:07.048908 master-0 kubenswrapper[28766]: I0318 09:04:07.048805 28766 server.go:491] "Kubelet version" kubeletVersion="v1.31.14"
Mar 18 09:04:07.048908 master-0 kubenswrapper[28766]: I0318 09:04:07.048897 28766 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Mar 18 09:04:07.049142 master-0 kubenswrapper[28766]: W0318 09:04:07.049107 28766 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Mar 18 09:04:07.049142 master-0 kubenswrapper[28766]: W0318 09:04:07.049134 28766 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Mar 18 09:04:07.049142 master-0 kubenswrapper[28766]: W0318 09:04:07.049143 28766 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Mar 18 09:04:07.049224 master-0 kubenswrapper[28766]: W0318 09:04:07.049155 28766 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Mar 18 09:04:07.049224 master-0 kubenswrapper[28766]: W0318 09:04:07.049166 28766 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Mar 18 09:04:07.049224 master-0 kubenswrapper[28766]: W0318 09:04:07.049176 28766 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Mar 18 09:04:07.049224 master-0 kubenswrapper[28766]: W0318 09:04:07.049185 28766 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Mar 18 09:04:07.049224 master-0 kubenswrapper[28766]: W0318 09:04:07.049193 28766 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Mar 18 09:04:07.049224 master-0 kubenswrapper[28766]: W0318 09:04:07.049204 28766 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Mar 18 09:04:07.049224 master-0 kubenswrapper[28766]: W0318 09:04:07.049213 28766 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Mar 18 09:04:07.049224 master-0 kubenswrapper[28766]: W0318 09:04:07.049222 28766 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Mar 18 09:04:07.049224 master-0 kubenswrapper[28766]: W0318 09:04:07.049232 28766 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Mar 18 09:04:07.049450 master-0 kubenswrapper[28766]: W0318 09:04:07.049242 28766 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Mar 18 09:04:07.049450 master-0 kubenswrapper[28766]: W0318 09:04:07.049251 28766 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Mar 18 09:04:07.049450 master-0 kubenswrapper[28766]: W0318 09:04:07.049260 28766 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Mar 18 09:04:07.049450 master-0 kubenswrapper[28766]: W0318 09:04:07.049268 28766 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Mar 18 09:04:07.049450 master-0 kubenswrapper[28766]: W0318 09:04:07.049332 28766 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Mar 18 09:04:07.049450 master-0 kubenswrapper[28766]: W0318 09:04:07.049342 28766 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Mar 18 09:04:07.049450 master-0 kubenswrapper[28766]: W0318 09:04:07.049355 28766 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Mar 18 09:04:07.049450 master-0 kubenswrapper[28766]: W0318 09:04:07.049367 28766 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Mar 18 09:04:07.049450 master-0 kubenswrapper[28766]: W0318 09:04:07.049377 28766 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Mar 18 09:04:07.049450 master-0 kubenswrapper[28766]: W0318 09:04:07.049387 28766 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Mar 18 09:04:07.049450 master-0 kubenswrapper[28766]: W0318 09:04:07.049396 28766 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Mar 18 09:04:07.049450 master-0 kubenswrapper[28766]: W0318 09:04:07.049405 28766 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Mar 18 09:04:07.049450 master-0 kubenswrapper[28766]: W0318 09:04:07.049415 28766 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Mar 18 09:04:07.049450 master-0 kubenswrapper[28766]: W0318 09:04:07.049425 28766 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Mar 18 09:04:07.049450 master-0 kubenswrapper[28766]: W0318 09:04:07.049434 28766 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Mar 18 09:04:07.049450 master-0 kubenswrapper[28766]: W0318 09:04:07.049445 28766 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Mar 18 09:04:07.049450 master-0 kubenswrapper[28766]: W0318 09:04:07.049454 28766 feature_gate.go:330] unrecognized feature gate: SignatureStores
Mar 18 09:04:07.049450 master-0 kubenswrapper[28766]: W0318 09:04:07.049464 28766 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Mar 18 09:04:07.049450 master-0 kubenswrapper[28766]: W0318 09:04:07.049476 28766 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Mar 18 09:04:07.049956 master-0 kubenswrapper[28766]: W0318 09:04:07.049486 28766 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Mar 18 09:04:07.049956 master-0 kubenswrapper[28766]: W0318 09:04:07.049495 28766 feature_gate.go:330] unrecognized feature gate: Example
Mar 18 09:04:07.049956 master-0 kubenswrapper[28766]: W0318 09:04:07.049504 28766 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Mar 18 09:04:07.049956 master-0 kubenswrapper[28766]: W0318 09:04:07.049512 28766 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Mar 18 09:04:07.049956 master-0 kubenswrapper[28766]: W0318 09:04:07.049521 28766 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Mar 18 09:04:07.049956 master-0 kubenswrapper[28766]: W0318 09:04:07.049529 28766 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Mar 18 09:04:07.049956 master-0 kubenswrapper[28766]: W0318 09:04:07.049537 28766 feature_gate.go:330] unrecognized feature gate: OVNObservability
Mar 18 09:04:07.049956 master-0 kubenswrapper[28766]: W0318 09:04:07.049545 28766 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Mar 18 09:04:07.049956 master-0 kubenswrapper[28766]: W0318 09:04:07.049554 28766 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Mar 18 09:04:07.049956 master-0 kubenswrapper[28766]: W0318 09:04:07.049562 28766 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Mar 18 09:04:07.049956 master-0 kubenswrapper[28766]: W0318 09:04:07.049570 28766 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Mar 18 09:04:07.049956 master-0 kubenswrapper[28766]: W0318 09:04:07.049579 28766 feature_gate.go:330] unrecognized feature gate: NewOLM
Mar 18 09:04:07.049956 master-0 kubenswrapper[28766]: W0318 09:04:07.049588 28766 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Mar 18 09:04:07.049956 master-0 kubenswrapper[28766]: W0318 09:04:07.049596 28766 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Mar 18 09:04:07.049956 master-0 kubenswrapper[28766]: W0318 09:04:07.049606 28766 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Mar 18 09:04:07.049956 master-0 kubenswrapper[28766]: W0318 09:04:07.049617 28766 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Mar 18 09:04:07.049956 master-0 kubenswrapper[28766]: W0318 09:04:07.049626 28766 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Mar 18 09:04:07.049956 master-0 kubenswrapper[28766]: W0318 09:04:07.049635 28766 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Mar 18 09:04:07.049956 master-0 kubenswrapper[28766]: W0318 09:04:07.049643 28766 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Mar 18 09:04:07.050471 master-0 kubenswrapper[28766]: W0318 09:04:07.049651 28766 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Mar 18 09:04:07.050471 master-0 kubenswrapper[28766]: W0318 09:04:07.049659 28766 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Mar 18 09:04:07.050471 master-0 kubenswrapper[28766]: W0318 09:04:07.049670 28766 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Mar 18 09:04:07.050471 master-0 kubenswrapper[28766]: W0318 09:04:07.049682 28766 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Mar 18 09:04:07.050471 master-0 kubenswrapper[28766]: W0318 09:04:07.049691 28766 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Mar 18 09:04:07.050471 master-0 kubenswrapper[28766]: W0318 09:04:07.049701 28766 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Mar 18 09:04:07.050471 master-0 kubenswrapper[28766]: W0318 09:04:07.049710 28766 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Mar 18 09:04:07.050471 master-0 kubenswrapper[28766]: W0318 09:04:07.049719 28766 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Mar 18 09:04:07.050471 master-0 kubenswrapper[28766]: W0318 09:04:07.049772 28766 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Mar 18 09:04:07.050471 master-0 kubenswrapper[28766]: W0318 09:04:07.049785 28766 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Mar 18 09:04:07.050471 master-0 kubenswrapper[28766]: W0318 09:04:07.049796 28766 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Mar 18 09:04:07.050471 master-0 kubenswrapper[28766]: W0318 09:04:07.049806 28766 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Mar 18 09:04:07.050471 master-0 kubenswrapper[28766]: W0318 09:04:07.049815 28766 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Mar 18 09:04:07.050471 master-0 kubenswrapper[28766]: W0318 09:04:07.049824 28766 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Mar 18 09:04:07.050471 master-0 kubenswrapper[28766]: W0318 09:04:07.049834 28766 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Mar 18 09:04:07.050471 master-0 kubenswrapper[28766]: W0318 09:04:07.049844 28766 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Mar 18 09:04:07.050471 master-0 kubenswrapper[28766]: W0318 09:04:07.049878 28766 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Mar 18 09:04:07.050471 master-0 kubenswrapper[28766]: W0318 09:04:07.049889 28766 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Mar 18 09:04:07.050471 master-0 kubenswrapper[28766]: W0318 09:04:07.049897 28766 feature_gate.go:330] unrecognized feature gate: PinnedImages
Mar 18 09:04:07.050980 master-0 kubenswrapper[28766]: W0318 09:04:07.049905 28766 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Mar 18 09:04:07.050980 master-0 kubenswrapper[28766]: W0318 09:04:07.049913 28766 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Mar 18 09:04:07.050980 master-0 kubenswrapper[28766]: W0318 09:04:07.049922 28766 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Mar 18 09:04:07.050980 master-0 kubenswrapper[28766]: I0318 09:04:07.049936 28766 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:true StreamingCollectionEncodingToProtobuf:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Mar 18 09:04:07.050980 master-0 kubenswrapper[28766]: W0318 09:04:07.050186 28766 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Mar 18 09:04:07.050980 master-0 kubenswrapper[28766]: W0318 09:04:07.050203 28766 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Mar 18 09:04:07.050980 master-0 kubenswrapper[28766]: W0318 09:04:07.050214 28766 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Mar 18 09:04:07.050980 master-0 kubenswrapper[28766]: W0318 09:04:07.050223 28766 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Mar 18 09:04:07.050980 master-0 kubenswrapper[28766]: W0318 09:04:07.050232 28766 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Mar 18 09:04:07.050980 master-0 kubenswrapper[28766]: W0318 09:04:07.050241 28766 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Mar 18 09:04:07.050980 master-0 kubenswrapper[28766]: W0318 09:04:07.050251 28766 feature_gate.go:330] unrecognized feature gate: PinnedImages
Mar 18 09:04:07.050980 master-0 kubenswrapper[28766]: W0318 09:04:07.050259 28766 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Mar 18 09:04:07.050980 master-0 kubenswrapper[28766]: W0318 09:04:07.050268 28766 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Mar 18 09:04:07.050980 master-0 kubenswrapper[28766]: W0318 09:04:07.050277 28766 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Mar 18 09:04:07.050980 master-0 kubenswrapper[28766]: W0318 09:04:07.050286 28766 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Mar 18 09:04:07.050980 master-0 kubenswrapper[28766]: W0318 09:04:07.050294 28766 feature_gate.go:330] unrecognized feature gate: NewOLM
Mar 18 09:04:07.051382 master-0 kubenswrapper[28766]: W0318 09:04:07.050306 28766 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Mar 18 09:04:07.051382 master-0 kubenswrapper[28766]: W0318 09:04:07.050315 28766 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Mar 18 09:04:07.051382 master-0 kubenswrapper[28766]: W0318 09:04:07.050324 28766 feature_gate.go:330] unrecognized feature gate: SignatureStores
Mar 18 09:04:07.051382 master-0 kubenswrapper[28766]: W0318 09:04:07.050332 28766 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Mar 18 09:04:07.051382 master-0 kubenswrapper[28766]: W0318 09:04:07.050341 28766 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Mar 18 09:04:07.051382 master-0 kubenswrapper[28766]: W0318 09:04:07.050349 28766 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Mar 18 09:04:07.051382 master-0 kubenswrapper[28766]: W0318 09:04:07.050357 28766 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Mar 18 09:04:07.051382 master-0 kubenswrapper[28766]: W0318 09:04:07.050480 28766 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Mar 18 09:04:07.051382 master-0 kubenswrapper[28766]: W0318 09:04:07.050498 28766 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Mar 18 09:04:07.051382 master-0 kubenswrapper[28766]: W0318 09:04:07.050510 28766 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Mar 18 09:04:07.051382 master-0 kubenswrapper[28766]: W0318 09:04:07.050519 28766 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Mar 18 09:04:07.051382 master-0 kubenswrapper[28766]: W0318 09:04:07.050527 28766 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Mar 18 09:04:07.051382 master-0 kubenswrapper[28766]: W0318 09:04:07.050568 28766 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Mar 18 09:04:07.051382 master-0 kubenswrapper[28766]: W0318 09:04:07.050580 28766 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Mar 18 09:04:07.051382 master-0 kubenswrapper[28766]: W0318 09:04:07.050590 28766 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Mar 18 09:04:07.051382 master-0 kubenswrapper[28766]: W0318 09:04:07.050599 28766 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Mar 18 09:04:07.051382 master-0 kubenswrapper[28766]: W0318 09:04:07.050610 28766 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Mar 18 09:04:07.051382 master-0 kubenswrapper[28766]: W0318 09:04:07.050620 28766 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Mar 18 09:04:07.051382 master-0 kubenswrapper[28766]: W0318 09:04:07.050630 28766 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Mar 18 09:04:07.051987 master-0 kubenswrapper[28766]: W0318 09:04:07.050642 28766 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Mar 18 09:04:07.051987 master-0 kubenswrapper[28766]: W0318 09:04:07.050651 28766 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Mar 18 09:04:07.051987 master-0 kubenswrapper[28766]: W0318 09:04:07.050659 28766 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Mar 18 09:04:07.051987 master-0 kubenswrapper[28766]: W0318 09:04:07.050668 28766 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Mar 18 09:04:07.051987 master-0 kubenswrapper[28766]: W0318 09:04:07.050677 28766 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Mar 18 09:04:07.051987 master-0 kubenswrapper[28766]: W0318 09:04:07.050688 28766 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Mar 18 09:04:07.051987 master-0 kubenswrapper[28766]: W0318 09:04:07.050699 28766 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Mar 18 09:04:07.051987 master-0 kubenswrapper[28766]: W0318 09:04:07.050710 28766 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Mar 18 09:04:07.051987 master-0 kubenswrapper[28766]: W0318 09:04:07.050720 28766 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Mar 18 09:04:07.051987 master-0 kubenswrapper[28766]: W0318 09:04:07.050729 28766 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Mar 18 09:04:07.051987 master-0 kubenswrapper[28766]: W0318 09:04:07.050738 28766 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Mar 18 09:04:07.051987 master-0 kubenswrapper[28766]: W0318 09:04:07.050750 28766 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Mar 18 09:04:07.051987 master-0 kubenswrapper[28766]: W0318 09:04:07.050759 28766 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Mar 18 09:04:07.051987 master-0 kubenswrapper[28766]: W0318 09:04:07.050768 28766 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Mar 18 09:04:07.051987 master-0 kubenswrapper[28766]: W0318 09:04:07.050778 28766 feature_gate.go:330] unrecognized feature gate: Example Mar 18 09:04:07.051987 master-0 kubenswrapper[28766]: W0318 09:04:07.050788 28766 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Mar 18 09:04:07.051987 master-0 kubenswrapper[28766]: W0318 09:04:07.050797 28766 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Mar 18 09:04:07.051987 master-0 kubenswrapper[28766]: W0318 09:04:07.050806 28766 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Mar 18 09:04:07.051987 master-0 kubenswrapper[28766]: W0318 09:04:07.050815 28766 feature_gate.go:330] unrecognized feature gate: PlatformOperators Mar 18 09:04:07.051987 master-0 
kubenswrapper[28766]: W0318 09:04:07.050824 28766 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Mar 18 09:04:07.052542 master-0 kubenswrapper[28766]: W0318 09:04:07.050833 28766 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Mar 18 09:04:07.052542 master-0 kubenswrapper[28766]: W0318 09:04:07.050842 28766 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Mar 18 09:04:07.052542 master-0 kubenswrapper[28766]: W0318 09:04:07.050850 28766 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Mar 18 09:04:07.052542 master-0 kubenswrapper[28766]: W0318 09:04:07.050882 28766 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Mar 18 09:04:07.052542 master-0 kubenswrapper[28766]: W0318 09:04:07.050891 28766 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Mar 18 09:04:07.052542 master-0 kubenswrapper[28766]: W0318 09:04:07.050900 28766 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Mar 18 09:04:07.052542 master-0 kubenswrapper[28766]: W0318 09:04:07.050909 28766 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Mar 18 09:04:07.052542 master-0 kubenswrapper[28766]: W0318 09:04:07.050917 28766 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Mar 18 09:04:07.052542 master-0 kubenswrapper[28766]: W0318 09:04:07.050926 28766 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Mar 18 09:04:07.052542 master-0 kubenswrapper[28766]: W0318 09:04:07.050934 28766 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Mar 18 09:04:07.052542 master-0 kubenswrapper[28766]: W0318 09:04:07.050944 28766 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Mar 18 09:04:07.052542 master-0 kubenswrapper[28766]: W0318 09:04:07.050953 28766 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Mar 18 09:04:07.052542 master-0 
kubenswrapper[28766]: W0318 09:04:07.050962 28766 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Mar 18 09:04:07.052542 master-0 kubenswrapper[28766]: W0318 09:04:07.050970 28766 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Mar 18 09:04:07.052542 master-0 kubenswrapper[28766]: W0318 09:04:07.050979 28766 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Mar 18 09:04:07.052542 master-0 kubenswrapper[28766]: W0318 09:04:07.050989 28766 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Mar 18 09:04:07.052542 master-0 kubenswrapper[28766]: W0318 09:04:07.050997 28766 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Mar 18 09:04:07.052542 master-0 kubenswrapper[28766]: W0318 09:04:07.051006 28766 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Mar 18 09:04:07.052542 master-0 kubenswrapper[28766]: W0318 09:04:07.051015 28766 feature_gate.go:330] unrecognized feature gate: OVNObservability Mar 18 09:04:07.052542 master-0 kubenswrapper[28766]: W0318 09:04:07.051023 28766 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Mar 18 09:04:07.053062 master-0 kubenswrapper[28766]: W0318 09:04:07.051034 28766 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Mar 18 09:04:07.053062 master-0 kubenswrapper[28766]: I0318 09:04:07.051051 28766 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:true StreamingCollectionEncodingToProtobuf:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Mar 18 09:04:07.053062 master-0 kubenswrapper[28766]: I0318 09:04:07.051421 28766 server.go:940] "Client rotation is on, will bootstrap in background" Mar 18 09:04:07.055147 master-0 kubenswrapper[28766]: I0318 09:04:07.055101 28766 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary" Mar 18 09:04:07.055532 master-0 kubenswrapper[28766]: I0318 09:04:07.055439 28766 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Mar 18 09:04:07.056208 master-0 kubenswrapper[28766]: I0318 09:04:07.056166 28766 server.go:997] "Starting client certificate rotation" Mar 18 09:04:07.056270 master-0 kubenswrapper[28766]: I0318 09:04:07.056207 28766 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Mar 18 09:04:07.057048 master-0 kubenswrapper[28766]: I0318 09:04:07.056464 28766 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-03-19 08:38:09 +0000 UTC, rotation deadline is 2026-03-19 01:52:35.745022662 +0000 UTC Mar 18 09:04:07.057105 master-0 kubenswrapper[28766]: I0318 09:04:07.057045 28766 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 16h48m28.687986385s for next certificate rotation Mar 18 09:04:07.057738 master-0 kubenswrapper[28766]: I0318 09:04:07.057695 28766 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Mar 18 09:04:07.061385 master-0 kubenswrapper[28766]: I0318 09:04:07.061305 28766 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Mar 18 09:04:07.065523 master-0 kubenswrapper[28766]: I0318 09:04:07.065479 28766 log.go:25] "Validated CRI v1 runtime API" Mar 18 09:04:07.072521 master-0 kubenswrapper[28766]: I0318 09:04:07.072432 28766 log.go:25] "Validated CRI v1 image API" Mar 18 09:04:07.075179 master-0 kubenswrapper[28766]: I0318 09:04:07.075133 28766 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Mar 18 09:04:07.095124 master-0 kubenswrapper[28766]: I0318 09:04:07.095051 28766 fs.go:135] Filesystem UUIDs: map[7B77-95E7:/dev/vda2 910678ff-f77e-4a7d-8d53-86f2ac47a823:/dev/vda4 9d22b218-6091-4693-b191-06a05a0aba6f:/dev/vda3] Mar 18 09:04:07.100891 master-0 kubenswrapper[28766]: I0318 09:04:07.095115 28766 fs.go:136] Filesystem partitions: 
map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/00b7669c60621e059b9f2a3185ba93db56934e35fa8fa0713c09f3decdea9378/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/00b7669c60621e059b9f2a3185ba93db56934e35fa8fa0713c09f3decdea9378/userdata/shm major:0 minor:128 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/0152b496baa88626f806c2cd8158beac6c11d9696ef03e334ab29bac73c88cbe/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/0152b496baa88626f806c2cd8158beac6c11d9696ef03e334ab29bac73c88cbe/userdata/shm major:0 minor:130 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/01fc205ca60889e86b938272f49efc7613d39ee0f345e6249d36f7dbe33a148e/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/01fc205ca60889e86b938272f49efc7613d39ee0f345e6249d36f7dbe33a148e/userdata/shm major:0 minor:486 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/08c69ca72893cd876b16b5740d0ac91db39852d0fe47a473761270d55d7436d0/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/08c69ca72893cd876b16b5740d0ac91db39852d0fe47a473761270d55d7436d0/userdata/shm major:0 minor:1140 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/0abbacca379cb1aa4703d3e53f8d0cf0d9cc8837c199cd99507dcb84dbe142a8/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/0abbacca379cb1aa4703d3e53f8d0cf0d9cc8837c199cd99507dcb84dbe142a8/userdata/shm major:0 minor:1088 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/0cdcdcd2ccccdebd6503233827667ed7ce6f4654db0dc10c48bcf238245e2d46/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/0cdcdcd2ccccdebd6503233827667ed7ce6f4654db0dc10c48bcf238245e2d46/userdata/shm major:0 minor:733 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/0e06ef30b0d712353cac23adca2af0b5ab657ead19ee838202a1a4e15b1021cb/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/0e06ef30b0d712353cac23adca2af0b5ab657ead19ee838202a1a4e15b1021cb/userdata/shm major:0 minor:116 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/156dd659cded87fed4f4d9c1948aa273d3ce5df8a947527d51220517f67ececc/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/156dd659cded87fed4f4d9c1948aa273d3ce5df8a947527d51220517f67ececc/userdata/shm major:0 minor:740 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/15b9cae2d28df4fa59242b209b16efd412d30453ba1d9f0bfc42c07c896efdb2/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/15b9cae2d28df4fa59242b209b16efd412d30453ba1d9f0bfc42c07c896efdb2/userdata/shm major:0 minor:238 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/176bf98298dce9ebeff9e6cf55f250f7b8583bdf4845838e239879972b0093f1/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/176bf98298dce9ebeff9e6cf55f250f7b8583bdf4845838e239879972b0093f1/userdata/shm major:0 minor:571 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/17e72118bc9a21caf0710ea436fca2a94e237b39c26fb49832cf7ed5fa2efe7d/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/17e72118bc9a21caf0710ea436fca2a94e237b39c26fb49832cf7ed5fa2efe7d/userdata/shm major:0 minor:281 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/1bf9cb47892d0288027c6bb37223daf6c06c5b704eeeaa16637e3e622b28899a/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/1bf9cb47892d0288027c6bb37223daf6c06c5b704eeeaa16637e3e622b28899a/userdata/shm major:0 minor:779 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/21254471a19094b73e6733114f96329319386cc402e4cbd645f5a024b798fc80/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/21254471a19094b73e6733114f96329319386cc402e4cbd645f5a024b798fc80/userdata/shm major:0 minor:783 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/23865ef5bfea471643359580ecae55517bf670fdb3b8b05c871c139fe34b55d5/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/23865ef5bfea471643359580ecae55517bf670fdb3b8b05c871c139fe34b55d5/userdata/shm major:0 minor:267 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/263fd4cd6308173314717fc603c0f2464a1db66cd143ea0b303b9d029c2bd481/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/263fd4cd6308173314717fc603c0f2464a1db66cd143ea0b303b9d029c2bd481/userdata/shm major:0 minor:295 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/26ecaeebed65d3cea64cdc63150668e13ecd2fef68a18e11955a52673f9e9975/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/26ecaeebed65d3cea64cdc63150668e13ecd2fef68a18e11955a52673f9e9975/userdata/shm major:0 minor:504 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/27e819688a289fa256559a318b6523e53569525673491824d2f15c32bbc44e17/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/27e819688a289fa256559a318b6523e53569525673491824d2f15c32bbc44e17/userdata/shm major:0 minor:823 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/2c337c8902968583bee083c15c603882d48753850a36d0d861e8e0df75e9ad06/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/2c337c8902968583bee083c15c603882d48753850a36d0d861e8e0df75e9ad06/userdata/shm major:0 minor:880 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/2e229ef6f57fea8e5406ee6259b2efa0f8a16c288c8a29c71c1e32c057bf84d0/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/2e229ef6f57fea8e5406ee6259b2efa0f8a16c288c8a29c71c1e32c057bf84d0/userdata/shm major:0 minor:254 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/2f2e86c1c0e64c2e65cdc84455f83de896f426c03295ce65094d278bb54d2594/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/2f2e86c1c0e64c2e65cdc84455f83de896f426c03295ce65094d278bb54d2594/userdata/shm major:0 minor:434 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/301f04aeb1003f5e8d27049d79ee0b80e5fce89b95da440a253b676b3418f0d1/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/301f04aeb1003f5e8d27049d79ee0b80e5fce89b95da440a253b676b3418f0d1/userdata/shm major:0 minor:246 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/32c5cad9d5ce7a6a9868e1321b49281ebb4f7769c90afec706cbbbe9a7cdbdd6/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/32c5cad9d5ce7a6a9868e1321b49281ebb4f7769c90afec706cbbbe9a7cdbdd6/userdata/shm major:0 minor:89 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/32faaf71e97855a1cb6aa3bd19d52c689531407fd638810606403df329a94675/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/32faaf71e97855a1cb6aa3bd19d52c689531407fd638810606403df329a94675/userdata/shm major:0 minor:91 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/35bb7224fe9eca618f0100241589daaf5b90ad54413934d086e067f2a229eae2/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/35bb7224fe9eca618f0100241589daaf5b90ad54413934d086e067f2a229eae2/userdata/shm major:0 minor:758 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/3c4e15b0e2e376b6219a5a7e0e6e767c17e2686b088653fbb672e0c430635638/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/3c4e15b0e2e376b6219a5a7e0e6e767c17e2686b088653fbb672e0c430635638/userdata/shm major:0 minor:1016 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/3c7483d94d4b729fb2442b8f5c55aceeebc0aac5c97dd559a0179898c48164c2/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/3c7483d94d4b729fb2442b8f5c55aceeebc0aac5c97dd559a0179898c48164c2/userdata/shm major:0 minor:49 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/477f7fc213175cb954b186d8ae344e645aa5b57eb7978240c62ca1b2bcc281be/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/477f7fc213175cb954b186d8ae344e645aa5b57eb7978240c62ca1b2bcc281be/userdata/shm major:0 minor:1018 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/4cc1a3bde7a78af95462a4b4f6ce986942ed4140ae91386507e1857084f8fcea/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/4cc1a3bde7a78af95462a4b4f6ce986942ed4140ae91386507e1857084f8fcea/userdata/shm major:0 minor:866 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/4dced598bcd2040f1c605c245256a2161b2f459ac4faa81c6af5275d4099b859/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/4dced598bcd2040f1c605c245256a2161b2f459ac4faa81c6af5275d4099b859/userdata/shm major:0 minor:97 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/4fb480fe238d2202b063fb165afa539e61290f53ee162d859e36d1d4cd81bfd5/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/4fb480fe238d2202b063fb165afa539e61290f53ee162d859e36d1d4cd81bfd5/userdata/shm major:0 minor:475 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/52447280dead3b5a28af890c9c1936e68858aa0be2da0967ec252697841e8f7d/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/52447280dead3b5a28af890c9c1936e68858aa0be2da0967ec252697841e8f7d/userdata/shm major:0 minor:1086 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/548400f1bcdf7de3d454a40cdac983932202fdf4d758178348c7545ba7209bcb/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/548400f1bcdf7de3d454a40cdac983932202fdf4d758178348c7545ba7209bcb/userdata/shm major:0 minor:987 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/55b41391fdb5cf271845bf26cd3e0f895b338fd5cf036e303350901534473728/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/55b41391fdb5cf271845bf26cd3e0f895b338fd5cf036e303350901534473728/userdata/shm major:0 minor:569 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/5a2943917dc38b0012b7ecf0b0d92cb0eaf6fda9f9ba0f60f4167aa1dddca628/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/5a2943917dc38b0012b7ecf0b0d92cb0eaf6fda9f9ba0f60f4167aa1dddca628/userdata/shm major:0 minor:353 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/65a818ad31dbd4fa7bc3752867fcfb68d605bd15a5390e756d551630b2da7bfb/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/65a818ad31dbd4fa7bc3752867fcfb68d605bd15a5390e756d551630b2da7bfb/userdata/shm major:0 minor:54 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/6634f9815dab75e36ab077ad26870775c6b66428323ea93fb4028cdabc9be608/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/6634f9815dab75e36ab077ad26870775c6b66428323ea93fb4028cdabc9be608/userdata/shm major:0 minor:776 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/6f40c8c2653002ea6e916a625294f3f884745ae3fd33ab733118256908cbb925/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/6f40c8c2653002ea6e916a625294f3f884745ae3fd33ab733118256908cbb925/userdata/shm major:0 minor:506 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/7375d00faec570babb78f641885c44d45133bd27ded2430ca3ed60792534d150/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/7375d00faec570babb78f641885c44d45133bd27ded2430ca3ed60792534d150/userdata/shm major:0 minor:765 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/78bf827b88ee656669c068d855b66ac1c4ec3fa61f0cd2ad36e3510f8a53aa74/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/78bf827b88ee656669c068d855b66ac1c4ec3fa61f0cd2ad36e3510f8a53aa74/userdata/shm major:0 minor:65 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/7b07e88ac1eb70e2f8e0c7ac6bf4cc612d670ddad2d854d52139054ca73dfb7c/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/7b07e88ac1eb70e2f8e0c7ac6bf4cc612d670ddad2d854d52139054ca73dfb7c/userdata/shm major:0 minor:269 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/7b6fb81fa9b3775db2a9d43b8034ee4a9a2939e8e74ced3195abe4a7116a137d/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/7b6fb81fa9b3775db2a9d43b8034ee4a9a2939e8e74ced3195abe4a7116a137d/userdata/shm major:0 minor:451 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/7d31e16adf7f10cb16f9f4afb5a9c559f636c495a15abd8700657562f8afa08b/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/7d31e16adf7f10cb16f9f4afb5a9c559f636c495a15abd8700657562f8afa08b/userdata/shm major:0 minor:993 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/7d69a2aa0453ffd9d52f608b0f589cc8cbacbdbc94e468d5326ece0a3282eddd/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/7d69a2aa0453ffd9d52f608b0f589cc8cbacbdbc94e468d5326ece0a3282eddd/userdata/shm major:0 minor:566 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/7d9881841018d229060672bdf33946e413258966dde9be04451521b3c0265667/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/7d9881841018d229060672bdf33946e413258966dde9be04451521b3c0265667/userdata/shm major:0 minor:886 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/818594107c19b8863e506e8d4f0498cc1facb30c01ff790168223f67dc1385ac/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/818594107c19b8863e506e8d4f0498cc1facb30c01ff790168223f67dc1385ac/userdata/shm major:0 minor:582 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/837527d2f9f7319ea14fc20367ef17853e00cc20e938fc1184f891aa57296deb/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/837527d2f9f7319ea14fc20367ef17853e00cc20e938fc1184f891aa57296deb/userdata/shm major:0 minor:249 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/862f349be451274c2786c24620a1b3df5221d5b66e16cc9b0099daecc5ae9693/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/862f349be451274c2786c24620a1b3df5221d5b66e16cc9b0099daecc5ae9693/userdata/shm major:0 minor:809 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/88d505327814e64c05d565f5816ae97892418500facf7fd5799add8d17c8b232/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/88d505327814e64c05d565f5816ae97892418500facf7fd5799add8d17c8b232/userdata/shm major:0 minor:306 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/8f214df22b3108e2647e81c2065b29247bcd16b9d799cc094aa75352fed33b39/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/8f214df22b3108e2647e81c2065b29247bcd16b9d799cc094aa75352fed33b39/userdata/shm major:0 minor:561 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/91da701859683e09bbd69c5ea46a27c0da629a0940ac397355b74f2e9d28cde0/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/91da701859683e09bbd69c5ea46a27c0da629a0940ac397355b74f2e9d28cde0/userdata/shm major:0 minor:808 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/95171c03fc7a28cf1acc6d32a99defa7481a42e7b61b5f5262deb3933da18ccc/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/95171c03fc7a28cf1acc6d32a99defa7481a42e7b61b5f5262deb3933da18ccc/userdata/shm major:0 minor:409 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/9edfccecec2ce83d19d6f04be10c237136ad19be78d3969b003d45d0dd5cdd53/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/9edfccecec2ce83d19d6f04be10c237136ad19be78d3969b003d45d0dd5cdd53/userdata/shm major:0 minor:633 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/a058ca3e613163c208806f2f85e86778b10da29eadc77daac9aef1471afdc643/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/a058ca3e613163c208806f2f85e86778b10da29eadc77daac9aef1471afdc643/userdata/shm major:0 minor:279 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/a5f412f714f8914221964a888babc262e21046db3f1580b324543c6c04c3fbd9/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/a5f412f714f8914221964a888babc262e21046db3f1580b324543c6c04c3fbd9/userdata/shm major:0 minor:1080 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/a9d070f228bb3ad86327355b7631ce9d61aa33df655c8f354c0c3cf73e6bbfbd/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/a9d070f228bb3ad86327355b7631ce9d61aa33df655c8f354c0c3cf73e6bbfbd/userdata/shm major:0 minor:1194 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/b273b68e51f7dadf9df698a73d4ce02f6814882dc729b2c52672e829413c2a75/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/b273b68e51f7dadf9df698a73d4ce02f6814882dc729b2c52672e829413c2a75/userdata/shm major:0 minor:558 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/b42865dcd2dae3a2390972bbf267cd467643023a4c8d222016e0b44a61943afc/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/b42865dcd2dae3a2390972bbf267cd467643023a4c8d222016e0b44a61943afc/userdata/shm major:0 minor:248 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/b48235a991ddd5e0dbc46936f4240a715253ffe775f0aa19da8ca60c7a3f2ca0/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/b48235a991ddd5e0dbc46936f4240a715253ffe775f0aa19da8ca60c7a3f2ca0/userdata/shm major:0 minor:142 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/bd1fd64f6f95cdc3189bd097dac24d4300572f6ab92c972496e95007ac8e621a/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/bd1fd64f6f95cdc3189bd097dac24d4300572f6ab92c972496e95007ac8e621a/userdata/shm major:0 minor:42 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/c1e8680fcd730f22fac4464d7e2e919f0d68259c2072f7e2c075736c7c9f888d/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/c1e8680fcd730f22fac4464d7e2e919f0d68259c2072f7e2c075736c7c9f888d/userdata/shm major:0 minor:105 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/c1eb0a6c1ab17257358eeeb97010b410797c8ba9fd08a44d4ff2e76c51c917e0/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/c1eb0a6c1ab17257358eeeb97010b410797c8ba9fd08a44d4ff2e76c51c917e0/userdata/shm major:0 minor:621 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/c28524ce9ebb8a89b175cc98bd1b1e9d4101033acc5d2f2a96632789a23b70d2/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/c28524ce9ebb8a89b175cc98bd1b1e9d4101033acc5d2f2a96632789a23b70d2/userdata/shm major:0 minor:557 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/c62bfe26cbaa5afe7741b2ad05574cf96716a998721d303299c76986059ad0d0/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/c62bfe26cbaa5afe7741b2ad05574cf96716a998721d303299c76986059ad0d0/userdata/shm major:0 minor:843 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/c6f3ba629d26f9cdeb3d7860a7b0f64e21de0f0dc77a559ebfda83ee3654ece0/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/c6f3ba629d26f9cdeb3d7860a7b0f64e21de0f0dc77a559ebfda83ee3654ece0/userdata/shm major:0 minor:1014 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/cab7f3dd54d1235751e5892dcbba68fcd420bde6fbdec0b1e4ae52ac6f473f51/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/cab7f3dd54d1235751e5892dcbba68fcd420bde6fbdec0b1e4ae52ac6f473f51/userdata/shm major:0 minor:1046 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/d1339a30e998845d2411b5c92f3883b1457216fd5491cd19b8b7f3a77576f95c/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/d1339a30e998845d2411b5c92f3883b1457216fd5491cd19b8b7f3a77576f95c/userdata/shm major:0 minor:308 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/d1bca7add53921531b3272a47166466f7d2ed78f903322c5f6c45062071f9671/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/d1bca7add53921531b3272a47166466f7d2ed78f903322c5f6c45062071f9671/userdata/shm major:0 minor:109 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/d3d8011493c530c7726e87839672927a640cefde6cc363dd89bea6af846b7008/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/d3d8011493c530c7726e87839672927a640cefde6cc363dd89bea6af846b7008/userdata/shm major:0 minor:374 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/d4eadecdf9a3a2b8f4413e3b5de43801a78ed52767f124bb85a08953e8d985e4/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/d4eadecdf9a3a2b8f4413e3b5de43801a78ed52767f124bb85a08953e8d985e4/userdata/shm major:0 minor:778 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/d52b6a2cf90645c7d7adbd4e26631b5105d0e2c63496bcbe09fc57752e328d79/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/d52b6a2cf90645c7d7adbd4e26631b5105d0e2c63496bcbe09fc57752e328d79/userdata/shm major:0 minor:741 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/d9f6591fd179f080128bbdecaa328db0f824489c21d34724dd9ae09d41418d2c/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/d9f6591fd179f080128bbdecaa328db0f824489c21d34724dd9ae09d41418d2c/userdata/shm major:0 minor:568 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/dea41e38002f15edc5a2abae54e8fefc1a70d4002c8cd87d39c7bc11a4255185/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/dea41e38002f15edc5a2abae54e8fefc1a70d4002c8cd87d39c7bc11a4255185/userdata/shm major:0 minor:243 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/e8459c0c82ddc5a6e864e94a80eda98d197ebe97363ec23c2d9041a3ae2c51bb/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/e8459c0c82ddc5a6e864e94a80eda98d197ebe97363ec23c2d9041a3ae2c51bb/userdata/shm major:0 minor:846 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/ea77244427e21f197396c97f841977fffdf6891b18e6c927b783ae59d8efff47/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/ea77244427e21f197396c97f841977fffdf6891b18e6c927b783ae59d8efff47/userdata/shm major:0 minor:1058 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/ea87280c188a798da95cc9ce18e125174ff632d343ee3e8d6a214207d7770e1e/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/ea87280c188a798da95cc9ce18e125174ff632d343ee3e8d6a214207d7770e1e/userdata/shm major:0 minor:572 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/f1fbd15a6f55efb9df34e794516a926fbd6cd9758a5312e86f1eb743de9e13b5/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/f1fbd15a6f55efb9df34e794516a926fbd6cd9758a5312e86f1eb743de9e13b5/userdata/shm major:0 minor:260 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/f2c2ecd78b0b095cca6d610f53e1ff83eedc17b6a054e2d1a3484b11ec8181f6/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/f2c2ecd78b0b095cca6d610f53e1ff83eedc17b6a054e2d1a3484b11ec8181f6/userdata/shm major:0 minor:48 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/f3e26fe3d2ca6df6dc0161bddc1b304ebbc7fa75a6def1dd10d9bdbbd5e6b79d/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/f3e26fe3d2ca6df6dc0161bddc1b304ebbc7fa75a6def1dd10d9bdbbd5e6b79d/userdata/shm major:0 minor:1177 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/fb93ae4071b146962466e96a3daecbc8c529d6e1a15ad1edfa1a28da5c544561/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/fb93ae4071b146962466e96a3daecbc8c529d6e1a15ad1edfa1a28da5c544561/userdata/shm major:0 minor:562 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/fef2da050284c5b28c67d998136cd7aca2118deb05e66bc5e9cea3da325d47dc/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/fef2da050284c5b28c67d998136cd7aca2118deb05e66bc5e9cea3da325d47dc/userdata/shm major:0 minor:258 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/04e23989-853e-4b49-ba0f-1961d64ae3a3/volumes/kubernetes.io~projected/kube-api-access-qwsfl:{mountpoint:/var/lib/kubelet/pods/04e23989-853e-4b49-ba0f-1961d64ae3a3/volumes/kubernetes.io~projected/kube-api-access-qwsfl major:0 minor:757 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/04e23989-853e-4b49-ba0f-1961d64ae3a3/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/04e23989-853e-4b49-ba0f-1961d64ae3a3/volumes/kubernetes.io~secret/serving-cert major:0 minor:753 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/06cbd48a-1f1d-4734-8d57-e1b6824879b6/volumes/kubernetes.io~projected/kube-api-access-ltlf6:{mountpoint:/var/lib/kubelet/pods/06cbd48a-1f1d-4734-8d57-e1b6824879b6/volumes/kubernetes.io~projected/kube-api-access-ltlf6 major:0 minor:1078 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/06cbd48a-1f1d-4734-8d57-e1b6824879b6/volumes/kubernetes.io~secret/openshift-state-metrics-kube-rbac-proxy-config:{mountpoint:/var/lib/kubelet/pods/06cbd48a-1f1d-4734-8d57-e1b6824879b6/volumes/kubernetes.io~secret/openshift-state-metrics-kube-rbac-proxy-config major:0 minor:1070 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/06cbd48a-1f1d-4734-8d57-e1b6824879b6/volumes/kubernetes.io~secret/openshift-state-metrics-tls:{mountpoint:/var/lib/kubelet/pods/06cbd48a-1f1d-4734-8d57-e1b6824879b6/volumes/kubernetes.io~secret/openshift-state-metrics-tls major:0 minor:1085 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/07a4fd92-0fd1-4688-b2db-de615d75971e/volumes/kubernetes.io~projected/kube-api-access-5ngk7:{mountpoint:/var/lib/kubelet/pods/07a4fd92-0fd1-4688-b2db-de615d75971e/volumes/kubernetes.io~projected/kube-api-access-5ngk7 major:0 minor:103 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/07a4fd92-0fd1-4688-b2db-de615d75971e/volumes/kubernetes.io~secret/metrics-tls:{mountpoint:/var/lib/kubelet/pods/07a4fd92-0fd1-4688-b2db-de615d75971e/volumes/kubernetes.io~secret/metrics-tls major:0 minor:98 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/16d633c5-e0aa-4fb6-83e0-a2e976334406/volumes/kubernetes.io~projected/kube-api-access-x9w7l:{mountpoint:/var/lib/kubelet/pods/16d633c5-e0aa-4fb6-83e0-a2e976334406/volumes/kubernetes.io~projected/kube-api-access-x9w7l major:0 minor:137 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/16d633c5-e0aa-4fb6-83e0-a2e976334406/volumes/kubernetes.io~secret/webhook-cert:{mountpoint:/var/lib/kubelet/pods/16d633c5-e0aa-4fb6-83e0-a2e976334406/volumes/kubernetes.io~secret/webhook-cert major:0 minor:136 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/1794b726-5c0d-4a72-8ddd-418a2cbd8ded/volumes/kubernetes.io~projected/kube-api-access-gjq4w:{mountpoint:/var/lib/kubelet/pods/1794b726-5c0d-4a72-8ddd-418a2cbd8ded/volumes/kubernetes.io~projected/kube-api-access-gjq4w major:0 minor:774 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/1794b726-5c0d-4a72-8ddd-418a2cbd8ded/volumes/kubernetes.io~secret/apiservice-cert:{mountpoint:/var/lib/kubelet/pods/1794b726-5c0d-4a72-8ddd-418a2cbd8ded/volumes/kubernetes.io~secret/apiservice-cert major:0 minor:773 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/1794b726-5c0d-4a72-8ddd-418a2cbd8ded/volumes/kubernetes.io~secret/webhook-cert:{mountpoint:/var/lib/kubelet/pods/1794b726-5c0d-4a72-8ddd-418a2cbd8ded/volumes/kubernetes.io~secret/webhook-cert major:0 minor:772 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/18921497-d8ed-42d8-bf3c-a027566ebe85/volumes/kubernetes.io~projected/kube-api-access-vtz82:{mountpoint:/var/lib/kubelet/pods/18921497-d8ed-42d8-bf3c-a027566ebe85/volumes/kubernetes.io~projected/kube-api-access-vtz82 major:0 minor:45 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/18921497-d8ed-42d8-bf3c-a027566ebe85/volumes/kubernetes.io~secret/samples-operator-tls:{mountpoint:/var/lib/kubelet/pods/18921497-d8ed-42d8-bf3c-a027566ebe85/volumes/kubernetes.io~secret/samples-operator-tls major:0 minor:489 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/2207df9e-f21e-4c30-98d5-248ae99c245e/volume-subpaths/run-systemd/ovnkube-controller/6:{mountpoint:/var/lib/kubelet/pods/2207df9e-f21e-4c30-98d5-248ae99c245e/volume-subpaths/run-systemd/ovnkube-controller/6 major:0 minor:24 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/2207df9e-f21e-4c30-98d5-248ae99c245e/volumes/kubernetes.io~projected/kube-api-access-cj9fr:{mountpoint:/var/lib/kubelet/pods/2207df9e-f21e-4c30-98d5-248ae99c245e/volumes/kubernetes.io~projected/kube-api-access-cj9fr major:0 minor:127 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/2207df9e-f21e-4c30-98d5-248ae99c245e/volumes/kubernetes.io~secret/ovn-node-metrics-cert:{mountpoint:/var/lib/kubelet/pods/2207df9e-f21e-4c30-98d5-248ae99c245e/volumes/kubernetes.io~secret/ovn-node-metrics-cert major:0 minor:126 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/260c8aa5-a288-4ee8-b671-f97e90a2f39c/volumes/kubernetes.io~projected/kube-api-access:{mountpoint:/var/lib/kubelet/pods/260c8aa5-a288-4ee8-b671-f97e90a2f39c/volumes/kubernetes.io~projected/kube-api-access major:0 minor:236 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/260c8aa5-a288-4ee8-b671-f97e90a2f39c/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/260c8aa5-a288-4ee8-b671-f97e90a2f39c/volumes/kubernetes.io~secret/serving-cert major:0 minor:213 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/2700f537-8f31-4380-a527-3e697a8122cc/volumes/kubernetes.io~projected/kube-api-access-dqldd:{mountpoint:/var/lib/kubelet/pods/2700f537-8f31-4380-a527-3e697a8122cc/volumes/kubernetes.io~projected/kube-api-access-dqldd major:0 minor:485 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/2700f537-8f31-4380-a527-3e697a8122cc/volumes/kubernetes.io~secret/encryption-config:{mountpoint:/var/lib/kubelet/pods/2700f537-8f31-4380-a527-3e697a8122cc/volumes/kubernetes.io~secret/encryption-config major:0 minor:483 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/2700f537-8f31-4380-a527-3e697a8122cc/volumes/kubernetes.io~secret/etcd-client:{mountpoint:/var/lib/kubelet/pods/2700f537-8f31-4380-a527-3e697a8122cc/volumes/kubernetes.io~secret/etcd-client major:0 minor:484 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/2700f537-8f31-4380-a527-3e697a8122cc/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/2700f537-8f31-4380-a527-3e697a8122cc/volumes/kubernetes.io~secret/serving-cert major:0 minor:442 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/29ba6765-61c9-4f78-8f44-570418000c5c/volumes/kubernetes.io~projected/kube-api-access-xchll:{mountpoint:/var/lib/kubelet/pods/29ba6765-61c9-4f78-8f44-570418000c5c/volumes/kubernetes.io~projected/kube-api-access-xchll major:0 minor:332 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/31a92270-efed-44fe-871e-90333235e85f/volumes/kubernetes.io~projected/kube-api-access-8zhfh:{mountpoint:/var/lib/kubelet/pods/31a92270-efed-44fe-871e-90333235e85f/volumes/kubernetes.io~projected/kube-api-access-8zhfh major:0 minor:838 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/31a92270-efed-44fe-871e-90333235e85f/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/31a92270-efed-44fe-871e-90333235e85f/volumes/kubernetes.io~secret/serving-cert major:0 minor:816 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/336e741d-ac9a-4b94-9fbb-c9010e37c2d0/volumes/kubernetes.io~projected/kube-api-access-hbsfs:{mountpoint:/var/lib/kubelet/pods/336e741d-ac9a-4b94-9fbb-c9010e37c2d0/volumes/kubernetes.io~projected/kube-api-access-hbsfs major:0 minor:992 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/336e741d-ac9a-4b94-9fbb-c9010e37c2d0/volumes/kubernetes.io~secret/proxy-tls:{mountpoint:/var/lib/kubelet/pods/336e741d-ac9a-4b94-9fbb-c9010e37c2d0/volumes/kubernetes.io~secret/proxy-tls major:0 minor:977 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/33a5c021-23c3-4a97-b5f3-77fd6dcba1ab/volumes/kubernetes.io~projected/ca-certs:{mountpoint:/var/lib/kubelet/pods/33a5c021-23c3-4a97-b5f3-77fd6dcba1ab/volumes/kubernetes.io~projected/ca-certs major:0 minor:479 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/33a5c021-23c3-4a97-b5f3-77fd6dcba1ab/volumes/kubernetes.io~projected/kube-api-access-fbsgx:{mountpoint:/var/lib/kubelet/pods/33a5c021-23c3-4a97-b5f3-77fd6dcba1ab/volumes/kubernetes.io~projected/kube-api-access-fbsgx major:0 minor:480 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/34f2ec5e-cb68-415b-a9f2-5b7f10fa9bbe/volumes/kubernetes.io~projected/kube-api-access-2msp8:{mountpoint:/var/lib/kubelet/pods/34f2ec5e-cb68-415b-a9f2-5b7f10fa9bbe/volumes/kubernetes.io~projected/kube-api-access-2msp8 major:0 minor:253 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/34f2ec5e-cb68-415b-a9f2-5b7f10fa9bbe/volumes/kubernetes.io~secret/marketplace-operator-metrics:{mountpoint:/var/lib/kubelet/pods/34f2ec5e-cb68-415b-a9f2-5b7f10fa9bbe/volumes/kubernetes.io~secret/marketplace-operator-metrics major:0 minor:547 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/3d9fe248-ba87-47e3-911a-1b2b112b5683/volumes/kubernetes.io~projected/kube-api-access-4hn9w:{mountpoint:/var/lib/kubelet/pods/3d9fe248-ba87-47e3-911a-1b2b112b5683/volumes/kubernetes.io~projected/kube-api-access-4hn9w major:0 minor:245 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/3d9fe248-ba87-47e3-911a-1b2b112b5683/volumes/kubernetes.io~secret/srv-cert:{mountpoint:/var/lib/kubelet/pods/3d9fe248-ba87-47e3-911a-1b2b112b5683/volumes/kubernetes.io~secret/srv-cert major:0 minor:550 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/3e96b35f-c57a-4e01-82f7-894ea16ac5b8/volumes/kubernetes.io~projected/kube-api-access-rgs9m:{mountpoint:/var/lib/kubelet/pods/3e96b35f-c57a-4e01-82f7-894ea16ac5b8/volumes/kubernetes.io~projected/kube-api-access-rgs9m major:0 minor:1045 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/3e96b35f-c57a-4e01-82f7-894ea16ac5b8/volumes/kubernetes.io~secret/certs:{mountpoint:/var/lib/kubelet/pods/3e96b35f-c57a-4e01-82f7-894ea16ac5b8/volumes/kubernetes.io~secret/certs major:0 minor:1037 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/3e96b35f-c57a-4e01-82f7-894ea16ac5b8/volumes/kubernetes.io~secret/node-bootstrap-token:{mountpoint:/var/lib/kubelet/pods/3e96b35f-c57a-4e01-82f7-894ea16ac5b8/volumes/kubernetes.io~secret/node-bootstrap-token major:0 minor:1036 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/40f3b7a4-107c-4f1d-a3ab-b5d2309c373b/volumes/kubernetes.io~projected/kube-api-access-jnspk:{mountpoint:/var/lib/kubelet/pods/40f3b7a4-107c-4f1d-a3ab-b5d2309c373b/volumes/kubernetes.io~projected/kube-api-access-jnspk major:0 minor:833 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/40f3b7a4-107c-4f1d-a3ab-b5d2309c373b/volumes/kubernetes.io~secret/proxy-tls:{mountpoint:/var/lib/kubelet/pods/40f3b7a4-107c-4f1d-a3ab-b5d2309c373b/volumes/kubernetes.io~secret/proxy-tls major:0 minor:812 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/4146a62d-e37b-4295-90ca-b23f5e3d1112/volumes/kubernetes.io~projected/kube-api-access-4r7hx:{mountpoint:/var/lib/kubelet/pods/4146a62d-e37b-4295-90ca-b23f5e3d1112/volumes/kubernetes.io~projected/kube-api-access-4r7hx major:0 minor:1077 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/4146a62d-e37b-4295-90ca-b23f5e3d1112/volumes/kubernetes.io~secret/node-exporter-kube-rbac-proxy-config:{mountpoint:/var/lib/kubelet/pods/4146a62d-e37b-4295-90ca-b23f5e3d1112/volumes/kubernetes.io~secret/node-exporter-kube-rbac-proxy-config major:0 minor:1075 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/4146a62d-e37b-4295-90ca-b23f5e3d1112/volumes/kubernetes.io~secret/node-exporter-tls:{mountpoint:/var/lib/kubelet/pods/4146a62d-e37b-4295-90ca-b23f5e3d1112/volumes/kubernetes.io~secret/node-exporter-tls major:0 minor:1079 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/43fbd379-dd1e-4287-bd76-fd3ec51cde43/volumes/kubernetes.io~projected/ca-certs:{mountpoint:/var/lib/kubelet/pods/43fbd379-dd1e-4287-bd76-fd3ec51cde43/volumes/kubernetes.io~projected/ca-certs major:0 minor:476 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/43fbd379-dd1e-4287-bd76-fd3ec51cde43/volumes/kubernetes.io~projected/kube-api-access-c52pj:{mountpoint:/var/lib/kubelet/pods/43fbd379-dd1e-4287-bd76-fd3ec51cde43/volumes/kubernetes.io~projected/kube-api-access-c52pj major:0 minor:472 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/43fbd379-dd1e-4287-bd76-fd3ec51cde43/volumes/kubernetes.io~secret/catalogserver-certs:{mountpoint:/var/lib/kubelet/pods/43fbd379-dd1e-4287-bd76-fd3ec51cde43/volumes/kubernetes.io~secret/catalogserver-certs major:0 minor:497 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/495e0cff-fca8-4dad-9247-2fc0e7ce86fc/volumes/kubernetes.io~projected/kube-api-access-5qrqx:{mountpoint:/var/lib/kubelet/pods/495e0cff-fca8-4dad-9247-2fc0e7ce86fc/volumes/kubernetes.io~projected/kube-api-access-5qrqx major:0 minor:885 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/495e0cff-fca8-4dad-9247-2fc0e7ce86fc/volumes/kubernetes.io~secret/machine-approver-tls:{mountpoint:/var/lib/kubelet/pods/495e0cff-fca8-4dad-9247-2fc0e7ce86fc/volumes/kubernetes.io~secret/machine-approver-tls major:0 minor:884 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/4cc14de2-59e8-49e1-9eeb-f87c5e9d8a75/volumes/kubernetes.io~projected/kube-api-access-2m5wf:{mountpoint:/var/lib/kubelet/pods/4cc14de2-59e8-49e1-9eeb-f87c5e9d8a75/volumes/kubernetes.io~projected/kube-api-access-2m5wf major:0 minor:756 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/4cc14de2-59e8-49e1-9eeb-f87c5e9d8a75/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/4cc14de2-59e8-49e1-9eeb-f87c5e9d8a75/volumes/kubernetes.io~secret/serving-cert major:0 minor:755 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/52e32e2d-33ab-4351-ae8a-80acd6077d70/volumes/kubernetes.io~projected/kube-api-access-dm6nf:{mountpoint:/var/lib/kubelet/pods/52e32e2d-33ab-4351-ae8a-80acd6077d70/volumes/kubernetes.io~projected/kube-api-access-dm6nf major:0 minor:535 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/5320a1da-262a-4b1b-93b4-1df9d4c26eec/volumes/kubernetes.io~projected/kube-api-access-9q8l2:{mountpoint:/var/lib/kubelet/pods/5320a1da-262a-4b1b-93b4-1df9d4c26eec/volumes/kubernetes.io~projected/kube-api-access-9q8l2 major:0 minor:1139 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/5320a1da-262a-4b1b-93b4-1df9d4c26eec/volumes/kubernetes.io~secret/client-ca-bundle:{mountpoint:/var/lib/kubelet/pods/5320a1da-262a-4b1b-93b4-1df9d4c26eec/volumes/kubernetes.io~secret/client-ca-bundle major:0 minor:1137 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/5320a1da-262a-4b1b-93b4-1df9d4c26eec/volumes/kubernetes.io~secret/secret-metrics-client-certs:{mountpoint:/var/lib/kubelet/pods/5320a1da-262a-4b1b-93b4-1df9d4c26eec/volumes/kubernetes.io~secret/secret-metrics-client-certs major:0 minor:1138 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/5320a1da-262a-4b1b-93b4-1df9d4c26eec/volumes/kubernetes.io~secret/secret-metrics-server-tls:{mountpoint:/var/lib/kubelet/pods/5320a1da-262a-4b1b-93b4-1df9d4c26eec/volumes/kubernetes.io~secret/secret-metrics-server-tls major:0 minor:1133 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/573d3a02-e395-4816-963a-cd614ef53f75/volumes/kubernetes.io~projected/kube-api-access-n959l:{mountpoint:/var/lib/kubelet/pods/573d3a02-e395-4816-963a-cd614ef53f75/volumes/kubernetes.io~projected/kube-api-access-n959l major:0 minor:233 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/573d3a02-e395-4816-963a-cd614ef53f75/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/573d3a02-e395-4816-963a-cd614ef53f75/volumes/kubernetes.io~secret/serving-cert major:0 minor:214 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/5982111d-f4c6-4335-9b40-3142758fc2bc/volumes/kubernetes.io~projected/kube-api-access:{mountpoint:/var/lib/kubelet/pods/5982111d-f4c6-4335-9b40-3142758fc2bc/volumes/kubernetes.io~projected/kube-api-access major:0 minor:252 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/5982111d-f4c6-4335-9b40-3142758fc2bc/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/5982111d-f4c6-4335-9b40-3142758fc2bc/volumes/kubernetes.io~secret/serving-cert major:0 minor:240 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/59d50dd5-6793-4f96-a769-31e086ecc7e4/volumes/kubernetes.io~projected/kube-api-access-mlp7w:{mountpoint:/var/lib/kubelet/pods/59d50dd5-6793-4f96-a769-31e086ecc7e4/volumes/kubernetes.io~projected/kube-api-access-mlp7w major:0 minor:227 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/59d50dd5-6793-4f96-a769-31e086ecc7e4/volumes/kubernetes.io~secret/package-server-manager-serving-cert:{mountpoint:/var/lib/kubelet/pods/59d50dd5-6793-4f96-a769-31e086ecc7e4/volumes/kubernetes.io~secret/package-server-manager-serving-cert major:0 minor:541 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/68465463-5f2a-4e74-9c34-2706a185f7ea/volumes/kubernetes.io~projected/kube-api-access-gqlhh:{mountpoint:/var/lib/kubelet/pods/68465463-5f2a-4e74-9c34-2706a185f7ea/volumes/kubernetes.io~projected/kube-api-access-gqlhh major:0 minor:732 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/6fb1f871-9c24-48a1-a15a-a636b5bb687d/volumes/kubernetes.io~projected/kube-api-access-wxxcn:{mountpoint:/var/lib/kubelet/pods/6fb1f871-9c24-48a1-a15a-a636b5bb687d/volumes/kubernetes.io~projected/kube-api-access-wxxcn major:0 minor:224 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/772bc250-2e57-4ce0-883c-d44281fcb0be/volumes/kubernetes.io~projected/kube-api-access-dfjmx:{mountpoint:/var/lib/kubelet/pods/772bc250-2e57-4ce0-883c-d44281fcb0be/volumes/kubernetes.io~projected/kube-api-access-dfjmx major:0 minor:230 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/772bc250-2e57-4ce0-883c-d44281fcb0be/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/772bc250-2e57-4ce0-883c-d44281fcb0be/volumes/kubernetes.io~secret/serving-cert major:0 minor:216 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/7962fb40-1170-4c00-b1bf-92966aeae807/volumes/kubernetes.io~projected/bound-sa-token:{mountpoint:/var/lib/kubelet/pods/7962fb40-1170-4c00-b1bf-92966aeae807/volumes/kubernetes.io~projected/bound-sa-token major:0 minor:232 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/7962fb40-1170-4c00-b1bf-92966aeae807/volumes/kubernetes.io~projected/kube-api-access-47p9x:{mountpoint:/var/lib/kubelet/pods/7962fb40-1170-4c00-b1bf-92966aeae807/volumes/kubernetes.io~projected/kube-api-access-47p9x major:0 minor:234 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/7962fb40-1170-4c00-b1bf-92966aeae807/volumes/kubernetes.io~secret/image-registry-operator-tls:{mountpoint:/var/lib/kubelet/pods/7962fb40-1170-4c00-b1bf-92966aeae807/volumes/kubernetes.io~secret/image-registry-operator-tls major:0 minor:549 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/866c259c-7661-4a80-873b-6fd625218665/volumes/kubernetes.io~projected/kube-api-access-ftdvp:{mountpoint:/var/lib/kubelet/pods/866c259c-7661-4a80-873b-6fd625218665/volumes/kubernetes.io~projected/kube-api-access-ftdvp major:0 minor:266 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/8a6ab2be-d018-4fd5-bfbb-6b88aec28663/volumes/kubernetes.io~projected/kube-api-access:{mountpoint:/var/lib/kubelet/pods/8a6ab2be-d018-4fd5-bfbb-6b88aec28663/volumes/kubernetes.io~projected/kube-api-access major:0 minor:229 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/8a6ab2be-d018-4fd5-bfbb-6b88aec28663/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/8a6ab2be-d018-4fd5-bfbb-6b88aec28663/volumes/kubernetes.io~secret/serving-cert major:0 minor:219 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/8d89af2f-47f5-4ee5-a790-e162c2dee3ce/volumes/kubernetes.io~projected/kube-api-access:{mountpoint:/var/lib/kubelet/pods/8d89af2f-47f5-4ee5-a790-e162c2dee3ce/volumes/kubernetes.io~projected/kube-api-access major:0 minor:625 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/8d89af2f-47f5-4ee5-a790-e162c2dee3ce/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/8d89af2f-47f5-4ee5-a790-e162c2dee3ce/volumes/kubernetes.io~secret/serving-cert major:0 minor:630 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/8ebaeb8d-8fbd-4638-9516-fc4e90ba2fa8/volumes/kubernetes.io~projected/kube-api-access-d2bwv:{mountpoint:/var/lib/kubelet/pods/8ebaeb8d-8fbd-4638-9516-fc4e90ba2fa8/volumes/kubernetes.io~projected/kube-api-access-d2bwv major:0 minor:372 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/91a6fa86-8c58-43bc-a2d4-2b20901269f7/volumes/kubernetes.io~projected/kube-api-access-rpxfc:{mountpoint:/var/lib/kubelet/pods/91a6fa86-8c58-43bc-a2d4-2b20901269f7/volumes/kubernetes.io~projected/kube-api-access-rpxfc major:0 minor:1076 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/91a6fa86-8c58-43bc-a2d4-2b20901269f7/volumes/kubernetes.io~secret/kube-state-metrics-kube-rbac-proxy-config:{mountpoint:/var/lib/kubelet/pods/91a6fa86-8c58-43bc-a2d4-2b20901269f7/volumes/kubernetes.io~secret/kube-state-metrics-kube-rbac-proxy-config major:0 minor:1074 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/91a6fa86-8c58-43bc-a2d4-2b20901269f7/volumes/kubernetes.io~secret/kube-state-metrics-tls:{mountpoint:/var/lib/kubelet/pods/91a6fa86-8c58-43bc-a2d4-2b20901269f7/volumes/kubernetes.io~secret/kube-state-metrics-tls major:0 minor:1084 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/92542f7c-182b-45a8-bbf3-00e99ba7acee/volumes/kubernetes.io~projected/kube-api-access-4lv7n:{mountpoint:/var/lib/kubelet/pods/92542f7c-182b-45a8-bbf3-00e99ba7acee/volumes/kubernetes.io~projected/kube-api-access-4lv7n major:0 minor:747 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/939efa41-8f40-4f91-bee4-0425aead9760/volumes/kubernetes.io~projected/kube-api-access-8w58l:{mountpoint:/var/lib/kubelet/pods/939efa41-8f40-4f91-bee4-0425aead9760/volumes/kubernetes.io~projected/kube-api-access-8w58l major:0 minor:231 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/939efa41-8f40-4f91-bee4-0425aead9760/volumes/kubernetes.io~secret/etcd-client:{mountpoint:/var/lib/kubelet/pods/939efa41-8f40-4f91-bee4-0425aead9760/volumes/kubernetes.io~secret/etcd-client major:0 minor:220 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/939efa41-8f40-4f91-bee4-0425aead9760/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/939efa41-8f40-4f91-bee4-0425aead9760/volumes/kubernetes.io~secret/serving-cert major:0 minor:218 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/94e9e4fc-bfaa-4ba5-ad6d-fe76b91932e9/volumes/kubernetes.io~projected/bound-sa-token:{mountpoint:/var/lib/kubelet/pods/94e9e4fc-bfaa-4ba5-ad6d-fe76b91932e9/volumes/kubernetes.io~projected/bound-sa-token major:0 minor:237 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/94e9e4fc-bfaa-4ba5-ad6d-fe76b91932e9/volumes/kubernetes.io~projected/kube-api-access-tk9jq:{mountpoint:/var/lib/kubelet/pods/94e9e4fc-bfaa-4ba5-ad6d-fe76b91932e9/volumes/kubernetes.io~projected/kube-api-access-tk9jq major:0 minor:221 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/94e9e4fc-bfaa-4ba5-ad6d-fe76b91932e9/volumes/kubernetes.io~secret/metrics-tls:{mountpoint:/var/lib/kubelet/pods/94e9e4fc-bfaa-4ba5-ad6d-fe76b91932e9/volumes/kubernetes.io~secret/metrics-tls major:0 minor:551 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/97730ec2-e6f1-4f8c-b85c-3c10623d06ce/volumes/kubernetes.io~projected/kube-api-access-zj9rk:{mountpoint:/var/lib/kubelet/pods/97730ec2-e6f1-4f8c-b85c-3c10623d06ce/volumes/kubernetes.io~projected/kube-api-access-zj9rk major:0 minor:726 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/97730ec2-e6f1-4f8c-b85c-3c10623d06ce/volumes/kubernetes.io~secret/cert:{mountpoint:/var/lib/kubelet/pods/97730ec2-e6f1-4f8c-b85c-3c10623d06ce/volumes/kubernetes.io~secret/cert major:0 minor:460 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/97730ec2-e6f1-4f8c-b85c-3c10623d06ce/volumes/kubernetes.io~secret/cluster-baremetal-operator-tls:{mountpoint:/var/lib/kubelet/pods/97730ec2-e6f1-4f8c-b85c-3c10623d06ce/volumes/kubernetes.io~secret/cluster-baremetal-operator-tls major:0 minor:801 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/998cabe9-d479-439f-b1c0-1d8c49aefeb9/volumes/kubernetes.io~secret/tls-certificates:{mountpoint:/var/lib/kubelet/pods/998cabe9-d479-439f-b1c0-1d8c49aefeb9/volumes/kubernetes.io~secret/tls-certificates major:0 minor:1010 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/a268d595-18c2-43a2-8ed5-eb64c76c490f/volumes/kubernetes.io~projected/kube-api-access-hfzdp:{mountpoint:/var/lib/kubelet/pods/a268d595-18c2-43a2-8ed5-eb64c76c490f/volumes/kubernetes.io~projected/kube-api-access-hfzdp major:0 minor:760 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/a7dab805-612b-404c-ab97-8cee927169db/volumes/kubernetes.io~projected/kube-api-access-pjrfz:{mountpoint:/var/lib/kubelet/pods/a7dab805-612b-404c-ab97-8cee927169db/volumes/kubernetes.io~projected/kube-api-access-pjrfz major:0 minor:920 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/a7dab805-612b-404c-ab97-8cee927169db/volumes/kubernetes.io~secret/proxy-tls:{mountpoint:/var/lib/kubelet/pods/a7dab805-612b-404c-ab97-8cee927169db/volumes/kubernetes.io~secret/proxy-tls major:0 minor:912 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/ad4cf9b2-4e66-4921-a30c-7b659bff06ab/volumes/kubernetes.io~projected/kube-api-access-zkfql:{mountpoint:/var/lib/kubelet/pods/ad4cf9b2-4e66-4921-a30c-7b659bff06ab/volumes/kubernetes.io~projected/kube-api-access-zkfql major:0 minor:1012 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/ad4cf9b2-4e66-4921-a30c-7b659bff06ab/volumes/kubernetes.io~secret/default-certificate:{mountpoint:/var/lib/kubelet/pods/ad4cf9b2-4e66-4921-a30c-7b659bff06ab/volumes/kubernetes.io~secret/default-certificate major:0 minor:1011 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/ad4cf9b2-4e66-4921-a30c-7b659bff06ab/volumes/kubernetes.io~secret/metrics-certs:{mountpoint:/var/lib/kubelet/pods/ad4cf9b2-4e66-4921-a30c-7b659bff06ab/volumes/kubernetes.io~secret/metrics-certs major:0 minor:1005 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/ad4cf9b2-4e66-4921-a30c-7b659bff06ab/volumes/kubernetes.io~secret/stats-auth:{mountpoint:/var/lib/kubelet/pods/ad4cf9b2-4e66-4921-a30c-7b659bff06ab/volumes/kubernetes.io~secret/stats-auth major:0 minor:1009 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/b0280499-8277-46f0-bd8c-058a47a99e19/volumes/kubernetes.io~projected/kube-api-access-dxvk7:{mountpoint:/var/lib/kubelet/pods/b0280499-8277-46f0-bd8c-058a47a99e19/volumes/kubernetes.io~projected/kube-api-access-dxvk7 major:0 minor:262 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/b0280499-8277-46f0-bd8c-058a47a99e19/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/b0280499-8277-46f0-bd8c-058a47a99e19/volumes/kubernetes.io~secret/serving-cert major:0 minor:241 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/b065df33-7911-456e-b3a2-1f8c8d53e053/volumes/kubernetes.io~projected/kube-api-access-pz26d:{mountpoint:/var/lib/kubelet/pods/b065df33-7911-456e-b3a2-1f8c8d53e053/volumes/kubernetes.io~projected/kube-api-access-pz26d major:0 minor:228 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/b065df33-7911-456e-b3a2-1f8c8d53e053/volumes/kubernetes.io~secret/srv-cert:{mountpoint:/var/lib/kubelet/pods/b065df33-7911-456e-b3a2-1f8c8d53e053/volumes/kubernetes.io~secret/srv-cert major:0 minor:553 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/b35ab145-16a7-4ef1-86e8-0afb6ff469fd/volumes/kubernetes.io~projected/kube-api-access-tp77s:{mountpoint:/var/lib/kubelet/pods/b35ab145-16a7-4ef1-86e8-0afb6ff469fd/volumes/kubernetes.io~projected/kube-api-access-tp77s major:0 minor:663 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/b35ab145-16a7-4ef1-86e8-0afb6ff469fd/volumes/kubernetes.io~secret/metrics-tls:{mountpoint:/var/lib/kubelet/pods/b35ab145-16a7-4ef1-86e8-0afb6ff469fd/volumes/kubernetes.io~secret/metrics-tls major:0 minor:664 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/b5f9f50b-e7b4-4b81-864b-349303f21447/volumes/kubernetes.io~projected/kube-api-access-bpj79:{mountpoint:/var/lib/kubelet/pods/b5f9f50b-e7b4-4b81-864b-349303f21447/volumes/kubernetes.io~projected/kube-api-access-bpj79 major:0 minor:450 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/b5f9f50b-e7b4-4b81-864b-349303f21447/volumes/kubernetes.io~secret/encryption-config:{mountpoint:/var/lib/kubelet/pods/b5f9f50b-e7b4-4b81-864b-349303f21447/volumes/kubernetes.io~secret/encryption-config major:0 minor:427 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/b5f9f50b-e7b4-4b81-864b-349303f21447/volumes/kubernetes.io~secret/etcd-client:{mountpoint:/var/lib/kubelet/pods/b5f9f50b-e7b4-4b81-864b-349303f21447/volumes/kubernetes.io~secret/etcd-client major:0 minor:449 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/b5f9f50b-e7b4-4b81-864b-349303f21447/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/b5f9f50b-e7b4-4b81-864b-349303f21447/volumes/kubernetes.io~secret/serving-cert major:0 minor:448 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/b9768e50-c883-47b0-b319-851fa53ac19a/volumes/kubernetes.io~projected/kube-api-access-bw5tw:{mountpoint:/var/lib/kubelet/pods/b9768e50-c883-47b0-b319-851fa53ac19a/volumes/kubernetes.io~projected/kube-api-access-bw5tw major:0 minor:831 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/b9768e50-c883-47b0-b319-851fa53ac19a/volumes/kubernetes.io~secret/machine-api-operator-tls:{mountpoint:/var/lib/kubelet/pods/b9768e50-c883-47b0-b319-851fa53ac19a/volumes/kubernetes.io~secret/machine-api-operator-tls major:0 minor:818 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/bd8ee9ae-4765-46bc-84d7-b5857fc3fb4a/volumes/kubernetes.io~projected/kube-api-access-8lsw9:{mountpoint:/var/lib/kubelet/pods/bd8ee9ae-4765-46bc-84d7-b5857fc3fb4a/volumes/kubernetes.io~projected/kube-api-access-8lsw9 major:0 minor:225 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/bd8ee9ae-4765-46bc-84d7-b5857fc3fb4a/volumes/kubernetes.io~secret/apiservice-cert:{mountpoint:/var/lib/kubelet/pods/bd8ee9ae-4765-46bc-84d7-b5857fc3fb4a/volumes/kubernetes.io~secret/apiservice-cert major:0 minor:431 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/bd8ee9ae-4765-46bc-84d7-b5857fc3fb4a/volumes/kubernetes.io~secret/node-tuning-operator-tls:{mountpoint:/var/lib/kubelet/pods/bd8ee9ae-4765-46bc-84d7-b5857fc3fb4a/volumes/kubernetes.io~secret/node-tuning-operator-tls major:0 minor:432 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/c110b293-2c6b-496b-b015-23aada98cb4b/volumes/kubernetes.io~projected/kube-api-access-lw27k:{mountpoint:/var/lib/kubelet/pods/c110b293-2c6b-496b-b015-23aada98cb4b/volumes/kubernetes.io~projected/kube-api-access-lw27k major:0 minor:256 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/c110b293-2c6b-496b-b015-23aada98cb4b/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/c110b293-2c6b-496b-b015-23aada98cb4b/volumes/kubernetes.io~secret/serving-cert major:0 minor:242 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/ccf74af5-d4fd-4ed3-9784-42397ea798c5/volumes/kubernetes.io~projected/kube-api-access-p9qkd:{mountpoint:/var/lib/kubelet/pods/ccf74af5-d4fd-4ed3-9784-42397ea798c5/volumes/kubernetes.io~projected/kube-api-access-p9qkd major:0 minor:467 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/ccf74af5-d4fd-4ed3-9784-42397ea798c5/volumes/kubernetes.io~secret/cloud-controller-manager-operator-tls:{mountpoint:/var/lib/kubelet/pods/ccf74af5-d4fd-4ed3-9784-42397ea798c5/volumes/kubernetes.io~secret/cloud-controller-manager-operator-tls major:0 minor:463 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/d0272f7c-bedc-44cf-9790-88e10e6dda03/volumes/kubernetes.io~projected/kube-api-access-ttnk9:{mountpoint:/var/lib/kubelet/pods/d0272f7c-bedc-44cf-9790-88e10e6dda03/volumes/kubernetes.io~projected/kube-api-access-ttnk9 major:0 minor:433 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/d0272f7c-bedc-44cf-9790-88e10e6dda03/volumes/kubernetes.io~secret/cert:{mountpoint:/var/lib/kubelet/pods/d0272f7c-bedc-44cf-9790-88e10e6dda03/volumes/kubernetes.io~secret/cert major:0 minor:329 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/d6fe8ee6-737e-438a-8d9d-1ec712f6bacf/volumes/kubernetes.io~projected/kube-api-access-czm78:{mountpoint:/var/lib/kubelet/pods/d6fe8ee6-737e-438a-8d9d-1ec712f6bacf/volumes/kubernetes.io~projected/kube-api-access-czm78 major:0 minor:731 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/d6fe8ee6-737e-438a-8d9d-1ec712f6bacf/volumes/kubernetes.io~secret/control-plane-machine-set-operator-tls:{mountpoint:/var/lib/kubelet/pods/d6fe8ee6-737e-438a-8d9d-1ec712f6bacf/volumes/kubernetes.io~secret/control-plane-machine-set-operator-tls major:0 minor:728 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/d71aa1b9-6eb5-4331-b959-8930e10817b4/volumes/kubernetes.io~projected/kube-api-access-x5q4t:{mountpoint:/var/lib/kubelet/pods/d71aa1b9-6eb5-4331-b959-8930e10817b4/volumes/kubernetes.io~projected/kube-api-access-x5q4t major:0 minor:1057 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/d71aa1b9-6eb5-4331-b959-8930e10817b4/volumes/kubernetes.io~secret/prometheus-operator-kube-rbac-proxy-config:{mountpoint:/var/lib/kubelet/pods/d71aa1b9-6eb5-4331-b959-8930e10817b4/volumes/kubernetes.io~secret/prometheus-operator-kube-rbac-proxy-config major:0 minor:1056 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/d71aa1b9-6eb5-4331-b959-8930e10817b4/volumes/kubernetes.io~secret/prometheus-operator-tls:{mountpoint:/var/lib/kubelet/pods/d71aa1b9-6eb5-4331-b959-8930e10817b4/volumes/kubernetes.io~secret/prometheus-operator-tls major:0 minor:1052 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/d858bfd6-e69b-4c93-a6d7-95cc0fc3ca29/volumes/kubernetes.io~projected/kube-api-access-x6zq8:{mountpoint:/var/lib/kubelet/pods/d858bfd6-e69b-4c93-a6d7-95cc0fc3ca29/volumes/kubernetes.io~projected/kube-api-access-x6zq8 major:0 minor:120 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/d858bfd6-e69b-4c93-a6d7-95cc0fc3ca29/volumes/kubernetes.io~secret/metrics-certs:{mountpoint:/var/lib/kubelet/pods/d858bfd6-e69b-4c93-a6d7-95cc0fc3ca29/volumes/kubernetes.io~secret/metrics-certs major:0 minor:552 
fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/e025d334-20e7-491f-8027-194251398747/volumes/kubernetes.io~projected/kube-api-access-bfzdk:{mountpoint:/var/lib/kubelet/pods/e025d334-20e7-491f-8027-194251398747/volumes/kubernetes.io~projected/kube-api-access-bfzdk major:0 minor:226 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/e025d334-20e7-491f-8027-194251398747/volumes/kubernetes.io~secret/metrics-tls:{mountpoint:/var/lib/kubelet/pods/e025d334-20e7-491f-8027-194251398747/volumes/kubernetes.io~secret/metrics-tls major:0 minor:554 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/e0bb044f-5a4e-4981-8084-91348ce1a56a/volumes/kubernetes.io~projected/kube-api-access-ks4jl:{mountpoint:/var/lib/kubelet/pods/e0bb044f-5a4e-4981-8084-91348ce1a56a/volumes/kubernetes.io~projected/kube-api-access-ks4jl major:0 minor:1193 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/e0bb044f-5a4e-4981-8084-91348ce1a56a/volumes/kubernetes.io~secret/webhook-certs:{mountpoint:/var/lib/kubelet/pods/e0bb044f-5a4e-4981-8084-91348ce1a56a/volumes/kubernetes.io~secret/webhook-certs major:0 minor:1188 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/e0d127be-2d13-449b-915b-2d49052baf02/volumes/kubernetes.io~projected/kube-api-access:{mountpoint:/var/lib/kubelet/pods/e0d127be-2d13-449b-915b-2d49052baf02/volumes/kubernetes.io~projected/kube-api-access major:0 minor:798 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/e2ade7e6-cecd-4e98-8f85-ea8219303d75/volumes/kubernetes.io~projected/kube-api-access-vfjgn:{mountpoint:/var/lib/kubelet/pods/e2ade7e6-cecd-4e98-8f85-ea8219303d75/volumes/kubernetes.io~projected/kube-api-access-vfjgn major:0 minor:222 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/e2ade7e6-cecd-4e98-8f85-ea8219303d75/volumes/kubernetes.io~secret/cluster-olm-operator-serving-cert:{mountpoint:/var/lib/kubelet/pods/e2ade7e6-cecd-4e98-8f85-ea8219303d75/volumes/kubernetes.io~secret/cluster-olm-operator-serving-cert major:0 minor:217 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/e5ae1886-f90c-49f4-bf08-055b55dd785a/volumes/kubernetes.io~projected/kube-api-access-4fql4:{mountpoint:/var/lib/kubelet/pods/e5ae1886-f90c-49f4-bf08-055b55dd785a/volumes/kubernetes.io~projected/kube-api-access-4fql4 major:0 minor:1176 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/e5ae1886-f90c-49f4-bf08-055b55dd785a/volumes/kubernetes.io~secret/federate-client-tls:{mountpoint:/var/lib/kubelet/pods/e5ae1886-f90c-49f4-bf08-055b55dd785a/volumes/kubernetes.io~secret/federate-client-tls major:0 minor:1169 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/e5ae1886-f90c-49f4-bf08-055b55dd785a/volumes/kubernetes.io~secret/secret-telemeter-client:{mountpoint:/var/lib/kubelet/pods/e5ae1886-f90c-49f4-bf08-055b55dd785a/volumes/kubernetes.io~secret/secret-telemeter-client major:0 minor:1174 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/e5ae1886-f90c-49f4-bf08-055b55dd785a/volumes/kubernetes.io~secret/secret-telemeter-client-kube-rbac-proxy-config:{mountpoint:/var/lib/kubelet/pods/e5ae1886-f90c-49f4-bf08-055b55dd785a/volumes/kubernetes.io~secret/secret-telemeter-client-kube-rbac-proxy-config major:0 minor:1173 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/e5ae1886-f90c-49f4-bf08-055b55dd785a/volumes/kubernetes.io~secret/telemeter-client-tls:{mountpoint:/var/lib/kubelet/pods/e5ae1886-f90c-49f4-bf08-055b55dd785a/volumes/kubernetes.io~secret/telemeter-client-tls major:0 minor:1175 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/e64ea71a-1e89-409a-9607-4d3cea093643/volumes/kubernetes.io~projected/kube-api-access-b689k:{mountpoint:/var/lib/kubelet/pods/e64ea71a-1e89-409a-9607-4d3cea093643/volumes/kubernetes.io~projected/kube-api-access-b689k major:0 minor:456 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/e64ea71a-1e89-409a-9607-4d3cea093643/volumes/kubernetes.io~secret/cloud-credential-operator-serving-cert:{mountpoint:/var/lib/kubelet/pods/e64ea71a-1e89-409a-9607-4d3cea093643/volumes/kubernetes.io~secret/cloud-credential-operator-serving-cert major:0 
minor:453 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/e7b72267-fc08-41ed-a92b-9fca7372aba6/volumes/kubernetes.io~projected/kube-api-access-dwrdc:{mountpoint:/var/lib/kubelet/pods/e7b72267-fc08-41ed-a92b-9fca7372aba6/volumes/kubernetes.io~projected/kube-api-access-dwrdc major:0 minor:255 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/e7b72267-fc08-41ed-a92b-9fca7372aba6/volumes/kubernetes.io~secret/cluster-monitoring-operator-tls:{mountpoint:/var/lib/kubelet/pods/e7b72267-fc08-41ed-a92b-9fca7372aba6/volumes/kubernetes.io~secret/cluster-monitoring-operator-tls major:0 minor:546 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/ec11012b-536a-422f-afc4-d2d0fd4b67fb/volumes/kubernetes.io~projected/kube-api-access-svdhs:{mountpoint:/var/lib/kubelet/pods/ec11012b-536a-422f-afc4-d2d0fd4b67fb/volumes/kubernetes.io~projected/kube-api-access-svdhs major:0 minor:235 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/ec11012b-536a-422f-afc4-d2d0fd4b67fb/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/ec11012b-536a-422f-afc4-d2d0fd4b67fb/volumes/kubernetes.io~secret/serving-cert major:0 minor:209 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/edc7f629-4288-443b-aa8e-78bc6a09c848/volumes/kubernetes.io~projected/kube-api-access-glt6c:{mountpoint:/var/lib/kubelet/pods/edc7f629-4288-443b-aa8e-78bc6a09c848/volumes/kubernetes.io~projected/kube-api-access-glt6c major:0 minor:125 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/edc7f629-4288-443b-aa8e-78bc6a09c848/volumes/kubernetes.io~secret/ovn-control-plane-metrics-cert:{mountpoint:/var/lib/kubelet/pods/edc7f629-4288-443b-aa8e-78bc6a09c848/volumes/kubernetes.io~secret/ovn-control-plane-metrics-cert major:0 minor:124 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/f650e6f0-fb74-4083-a7a9-fa4df513108f/volumes/kubernetes.io~projected/kube-api-access-tsc6v:{mountpoint:/var/lib/kubelet/pods/f650e6f0-fb74-4083-a7a9-fa4df513108f/volumes/kubernetes.io~projected/kube-api-access-tsc6v major:0 minor:1013 fsType:tmpfs 
blockSize:0} /var/lib/kubelet/pods/f65344cd-8571-4a78-927f-eec46ec1af51/volumes/kubernetes.io~projected/kube-api-access-djq7n:{mountpoint:/var/lib/kubelet/pods/f65344cd-8571-4a78-927f-eec46ec1af51/volumes/kubernetes.io~projected/kube-api-access-djq7n major:0 minor:754 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/f826efe0-60a1-4465-b8d0-d4069ed507a1/volumes/kubernetes.io~empty-dir/etc-tuned:{mountpoint:/var/lib/kubelet/pods/f826efe0-60a1-4465-b8d0-d4069ed507a1/volumes/kubernetes.io~empty-dir/etc-tuned major:0 minor:379 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/f826efe0-60a1-4465-b8d0-d4069ed507a1/volumes/kubernetes.io~empty-dir/tmp:{mountpoint:/var/lib/kubelet/pods/f826efe0-60a1-4465-b8d0-d4069ed507a1/volumes/kubernetes.io~empty-dir/tmp major:0 minor:496 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/f826efe0-60a1-4465-b8d0-d4069ed507a1/volumes/kubernetes.io~projected/kube-api-access-6bzxp:{mountpoint:/var/lib/kubelet/pods/f826efe0-60a1-4465-b8d0-d4069ed507a1/volumes/kubernetes.io~projected/kube-api-access-6bzxp major:0 minor:373 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/f9fa104a-4979-4023-8d7e-a965f11bc7db/volumes/kubernetes.io~projected/kube-api-access-jlwg9:{mountpoint:/var/lib/kubelet/pods/f9fa104a-4979-4023-8d7e-a965f11bc7db/volumes/kubernetes.io~projected/kube-api-access-jlwg9 major:0 minor:115 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/fa8f1797-0219-49fe-82b5-7416cc481c3a/volumes/kubernetes.io~projected/kube-api-access-njbjp:{mountpoint:/var/lib/kubelet/pods/fa8f1797-0219-49fe-82b5-7416cc481c3a/volumes/kubernetes.io~projected/kube-api-access-njbjp major:0 minor:408 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/fa8f1797-0219-49fe-82b5-7416cc481c3a/volumes/kubernetes.io~secret/signing-key:{mountpoint:/var/lib/kubelet/pods/fa8f1797-0219-49fe-82b5-7416cc481c3a/volumes/kubernetes.io~secret/signing-key major:0 minor:404 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/fc289a83-9a2e-404b-b148-605639362703/volumes/kubernetes.io~projected/kube-api-access-l7lrl:{mountpoint:/var/lib/kubelet/pods/fc289a83-9a2e-404b-b148-605639362703/volumes/kubernetes.io~projected/kube-api-access-l7lrl major:0 minor:303 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/fc5a9875-d97e-4371-a15d-a1f43b85abce/volumes/kubernetes.io~projected/kube-api-access-mvlvd:{mountpoint:/var/lib/kubelet/pods/fc5a9875-d97e-4371-a15d-a1f43b85abce/volumes/kubernetes.io~projected/kube-api-access-mvlvd major:0 minor:473 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/fc5a9875-d97e-4371-a15d-a1f43b85abce/volumes/kubernetes.io~secret/cluster-storage-operator-serving-cert:{mountpoint:/var/lib/kubelet/pods/fc5a9875-d97e-4371-a15d-a1f43b85abce/volumes/kubernetes.io~secret/cluster-storage-operator-serving-cert major:0 minor:464 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/fcf89a76-7a94-46d3-853e-68e986563764/volumes/kubernetes.io~projected/kube-api-access-s8prf:{mountpoint:/var/lib/kubelet/pods/fcf89a76-7a94-46d3-853e-68e986563764/volumes/kubernetes.io~projected/kube-api-access-s8prf major:0 minor:223 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/fcf89a76-7a94-46d3-853e-68e986563764/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/fcf89a76-7a94-46d3-853e-68e986563764/volumes/kubernetes.io~secret/serving-cert major:0 minor:215 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4/volumes/kubernetes.io~projected/kube-api-access-hpl2c:{mountpoint:/var/lib/kubelet/pods/fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4/volumes/kubernetes.io~projected/kube-api-access-hpl2c major:0 minor:102 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/ffc5379c-651f-490c-90f4-1285b9093596/volumes/kubernetes.io~projected/kube-api-access-4vfrs:{mountpoint:/var/lib/kubelet/pods/ffc5379c-651f-490c-90f4-1285b9093596/volumes/kubernetes.io~projected/kube-api-access-4vfrs major:0 minor:832 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/ffc5379c-651f-490c-90f4-1285b9093596/volumes/kubernetes.io~secret/cert:{mountpoint:/var/lib/kubelet/pods/ffc5379c-651f-490c-90f4-1285b9093596/volumes/kubernetes.io~secret/cert major:0 minor:830 fsType:tmpfs blockSize:0} overlay_0-1020:{mountpoint:/var/lib/containers/storage/overlay/7ea5ee1e6a99e7c2bb398c429cb45b169d4d3eb224092b265d4788fbb0b236f7/merged major:0 minor:1020 fsType:overlay blockSize:0} overlay_0-1022:{mountpoint:/var/lib/containers/storage/overlay/b3be247e779ea0bee75848796f452cbd264e768f94b1ec9587f73f386fb48b64/merged major:0 minor:1022 fsType:overlay blockSize:0} overlay_0-1024:{mountpoint:/var/lib/containers/storage/overlay/0d9ef2e216489a20890c78beb848a09ec27b9714e5a1091dc2b0654f90834bb0/merged major:0 minor:1024 fsType:overlay blockSize:0} overlay_0-1026:{mountpoint:/var/lib/containers/storage/overlay/0b88ebd383d2e3b37ea821e74b17d90d6313ae54a209725e8c1ac3361ae1c463/merged major:0 minor:1026 fsType:overlay blockSize:0} overlay_0-1027:{mountpoint:/var/lib/containers/storage/overlay/497393acdc4c647ab88e35d144f7a4631273da6c20f16c178c94519e654e9081/merged major:0 minor:1027 fsType:overlay blockSize:0} overlay_0-1035:{mountpoint:/var/lib/containers/storage/overlay/cb2c54a5aaac3758a86e240014ce56e00c17a957ede72b3b7d4a39a411648ad9/merged major:0 minor:1035 fsType:overlay blockSize:0} overlay_0-1042:{mountpoint:/var/lib/containers/storage/overlay/f1c7d43099153e0a4393d7a13a2c78910582efffcd8a2ad391788cc4b9564971/merged major:0 minor:1042 fsType:overlay blockSize:0} overlay_0-1048:{mountpoint:/var/lib/containers/storage/overlay/2e4fab05d58e4665b5128a035ecefa1c9a975a583ad63983b46df42796a8b443/merged major:0 minor:1048 fsType:overlay blockSize:0} overlay_0-1050:{mountpoint:/var/lib/containers/storage/overlay/77936bf2ffbdb52425ab2f32b4e445f35b436f83cfcd00c8fa91fa4d72412a63/merged major:0 minor:1050 fsType:overlay blockSize:0} 
overlay_0-1060:{mountpoint:/var/lib/containers/storage/overlay/d1db18b9b2f21f4b20e7c91c39aceec98a08381c5420f6d8dd189c7f575b0f7f/merged major:0 minor:1060 fsType:overlay blockSize:0} overlay_0-1062:{mountpoint:/var/lib/containers/storage/overlay/638f0c6c1bfc5135c4da2848b447b8489b4b06ffac286ff3c0f431aa03e8b617/merged major:0 minor:1062 fsType:overlay blockSize:0} overlay_0-1064:{mountpoint:/var/lib/containers/storage/overlay/57c2d7ef6e46d29460a16d7a5513c0b4c796927758f10f725588dea930eda15c/merged major:0 minor:1064 fsType:overlay blockSize:0} overlay_0-107:{mountpoint:/var/lib/containers/storage/overlay/51511e2c4ec6c149fafad6b6fdf93f73f3c58315ba5efd70699693456e4413d3/merged major:0 minor:107 fsType:overlay blockSize:0} overlay_0-1082:{mountpoint:/var/lib/containers/storage/overlay/348a77cb8a185c15670ef4cf9af88ebf37e70c01d0cbb7d834567bf4b532618a/merged major:0 minor:1082 fsType:overlay blockSize:0} overlay_0-1090:{mountpoint:/var/lib/containers/storage/overlay/489716093a7df7d61f51d92fd37d6d124c030aad74e0852c693d6a144d108173/merged major:0 minor:1090 fsType:overlay blockSize:0} overlay_0-1092:{mountpoint:/var/lib/containers/storage/overlay/dae88c24ec6ac5981dee98bb769e2bc82935a97bda3ca21500e311d9135356f2/merged major:0 minor:1092 fsType:overlay blockSize:0} overlay_0-1094:{mountpoint:/var/lib/containers/storage/overlay/23d288272507523c24a35f025db024ec75645828c863dfa03d7730f633912ba1/merged major:0 minor:1094 fsType:overlay blockSize:0} overlay_0-1096:{mountpoint:/var/lib/containers/storage/overlay/9cec8664bb6e0b8304b92144b13ff00648dec522bf2a267029f9ac5d53ff6592/merged major:0 minor:1096 fsType:overlay blockSize:0} overlay_0-1102:{mountpoint:/var/lib/containers/storage/overlay/8dc904af91217a9880692051173431b62d84a7e23299e5f09b98f728847b9e25/merged major:0 minor:1102 fsType:overlay blockSize:0} overlay_0-1107:{mountpoint:/var/lib/containers/storage/overlay/3acc033ac246fb9c675be653047d72544d388ac6d1ac90f6c0a29320a5da652d/merged major:0 minor:1107 fsType:overlay blockSize:0} 
overlay_0-1109:{mountpoint:/var/lib/containers/storage/overlay/65fdad33d512817a63b6df8ed661d166d167934cce91b2c059f7f99394500062/merged major:0 minor:1109 fsType:overlay blockSize:0} overlay_0-111:{mountpoint:/var/lib/containers/storage/overlay/0151f629f33c98f0a9d7a41bb76936694c3784833de895e045717f4d9575bcbe/merged major:0 minor:111 fsType:overlay blockSize:0} overlay_0-1110:{mountpoint:/var/lib/containers/storage/overlay/1b4f11f4c63e854e84f6dbfba5796e9c6b9acd9c46fea423cab1d07aaa0cbf96/merged major:0 minor:1110 fsType:overlay blockSize:0} overlay_0-1123:{mountpoint:/var/lib/containers/storage/overlay/23d8d326b8f1405fd59c3d44f779e86fd7a5ffbfda62be030ae9892364ddc886/merged major:0 minor:1123 fsType:overlay blockSize:0} overlay_0-1127:{mountpoint:/var/lib/containers/storage/overlay/009283de8531232445ca115cd314fc883ddc765fe4fc09f574e78b4eddff4bbc/merged major:0 minor:1127 fsType:overlay blockSize:0} overlay_0-113:{mountpoint:/var/lib/containers/storage/overlay/54b490d2092b7a20b2b46cc6d1e80e43429858353d1b68e773c76e183c747859/merged major:0 minor:113 fsType:overlay blockSize:0} overlay_0-1142:{mountpoint:/var/lib/containers/storage/overlay/1369891cef934335aeab197223a32127d7e0618be5a5b580dcb13c211e95df20/merged major:0 minor:1142 fsType:overlay blockSize:0} overlay_0-1144:{mountpoint:/var/lib/containers/storage/overlay/27cc2684b9f915b9953d95fff465422cea13c62cd52973def6564f2331d669e7/merged major:0 minor:1144 fsType:overlay blockSize:0} overlay_0-1151:{mountpoint:/var/lib/containers/storage/overlay/2341f640a336d18fbe10f47f38e4e6531907b4d3a02b5f33e444f76afe3f6ae1/merged major:0 minor:1151 fsType:overlay blockSize:0} overlay_0-1153:{mountpoint:/var/lib/containers/storage/overlay/fce3c832a5e52d1676b1cd26aa400f5de42777381c5e8fbf0b716056f873d1ac/merged major:0 minor:1153 fsType:overlay blockSize:0} overlay_0-1161:{mountpoint:/var/lib/containers/storage/overlay/07eaff8beea88362a8af8816daed78b6a4c3ce6914645b5b5bb036c096e53a92/merged major:0 minor:1161 fsType:overlay blockSize:0} 
overlay_0-1179:{mountpoint:/var/lib/containers/storage/overlay/6904669de3a89aab9e48efcfa7942adce2d48369c26699e9afbd33ee859e0db6/merged major:0 minor:1179 fsType:overlay blockSize:0} overlay_0-118:{mountpoint:/var/lib/containers/storage/overlay/d3ad0a186c7f05b163738a7eb9dc7c09a90cd547d038fa0b70034e1cd2072517/merged major:0 minor:118 fsType:overlay blockSize:0} overlay_0-1196:{mountpoint:/var/lib/containers/storage/overlay/e2163d99a6d7587a036d6bc787807b9771aecc0eb8b0f059a66834d3881b91f2/merged major:0 minor:1196 fsType:overlay blockSize:0} overlay_0-1198:{mountpoint:/var/lib/containers/storage/overlay/e0834c0d65c96a9017c652e4fb08a772ae7310f5a6e5a4f22663a977acd3b7cb/merged major:0 minor:1198 fsType:overlay blockSize:0} overlay_0-1204:{mountpoint:/var/lib/containers/storage/overlay/d1b3a405c964f2b27ed016b58a8ec61b205fc39c3cb5ec542691a9c00f38cff0/merged major:0 minor:1204 fsType:overlay blockSize:0} overlay_0-1206:{mountpoint:/var/lib/containers/storage/overlay/ed8275512cab5ec5c10e8d035bad11adfc7feadb9363aa0df4f8bc43f18362c8/merged major:0 minor:1206 fsType:overlay blockSize:0} overlay_0-1212:{mountpoint:/var/lib/containers/storage/overlay/717423ab41fa6c0fbf68c5b131a6074f6b252ee69a6e9190129d2ec59cb1d601/merged major:0 minor:1212 fsType:overlay blockSize:0} overlay_0-1217:{mountpoint:/var/lib/containers/storage/overlay/f730c8f7d186f87ab013c825e82e756eb4b2c6e103f08940cefc858227cc46c3/merged major:0 minor:1217 fsType:overlay blockSize:0} overlay_0-131:{mountpoint:/var/lib/containers/storage/overlay/51dccf71c3a5959a7d3a9538de0b44cee3f9ffc7d40e7273b44498fd8635150c/merged major:0 minor:131 fsType:overlay blockSize:0} overlay_0-134:{mountpoint:/var/lib/containers/storage/overlay/c25848b6c5cb7d3ce35d01a03a5e51d1d2f15b5d996ef20673c6372ff6044e30/merged major:0 minor:134 fsType:overlay blockSize:0} overlay_0-138:{mountpoint:/var/lib/containers/storage/overlay/84fc2447ba494ba661a1b8d790db3f8c92dd408051f3d9197dd9d4b23279567e/merged major:0 minor:138 fsType:overlay blockSize:0} 
overlay_0-140:{mountpoint:/var/lib/containers/storage/overlay/06f94584812daf1238daca4aa49fcdb97f07104c9857082398e682a7e4cf2852/merged major:0 minor:140 fsType:overlay blockSize:0} overlay_0-147:{mountpoint:/var/lib/containers/storage/overlay/5d56da972d85bd0fdae711451e4093b7fb4ea8e9a5d5991d0a8e9c0b7661260d/merged major:0 minor:147 fsType:overlay blockSize:0} overlay_0-149:{mountpoint:/var/lib/containers/storage/overlay/1ccc57dcf734702732eee7f984985aeb4b41d07af9908836e7bd004973b11cb8/merged major:0 minor:149 fsType:overlay blockSize:0} overlay_0-154:{mountpoint:/var/lib/containers/storage/overlay/608e249b2c66ae516ecb4df05b2de931eb4f5172c483f257b9ff6200682a0ce3/merged major:0 minor:154 fsType:overlay blockSize:0} overlay_0-157:{mountpoint:/var/lib/containers/storage/overlay/c9fc264dea0ddc4568964b1e03b13cf4e5d3df2c83c9609430cb661e4f5193fd/merged major:0 minor:157 fsType:overlay blockSize:0} overlay_0-159:{mountpoint:/var/lib/containers/storage/overlay/7c5901030f143593f23dbe0efbd940aaf1bc4f314264c168185355f94f532dfc/merged major:0 minor:159 fsType:overlay blockSize:0} overlay_0-162:{mountpoint:/var/lib/containers/storage/overlay/0879e78bdaea787dd6fa51ebf4c8417b07d6b51ec9ab6cc63a442a2b1395b7e3/merged major:0 minor:162 fsType:overlay blockSize:0} overlay_0-164:{mountpoint:/var/lib/containers/storage/overlay/9e6adefc0cb22622d006c6d691dea46f0cd83c5433d2f06e0710cebc5aa326aa/merged major:0 minor:164 fsType:overlay blockSize:0} overlay_0-167:{mountpoint:/var/lib/containers/storage/overlay/7b929460980e12ed308c7c050594aac4dc9d11bf85cc3c57b46bb9184eb38440/merged major:0 minor:167 fsType:overlay blockSize:0} overlay_0-171:{mountpoint:/var/lib/containers/storage/overlay/0f125902d2a837b357d6626dd8e0ee59e98115855c2bb6652ba34f5f9bb20bfa/merged major:0 minor:171 fsType:overlay blockSize:0} overlay_0-172:{mountpoint:/var/lib/containers/storage/overlay/e5818b9ecd157e1b8625096642c9bc7452c7ef1a28e0a293f67532b1e61725c3/merged major:0 minor:172 fsType:overlay blockSize:0} 
overlay_0-179:{mountpoint:/var/lib/containers/storage/overlay/ad856f305e0ffc19e4b561dc6e8d714ce91d5fbd1346a74b5182e66e1d29cc4e/merged major:0 minor:179 fsType:overlay blockSize:0} overlay_0-184:{mountpoint:/var/lib/containers/storage/overlay/155dae4b1e369ccde04cb6a3e67c97deacb81dd588369d60dd0ac710c6c016f4/merged major:0 minor:184 fsType:overlay blockSize:0} overlay_0-189:{mountpoint:/var/lib/containers/storage/overlay/e9db5fc28ca52751496c19a6eb6ef9e8b659d240e872e129839abf9d752756f5/merged major:0 minor:189 fsType:overlay blockSize:0} overlay_0-194:{mountpoint:/var/lib/containers/storage/overlay/a66dee84642dd839a1d836d4418b0a45e590b1f167539c52e7f009b0dcd35aa5/merged major:0 minor:194 fsType:overlay blockSize:0} overlay_0-195:{mountpoint:/var/lib/containers/storage/overlay/c75f0faa325606ff2195997ef58781cc8d9ee1077ab76d4d27b9cc747ac3b260/merged major:0 minor:195 fsType:overlay blockSize:0} overlay_0-204:{mountpoint:/var/lib/containers/storage/overlay/196f9c252a14daac552f0688cea6e8de155f5725fa7dc417dff39a19e624a546/merged major:0 minor:204 fsType:overlay blockSize:0} overlay_0-263:{mountpoint:/var/lib/containers/storage/overlay/1bd651bb21fab0d929c192050635adf740d5b43fe80e1efb5f0187e82097e076/merged major:0 minor:263 fsType:overlay blockSize:0} overlay_0-264:{mountpoint:/var/lib/containers/storage/overlay/371a7eddc835a3c39fdf2654c6be820f5a9b8b189e88321527103397fe6a5fab/merged major:0 minor:264 fsType:overlay blockSize:0} overlay_0-271:{mountpoint:/var/lib/containers/storage/overlay/3e28e0cde913974aed071217113e56834df48e7872e42090eb288fc3f1bb09fb/merged major:0 minor:271 fsType:overlay blockSize:0} overlay_0-273:{mountpoint:/var/lib/containers/storage/overlay/ad30cac5a6bdb8411a1e3c5045136d6883bbdeeca510307fbb55b70e4d808f27/merged major:0 minor:273 fsType:overlay blockSize:0} overlay_0-275:{mountpoint:/var/lib/containers/storage/overlay/7a5ed6a4cd0a18a67a910833696f7ab3b1a705e0a09d66172ba94e3504544b05/merged major:0 minor:275 fsType:overlay blockSize:0} 
overlay_0-277:{mountpoint:/var/lib/containers/storage/overlay/27e4a3a239e25d6d8a1357f2335023a4fc238592ce32177f3b24e37145145ef3/merged major:0 minor:277 fsType:overlay blockSize:0} overlay_0-283:{mountpoint:/var/lib/containers/storage/overlay/2a4dff9bcd9592e74f08f37d55bdd055e8109852ce6a0ab247e730c3169660f5/merged major:0 minor:283 fsType:overlay blockSize:0} overlay_0-285:{mountpoint:/var/lib/containers/storage/overlay/bb8b29fcfc8b61164baa54562363f2548396770d4622d6a9498c700c57ca4129/merged major:0 minor:285 fsType:overlay blockSize:0} overlay_0-289:{mountpoint:/var/lib/containers/storage/overlay/af71e9b80b157f9a2b206e3a2f29d4053bf311cffc5790a280042bf049041374/merged major:0 minor:289 fsType:overlay blockSize:0} overlay_0-291:{mountpoint:/var/lib/containers/storage/overlay/e6f4de0fb909434a273de4322398741c7f0779eef9066140b4f07742eff10976/merged major:0 minor:291 fsType:overlay blockSize:0} overlay_0-293:{mountpoint:/var/lib/containers/storage/overlay/abecdcd1e05363f2a3fe4f73b1e8e46d5e02439c4cd86626fafe7ce3afa96797/merged major:0 minor:293 fsType:overlay blockSize:0} overlay_0-297:{mountpoint:/var/lib/containers/storage/overlay/fd1718655fea47539b218e27cf58a997566a62d1035078d878b75d54e9fc45a6/merged major:0 minor:297 fsType:overlay blockSize:0} overlay_0-299:{mountpoint:/var/lib/containers/storage/overlay/c171dd71fa8497a454ab6e4967d9d06c2476e449be4d4333b244ddc9fc9f8c44/merged major:0 minor:299 fsType:overlay blockSize:0} overlay_0-301:{mountpoint:/var/lib/containers/storage/overlay/153371f9dbef15221c909ebf61296d40c94b927eb169f643a2337cec895aaa7b/merged major:0 minor:301 fsType:overlay blockSize:0} overlay_0-305:{mountpoint:/var/lib/containers/storage/overlay/0c8515ce3656b24bc76ec03311ceec2a059f9a8a08c5356a1dc69267643e88bc/merged major:0 minor:305 fsType:overlay blockSize:0} overlay_0-310:{mountpoint:/var/lib/containers/storage/overlay/f5234f810bb0f044bb805594c8486d6942fa879600bfdfa9d1d3c8edd11bfa53/merged major:0 minor:310 fsType:overlay blockSize:0} 
overlay_0-312:{mountpoint:/var/lib/containers/storage/overlay/89fafc1999741d95cb15851f6d025e3200ae2f5be471bb333d1463df3d43e316/merged major:0 minor:312 fsType:overlay blockSize:0} overlay_0-315:{mountpoint:/var/lib/containers/storage/overlay/88b9d366f44d9f3009f61f61c979314debd9447474a776fca32656026e9c7704/merged major:0 minor:315 fsType:overlay blockSize:0} overlay_0-316:{mountpoint:/var/lib/containers/storage/overlay/1ee41a3957b09beadfdf1edde020d381c83fd3b8de4d6b588a91d2d53f9791d3/merged major:0 minor:316 fsType:overlay blockSize:0} overlay_0-319:{mountpoint:/var/lib/containers/storage/overlay/fd53c8fd6570f3fc1c8eb121380f94e27152ae1f0eceef95194caaf5652d3980/merged major:0 minor:319 fsType:overlay blockSize:0} overlay_0-322:{mountpoint:/var/lib/containers/storage/overlay/2b206707197ad19a4f5448471325c3a3c3a5ddb1abf42ba263713b66329fb7ae/merged major:0 minor:322 fsType:overlay blockSize:0} overlay_0-325:{mountpoint:/var/lib/containers/storage/overlay/b1ce7b59a43f0fc97d3a5671356dfe6895398c2632be07b05f7a1d2d1c8fd1fc/merged major:0 minor:325 fsType:overlay blockSize:0} overlay_0-327:{mountpoint:/var/lib/containers/storage/overlay/47524a5e26638da9a4477c737673c8c5db24ba4d01ca78f523bbc3a38875f946/merged major:0 minor:327 fsType:overlay blockSize:0} overlay_0-334:{mountpoint:/var/lib/containers/storage/overlay/1e68df756f0d3c9e2bdb781a7ccc7f68d7d0c18884944d84a27e20eefa1449e4/merged major:0 minor:334 fsType:overlay blockSize:0} overlay_0-336:{mountpoint:/var/lib/containers/storage/overlay/e9057ed967d152b5f0752e5bfce9a02cd8c5aa880d2d75b32e1d39ab23593ce0/merged major:0 minor:336 fsType:overlay blockSize:0} overlay_0-338:{mountpoint:/var/lib/containers/storage/overlay/0d00d285c67a9a29350e77d75f4e8584b69561d36c8c265f3bceaa7328dd686b/merged major:0 minor:338 fsType:overlay blockSize:0} overlay_0-344:{mountpoint:/var/lib/containers/storage/overlay/e24f5705c8aaa413b3e882f5f908fc041394acbf0a006f42bc42fa12a720117f/merged major:0 minor:344 fsType:overlay blockSize:0} 
overlay_0-348:{mountpoint:/var/lib/containers/storage/overlay/295e1a7ab1f11cf1a4267df83de386f67f22db2da7f8e1a0fd7b51fc7bfa21e8/merged major:0 minor:348 fsType:overlay blockSize:0} overlay_0-349:{mountpoint:/var/lib/containers/storage/overlay/ab948b5bf95697d8ec27224896dd7df9f058f518b8c12cf9812bff8e967ed51b/merged major:0 minor:349 fsType:overlay blockSize:0} overlay_0-369:{mountpoint:/var/lib/containers/storage/overlay/6be50a2cc46d9d3742fef34281cb38a875521030e7d27dcbbf8359acdcfe2c9a/merged major:0 minor:369 fsType:overlay blockSize:0} overlay_0-371:{mountpoint:/var/lib/containers/storage/overlay/99c9e036b821b9f1895e136c660ece54c52ed59de8397c71d575bed43f177519/merged major:0 minor:371 fsType:overlay blockSize:0} overlay_0-378:{mountpoint:/var/lib/containers/storage/overlay/abc98a8df66f1432ec51bc670e926aaf31f4ddc0f3ee5e6240378c98408ed490/merged major:0 minor:378 fsType:overlay blockSize:0} overlay_0-380:{mountpoint:/var/lib/containers/storage/overlay/4677abb42854abec9f55a79b8893905b533c11dbd10f21be4534b089a970cf9f/merged major:0 minor:380 fsType:overlay blockSize:0} overlay_0-383:{mountpoint:/var/lib/containers/storage/overlay/707e62d7e68b37b35b879c9e2a50584fdc3c695da3d4684fbe2bdd1ae4e1da13/merged major:0 minor:383 fsType:overlay blockSize:0} overlay_0-386:{mountpoint:/var/lib/containers/storage/overlay/ac0c9d70c95c45e8267321d2acd868f2162738db5fd841882e0c47947890758c/merged major:0 minor:386 fsType:overlay blockSize:0} overlay_0-389:{mountpoint:/var/lib/containers/storage/overlay/76657b2fa58acf9156243c80157d30744d13660434aab90fcb9e28b5b37c301c/merged major:0 minor:389 fsType:overlay blockSize:0} overlay_0-393:{mountpoint:/var/lib/containers/storage/overlay/42bbbcbe670ced248e551deffcfe2ccbe890cfe8f46ed9f1b753bb4d9401e2e2/merged major:0 minor:393 fsType:overlay blockSize:0} overlay_0-41:{mountpoint:/var/lib/containers/storage/overlay/cc0756f0e059b4f77ff76f97c63afb6ff1ec613f4ba21da28e48d332d8bf62a5/merged major:0 minor:41 fsType:overlay blockSize:0} 
overlay_0-413:{mountpoint:/var/lib/containers/storage/overlay/6af1fbd8298dd7c3fc51640fa336ed185edaea0c883fcc8ffef25f121d178333/merged major:0 minor:413 fsType:overlay blockSize:0} overlay_0-414:{mountpoint:/var/lib/containers/storage/overlay/004739ef11a5384b2cf40b8f77208895569b3b6039e2e8924f4a143ce1290b6b/merged major:0 minor:414 fsType:overlay blockSize:0} overlay_0-417:{mountpoint:/var/lib/containers/storage/overlay/56eeeff1a1c3bb39fd1783e19871e414af2235a6ef3df72a1cd09c1856f2afd2/merged major:0 minor:417 fsType:overlay blockSize:0} overlay_0-419:{mountpoint:/var/lib/containers/storage/overlay/644d328a15be4f9148583e2ac4c4f59468955f7309ad9d79248a6a5343097553/merged major:0 minor:419 fsType:overlay blockSize:0} overlay_0-428:{mountpoint:/var/lib/containers/storage/overlay/e415211956a3a7dd86aaaac7ecc0cab79b862fa618aea6dacc72cc63263237d5/merged major:0 minor:428 fsType:overlay blockSize:0} overlay_0-429:{mountpoint:/var/lib/containers/storage/overlay/411523577c9b5fe2f4d7fa80c7aaf4ade9e6794f684454f9f3b5e7dea686e23a/merged major:0 minor:429 fsType:overlay blockSize:0} overlay_0-43:{mountpoint:/var/lib/containers/storage/overlay/9ee90cdc8d23ee4f65522455c1bd3a01c50cb1333db2566797b589e95624ddd0/merged major:0 minor:43 fsType:overlay blockSize:0} overlay_0-437:{mountpoint:/var/lib/containers/storage/overlay/d87012fb0874713cc8a1e33a32628dcfbb27c31a77ecd363cb5a0b002ca5f494/merged major:0 minor:437 fsType:overlay blockSize:0} overlay_0-438:{mountpoint:/var/lib/containers/storage/overlay/7d397cf455080e5fb81e8007ee6faecfe87f639c44db94154609929fd1e97c19/merged major:0 minor:438 fsType:overlay blockSize:0} overlay_0-440:{mountpoint:/var/lib/containers/storage/overlay/00301c26d10a7a2dcfeca252345f59d7092a61eddd4b5dbb31e28d4a39e71ec7/merged major:0 minor:440 fsType:overlay blockSize:0} overlay_0-454:{mountpoint:/var/lib/containers/storage/overlay/7b7331ba9a934afd80fe5d6759bb47ba57056a9cad406baf4b59e1a57809e6d4/merged major:0 minor:454 fsType:overlay blockSize:0} 
overlay_0-457:{mountpoint:/var/lib/containers/storage/overlay/4356fa7a5794f8a5951cff37f0c11588ffc3a000b85c90f31fb8ed101f209ff5/merged major:0 minor:457 fsType:overlay blockSize:0} overlay_0-458:{mountpoint:/var/lib/containers/storage/overlay/c8dfe98afb3f6b6458124242579970dfac03e78f757d6a2c3477ef56c77212a1/merged major:0 minor:458 fsType:overlay blockSize:0} overlay_0-46:{mountpoint:/var/lib/containers/storage/overlay/cd133f9c449c766b3bb6200179486ae5158558787c9635afe5d05829b8a76783/merged major:0 minor:46 fsType:overlay blockSize:0} overlay_0-465:{mountpoint:/var/lib/containers/storage/overlay/efcd09dd1e2a71e7d2efa38ffbb665f46f1d405439505fd2d0f06e4173bcb107/merged major:0 minor:465 fsType:overlay blockSize:0} overlay_0-474:{mountpoint:/var/lib/containers/storage/overlay/79cd5d86bc10a2ff7d6e9fb9f529ca059030fe2c657bffdbdddbde3f06055de4/merged major:0 minor:474 fsType:overlay blockSize:0} overlay_0-488:{mountpoint:/var/lib/containers/storage/overlay/615be13a682bb4feb7c8bf6b0f6a81e80c550b02c1dacfd83c356e1b3fe00f74/merged major:0 minor:488 fsType:overlay blockSize:0} overlay_0-490:{mountpoint:/var/lib/containers/storage/overlay/6824ea8c51b42a9331186012e386299ff7f2a23af2297594dc44e5c0e27cf317/merged major:0 minor:490 fsType:overlay blockSize:0} overlay_0-502:{mountpoint:/var/lib/containers/storage/overlay/86344d82a337c101716addb1254f81076f90b1e82f860127e1cadfe0d25d8894/merged major:0 minor:502 fsType:overlay blockSize:0} overlay_0-507:{mountpoint:/var/lib/containers/storage/overlay/c1e9fbc83d44718d223e8be04d68ecc3869ea9951540727270d51a97bfe1b02b/merged major:0 minor:507 fsType:overlay blockSize:0} overlay_0-51:{mountpoint:/var/lib/containers/storage/overlay/2632a378e3956c607bc19316f75dabeabe7a9e16f3a9f2b8ca23c99e76db6ba3/merged major:0 minor:51 fsType:overlay blockSize:0} overlay_0-510:{mountpoint:/var/lib/containers/storage/overlay/2a3755526fb4bc9fb3877cc07bfcfe786f750abaf6fc7d6d2d728e239c1d000a/merged major:0 minor:510 fsType:overlay blockSize:0} 
overlay_0-512:{mountpoint:/var/lib/containers/storage/overlay/1952373b189e698af1317ed85afd7af47eb353b3a0869079a30baf6ccedb687a/merged major:0 minor:512 fsType:overlay blockSize:0} overlay_0-514:{mountpoint:/var/lib/containers/storage/overlay/9d7624b0517e374eb18fdd5fb27e80ad6101d4c82cb387ab369ca3aba0cb7ee4/merged major:0 minor:514 fsType:overlay blockSize:0} overlay_0-516:{mountpoint:/var/lib/containers/storage/overlay/b42ef9f6e00465f6643643c0a56d29612f1ba638c65defe0ab1d6c71a1051391/merged major:0 minor:516 fsType:overlay blockSize:0} overlay_0-517:{mountpoint:/var/lib/containers/storage/overlay/6ee624093b4a9a0324fe7e6db23a4f276ee74c32e2b435fb7d9d6b667f25f76e/merged major:0 minor:517 fsType:overlay blockSize:0} overlay_0-523:{mountpoint:/var/lib/containers/storage/overlay/608f2370f25addf8d127c5f6b66606a71f0bced5d0580983232d169f2e1028dd/merged major:0 minor:523 fsType:overlay blockSize:0} overlay_0-525:{mountpoint:/var/lib/containers/storage/overlay/664ab025c003b8cf82f3e6a96fec73cca9a1faef343b37af1285881a029a0b42/merged major:0 minor:525 fsType:overlay blockSize:0} overlay_0-53:{mountpoint:/var/lib/containers/storage/overlay/2a51ad64b5bff2603b0005917422bea28ca4a2c90168c44827c5080c0b2a1f68/merged major:0 minor:53 fsType:overlay blockSize:0} overlay_0-536:{mountpoint:/var/lib/containers/storage/overlay/4827bab409e3cf504236220db6bc414097bd7a40d78a7c80efac379c8e47537c/merged major:0 minor:536 fsType:overlay blockSize:0} overlay_0-555:{mountpoint:/var/lib/containers/storage/overlay/01495ad770ac59d87734926a7ca79313e12520de7e36191d4159f673a88e7da4/merged major:0 minor:555 fsType:overlay blockSize:0} overlay_0-56:{mountpoint:/var/lib/containers/storage/overlay/02030c2324b999e08ce5066d2b6ac7623d7b666f22b63c9a4d4e0bb71cfb9b65/merged major:0 minor:56 fsType:overlay blockSize:0} overlay_0-580:{mountpoint:/var/lib/containers/storage/overlay/088508f5da02bf1b61ace0e7b22401aaf48885700ebbd890f853b8662da67de9/merged major:0 minor:580 fsType:overlay blockSize:0} 
overlay_0-584:{mountpoint:/var/lib/containers/storage/overlay/f57e919a6f6a080aa63f42af8f96580a21cf6591c405e8f87b435d64deab11f1/merged major:0 minor:584 fsType:overlay blockSize:0} overlay_0-585:{mountpoint:/var/lib/containers/storage/overlay/23c9532e42001b85b9ccfed2f66e5b584c2df14f330e3951bc9c72bc92cc2ae1/merged major:0 minor:585 fsType:overlay blockSize:0} overlay_0-587:{mountpoint:/var/lib/containers/storage/overlay/bd49d9bcd18790e191b575a555373651ca7dd5f08ab16fc1b08e53d860a1f1f1/merged major:0 minor:587 fsType:overlay blockSize:0} overlay_0-589:{mountpoint:/var/lib/containers/storage/overlay/8d45e3160b4663ab45e03526d543902d28b74c93ddde5c2f6b638ee55ec118d5/merged major:0 minor:589 fsType:overlay blockSize:0} overlay_0-59:{mountpoint:/var/lib/containers/storage/overlay/799820eb1d50ed9434852121d9db6fc8ddaa8784acb5901609544f34eec10cfa/merged major:0 minor:59 fsType:overlay blockSize:0} overlay_0-591:{mountpoint:/var/lib/containers/storage/overlay/03cc96521ccaeb814a3c0ad7bd347155c445eebcfce8415df0cbd9bbe98ace14/merged major:0 minor:591 fsType:overlay blockSize:0} overlay_0-593:{mountpoint:/var/lib/containers/storage/overlay/4ee3f0280e3376f8219e7f332bfff577a2731aa072b72acdb3e95b2aa78de25d/merged major:0 minor:593 fsType:overlay blockSize:0} overlay_0-595:{mountpoint:/var/lib/containers/storage/overlay/82334de60ef0690981b2594e97e1ed0aef0dfa38090541905751fd516140c244/merged major:0 minor:595 fsType:overlay blockSize:0} overlay_0-597:{mountpoint:/var/lib/containers/storage/overlay/ce5bdf5643104c0e69c3f170abdabf758c897bddee04913ac78999ca1f4052b1/merged major:0 minor:597 fsType:overlay blockSize:0} overlay_0-601:{mountpoint:/var/lib/containers/storage/overlay/235b14cd002d1d3176609ee318eab89d457ee9d2c98c5b886e7cd8077cb2691b/merged major:0 minor:601 fsType:overlay blockSize:0} overlay_0-605:{mountpoint:/var/lib/containers/storage/overlay/05d15f56045acd7ff2e9f81734818a131edce2cfd25daad6b6f23522506883b7/merged major:0 minor:605 fsType:overlay blockSize:0} 
overlay_0-61:{mountpoint:/var/lib/containers/storage/overlay/f99bba9d907c35743a1cb9c19a1c0b4e5440d9b824e73f55c15973366ca61f6b/merged major:0 minor:61 fsType:overlay blockSize:0} overlay_0-620:{mountpoint:/var/lib/containers/storage/overlay/48763af0c07a6f4f62a0cca5394025933f38517ac43d2e672a7bfab38500bf0b/merged major:0 minor:620 fsType:overlay blockSize:0} overlay_0-622:{mountpoint:/var/lib/containers/storage/overlay/4180d558f7f93769d16b68a2ea2975cacd2c70e891981fb6f7216f0f6ecc3c54/merged major:0 minor:622 fsType:overlay blockSize:0} overlay_0-635:{mountpoint:/var/lib/containers/storage/overlay/305f31c2e423d34702adadfc5cfc87d0e8b4465d3345f96a5760d5dffd221dea/merged major:0 minor:635 fsType:overlay blockSize:0} overlay_0-637:{mountpoint:/var/lib/containers/storage/overlay/3151e4552c4047ae94d9ad6974ee2980fa644b0b91992a27c841e2085a356608/merged major:0 minor:637 fsType:overlay blockSize:0} overlay_0-647:{mountpoint:/var/lib/containers/storage/overlay/0e5f2b069ab3fb4ba3126ade9cddb43346a1b241089c1f42a05268b64fcf3717/merged major:0 minor:647 fsType:overlay blockSize:0} overlay_0-648:{mountpoint:/var/lib/containers/storage/overlay/587a4a68d865da2bf66d64c112d59daefca45b15de9f0003c49e6c7c45624d84/merged major:0 minor:648 fsType:overlay blockSize:0} overlay_0-650:{mountpoint:/var/lib/containers/storage/overlay/79b135ad177e77b489d53bebab668e52f5de0a9903792c30b5d69c6279d16a7b/merged major:0 minor:650 fsType:overlay blockSize:0} overlay_0-656:{mountpoint:/var/lib/containers/storage/overlay/9010c68f7aa26ac2f44d5a98c5aa936651653a3fb4afab6d3f598ddc677c7365/merged major:0 minor:656 fsType:overlay blockSize:0} overlay_0-66:{mountpoint:/var/lib/containers/storage/overlay/d2ef9a4c56d305d8697c24203b21ff2969406ec10e797dc3dd52c911c375454a/merged major:0 minor:66 fsType:overlay blockSize:0} overlay_0-677:{mountpoint:/var/lib/containers/storage/overlay/56111d892be1b7a3c4afe8f989b2a09bcb3d2573316ef7484c0f0757320d72b8/merged major:0 minor:677 fsType:overlay blockSize:0} 
overlay_0-679:{mountpoint:/var/lib/containers/storage/overlay/6941852e9fa1a56d677cf3b6b47394a7a93636b9760a43418343d709885d21b5/merged major:0 minor:679 fsType:overlay blockSize:0} overlay_0-686:{mountpoint:/var/lib/containers/storage/overlay/ab8357b2ccccb0e668e5f805f40a53aaab2e34c66ccfe5836c2d1cd6b4d04bc6/merged major:0 minor:686 fsType:overlay blockSize:0} overlay_0-71:{mountpoint:/var/lib/containers/storage/overlay/7f926fc91a764d27a82a3f109134cc36713ff7bfdf4cae788c99af76db267b05/merged major:0 minor:71 fsType:overlay blockSize:0} overlay_0-715:{mountpoint:/var/lib/containers/storage/overlay/432d4c75652f9b40c48a189570c9bca2035b81ee93e4296a944b733798f17607/merged major:0 minor:715 fsType:overlay blockSize:0} overlay_0-722:{mountpoint:/var/lib/containers/storage/overlay/4f4c5e86db042d82269c0585a06e4287d9d2d011c0a48a5c8a29d0f0e5a5c2fc/merged major:0 minor:722 fsType:overlay blockSize:0} overlay_0-723:{mountpoint:/var/lib/containers/storage/overlay/abe568896f8877bf888660be4310a89c608774a8856ac0fc456b233372923e11/merged major:0 minor:723 fsType:overlay blockSize:0} overlay_0-737:{mountpoint:/var/lib/containers/storage/overlay/bc09568185fe4be61992633c0db2ff639a65593c39010fe1579306f0d82f3818/merged major:0 minor:737 fsType:overlay blockSize:0} overlay_0-739:{mountpoint:/var/lib/containers/storage/overlay/1f2b929db244bf16f0a68206d5dcef2b54481614b8fd0c155b53af067fb9d2ee/merged major:0 minor:739 fsType:overlay blockSize:0} overlay_0-750:{mountpoint:/var/lib/containers/storage/overlay/771e32f136325f3814ad33113120208036e6865c80f8de8090b283675e592e01/merged major:0 minor:750 fsType:overlay blockSize:0} overlay_0-761:{mountpoint:/var/lib/containers/storage/overlay/e71cb2c9f53f52204c1a046f613e738434a300b50222afca5cb1eb6043f4b009/merged major:0 minor:761 fsType:overlay blockSize:0} overlay_0-763:{mountpoint:/var/lib/containers/storage/overlay/b29c8ce90f7a5487af059013402e565181e81b6468240781006f69ddaafba430/merged major:0 minor:763 fsType:overlay blockSize:0} 
overlay_0-766:{mountpoint:/var/lib/containers/storage/overlay/ace5ad9047bd190c3f7a5e4503f2889cb495b6d2b4d4d25e15a394adee094793/merged major:0 minor:766 fsType:overlay blockSize:0} overlay_0-782:{mountpoint:/var/lib/containers/storage/overlay/046c42b396660cb629fc55db17169672ff99d28c9416197b23300a243d437000/merged major:0 minor:782 fsType:overlay blockSize:0} overlay_0-786:{mountpoint:/var/lib/containers/storage/overlay/8d32413216d66b6f0b281d2ce4b8aeed4feeb4486715cc3bdfe2fda5fc20e76f/merged major:0 minor:786 fsType:overlay blockSize:0} overlay_0-787:{mountpoint:/var/lib/containers/storage/overlay/5b562cdda5a3623af516ea1894d52f4ee056caa0e353385a900c298fbd78a4bb/merged major:0 minor:787 fsType:overlay blockSize:0} overlay_0-789:{mountpoint:/var/lib/containers/storage/overlay/716226046f7d2236ae80430ed1e5328be3abebee2c19f75b314a74cdb5255d0b/merged major:0 minor:789 fsType:overlay blockSize:0} overlay_0-796:{mountpoint:/var/lib/containers/storage/overlay/2bb14c1d30d08a23a23afdd2f56d24a552e60260f528a166868b761f024b0561/merged major:0 minor:796 fsType:overlay blockSize:0} overlay_0-80:{mountpoint:/var/lib/containers/storage/overlay/fd5af97280c919f860a0dc1202f1f4cdf707993ef49f449692e245322e38bf6b/merged major:0 minor:80 fsType:overlay blockSize:0} overlay_0-803:{mountpoint:/var/lib/containers/storage/overlay/9b926f7b5347862540b9d8f4a5b0f00b152c555c1253cc24dd59f3da0121e341/merged major:0 minor:803 fsType:overlay blockSize:0} overlay_0-81:{mountpoint:/var/lib/containers/storage/overlay/f74f725af480005cd4743a2c9f7ea3cd518838568a28e864c830ad79c44074fb/merged major:0 minor:81 fsType:overlay blockSize:0} overlay_0-820:{mountpoint:/var/lib/containers/storage/overlay/0349407952a12d54b69ba7997ddf210eb8b9089006e6a2a218db00d855eb73c0/merged major:0 minor:820 fsType:overlay blockSize:0} overlay_0-83:{mountpoint:/var/lib/containers/storage/overlay/eb54917036e4bd11a8d0dac32eb52e415798ca75a579ccd24900ab7e2ccda372/merged major:0
minor:83 fsType:overlay blockSize:0} overlay_0-840:{mountpoint:/var/lib/containers/storage/overlay/20f76636553263da0a1f97565c79904969d6a5dd98fde9cf528387abffea3768/merged major:0 minor:840 fsType:overlay blockSize:0} overlay_0-842:{mountpoint:/var/lib/containers/storage/overlay/b5149d5a29ffd0cf3d97a7fe3379bf173e21da0f0b27ec62de19df9d0f07d4a9/merged major:0 minor:842 fsType:overlay blockSize:0} overlay_0-850:{mountpoint:/var/lib/containers/storage/overlay/08a43e55480a50f6dfebe37ae7a76ad23ca1864ced5b62f42b8d5cd0c52b48b5/merged major:0 minor:850 fsType:overlay blockSize:0} overlay_0-852:{mountpoint:/var/lib/containers/storage/overlay/08b24c3c3a7896e4e54e98e672bc4666717ceaa9e4a740ea00f7cc475a035d0c/merged major:0 minor:852 fsType:overlay blockSize:0} overlay_0-853:{mountpoint:/var/lib/containers/storage/overlay/adea0f2c91a556191cc1d8f14598fd14acf664af8d1c101a2e1c709a55f54a34/merged major:0 minor:853 fsType:overlay blockSize:0} overlay_0-855:{mountpoint:/var/lib/containers/storage/overlay/6a1954644a7b2aaec5745ce346c355360d2874bcba66a1fbde2acb0ebd18c7a3/merged major:0 minor:855 fsType:overlay blockSize:0} overlay_0-858:{mountpoint:/var/lib/containers/storage/overlay/d6ac8545b7b9e2f83e2982541e6857b8729acda3b1ffe7918f2625bb87404d05/merged major:0 minor:858 fsType:overlay blockSize:0} overlay_0-86:{mountpoint:/var/lib/containers/storage/overlay/ec95061085ae7a4111d6ff5ff6dcb522a95786e401586c20acd1f1f9bbf5b9aa/merged major:0 minor:86 fsType:overlay blockSize:0} overlay_0-868:{mountpoint:/var/lib/containers/storage/overlay/a82dafd2159c6d595249ebba68092886f6d34090916779d106547161d9ddcf0a/merged major:0 minor:868 fsType:overlay blockSize:0} overlay_0-872:{mountpoint:/var/lib/containers/storage/overlay/99e655ae405c4107308d31bd4b1f7aeb3aadc8f4d25afcd578e7a430ae92f74b/merged major:0 minor:872 fsType:overlay blockSize:0} overlay_0-877:{mountpoint:/var/lib/containers/storage/overlay/0bde92d4804b89ac4870b0e8c1b1640ce34814212a6638473d8e9e218748074f/merged major:0 minor:877 
fsType:overlay blockSize:0} overlay_0-88:{mountpoint:/var/lib/containers/storage/overlay/0646b0e6e65a38fed15b2acf3d753d19339aa962ebd679c505e7451e7d33b749/merged major:0 minor:88 fsType:overlay blockSize:0} overlay_0-882:{mountpoint:/var/lib/containers/storage/overlay/07b8dff5b1787bf3eb336d1d458c1bc2d4276632917bff1fe76b3b3426f993c0/merged major:0 minor:882 fsType:overlay blockSize:0} overlay_0-888:{mountpoint:/var/lib/containers/storage/overlay/492ba935c4fc2a9ed91f5bac1856aede27c4ba3a405bb061c4b9e34ce198c2d8/merged major:0 minor:888 fsType:overlay blockSize:0} overlay_0-894:{mountpoint:/var/lib/containers/storage/overlay/b58a8dcb5e7c5a51d47705f824cc5f2a77cac1b188ec1b7140d1bac8977b1e89/merged major:0 minor:894 fsType:overlay blockSize:0} overlay_0-90:{mountpoint:/var/lib/containers/storage/overlay/a41d124240e70d0f7b1efba1160857184a6c9a8d70c0ec0eb5fb3841ec1cb7c5/merged major:0 minor:90 fsType:overlay blockSize:0} overlay_0-903:{mountpoint:/var/lib/containers/storage/overlay/4accdc23bf486867208d4efa2527361c951aa16d70196308ae90bad9c811fa5f/merged major:0 minor:903 fsType:overlay blockSize:0} overlay_0-905:{mountpoint:/var/lib/containers/storage/overlay/a87b48147732e0bcbc8b30703db6b43dbc28d8c296b50bea49e67e24c45c2d4a/merged major:0 minor:905 fsType:overlay blockSize:0} overlay_0-907:{mountpoint:/var/lib/containers/storage/overlay/f7251170a63653633f7d0f9be2e8741bd131a6e28a9fdb0b7937a90d48d9859f/merged major:0 minor:907 fsType:overlay blockSize:0} overlay_0-92:{mountpoint:/var/lib/containers/storage/overlay/37f026786f01426b74b0112fe27253536fb075d4258bef4f7f15ac29ce438e71/merged major:0 minor:92 fsType:overlay blockSize:0} overlay_0-921:{mountpoint:/var/lib/containers/storage/overlay/5b8f7cea801c397631872aede4c89683b38ad4112860f4e23b46b00f5bb6b1fc/merged major:0 minor:921 fsType:overlay blockSize:0} overlay_0-931:{mountpoint:/var/lib/containers/storage/overlay/cca1a1e0b3472d801026437cbfd4d8af7d0f2e92bb2d11d695e45751ea19f9d2/merged major:0 minor:931 fsType:overlay 
blockSize:0} overlay_0-932:{mountpoint:/var/lib/containers/storage/overlay/b81b3e214b3a7e57dc7f66a55e3b1edcad8faa1ce49baea66dfc5e17c4a147fe/merged major:0 minor:932 fsType:overlay blockSize:0} overlay_0-941:{mountpoint:/var/lib/containers/storage/overlay/8985318875aa36f1d2b268729bd4996126968c978e139d0a13e04764d308ad23/merged major:0 minor:941 fsType:overlay blockSize:0} overlay_0-95:{mountpoint:/var/lib/containers/storage/overlay/38045010ba869db74f7b22fa80e60a6b9a1b4b9c5394eaf30153ee9be9acac21/merged major:0 minor:95 fsType:overlay blockSize:0} overlay_0-962:{mountpoint:/var/lib/containers/storage/overlay/9db78676e706377af83f9a9010f9690eda92453afb126f2cf44953cb61708455/merged major:0 minor:962 fsType:overlay blockSize:0} overlay_0-969:{mountpoint:/var/lib/containers/storage/overlay/0eb9b7a0f98b90207581b5974ed8365b4eb64cef10a44869636528332c6f2ede/merged major:0 minor:969 fsType:overlay blockSize:0} overlay_0-971:{mountpoint:/var/lib/containers/storage/overlay/72e8539b2373a20ef350cdc956ac2190fe1ddab0c22c668832a5ef2b5e49ad4e/merged major:0 minor:971 fsType:overlay blockSize:0} overlay_0-990:{mountpoint:/var/lib/containers/storage/overlay/70b36fbb8e866318104e34e1639b75184c549c57a0580f552f76532cefdda267/merged major:0 minor:990 fsType:overlay blockSize:0} overlay_0-995:{mountpoint:/var/lib/containers/storage/overlay/46bbbb24716ef9b034335a8de055c394f6e11d4304c98339f0980a26700bface/merged major:0 minor:995 fsType:overlay blockSize:0} overlay_0-997:{mountpoint:/var/lib/containers/storage/overlay/6c3e5d89736f4920c78164a6d4d49cb16aebc887b30001804068dc459a86d0f9/merged major:0 minor:997 fsType:overlay blockSize:0} overlay_0-999:{mountpoint:/var/lib/containers/storage/overlay/0ec5aceb393647283d978260c69034b19ebf4608a98222a62ed72663921c4f94/merged major:0 minor:999 fsType:overlay blockSize:0}] Mar 18 09:04:07.155945 master-0 kubenswrapper[28766]: I0318 09:04:07.154169 28766 manager.go:217] Machine: {Timestamp:2026-03-18 09:04:07.152869574 +0000 UTC m=+0.167128280 
CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2800000 MemoryCapacity:33654128640 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:462ae4bbdf8a4211a5b04e094f4702bb SystemUUID:462ae4bb-df8a-4211-a5b0-4e094f4702bb BootID:8f184f3d-61e6-4234-a551-2580e849051e Filesystems:[{Device:/var/lib/kubelet/pods/e7b72267-fc08-41ed-a92b-9fca7372aba6/volumes/kubernetes.io~projected/kube-api-access-dwrdc DeviceMajor:0 DeviceMinor:255 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/16d633c5-e0aa-4fb6-83e0-a2e976334406/volumes/kubernetes.io~projected/kube-api-access-x9w7l DeviceMajor:0 DeviceMinor:137 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-650 DeviceMajor:0 DeviceMinor:650 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/ffc5379c-651f-490c-90f4-1285b9093596/volumes/kubernetes.io~projected/kube-api-access-4vfrs DeviceMajor:0 DeviceMinor:832 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/32faaf71e97855a1cb6aa3bd19d52c689531407fd638810606403df329a94675/userdata/shm DeviceMajor:0 DeviceMinor:91 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-393 DeviceMajor:0 DeviceMinor:393 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/07a4fd92-0fd1-4688-b2db-de615d75971e/volumes/kubernetes.io~projected/kube-api-access-5ngk7 DeviceMajor:0 DeviceMinor:103 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/26ecaeebed65d3cea64cdc63150668e13ecd2fef68a18e11955a52673f9e9975/userdata/shm DeviceMajor:0 DeviceMinor:504 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-637 DeviceMajor:0 
DeviceMinor:637 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-437 DeviceMajor:0 DeviceMinor:437 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/2207df9e-f21e-4c30-98d5-248ae99c245e/volume-subpaths/run-systemd/ovnkube-controller/6 DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:overlay_0-386 DeviceMajor:0 DeviceMinor:386 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/1794b726-5c0d-4a72-8ddd-418a2cbd8ded/volumes/kubernetes.io~projected/kube-api-access-gjq4w DeviceMajor:0 DeviceMinor:774 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/5320a1da-262a-4b1b-93b4-1df9d4c26eec/volumes/kubernetes.io~secret/secret-metrics-server-tls DeviceMajor:0 DeviceMinor:1133 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/8a6ab2be-d018-4fd5-bfbb-6b88aec28663/volumes/kubernetes.io~projected/kube-api-access DeviceMajor:0 DeviceMinor:229 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-378 DeviceMajor:0 DeviceMinor:378 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/336e741d-ac9a-4b94-9fbb-c9010e37c2d0/volumes/kubernetes.io~secret/proxy-tls DeviceMajor:0 DeviceMinor:977 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-273 DeviceMajor:0 DeviceMinor:273 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-301 DeviceMajor:0 DeviceMinor:301 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/43fbd379-dd1e-4287-bd76-fd3ec51cde43/volumes/kubernetes.io~projected/ca-certs DeviceMajor:0 DeviceMinor:476 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-53 DeviceMajor:0 DeviceMinor:53 Capacity:214143315968 Type:vfs Inodes:104594880 
HasInodes:true} {Device:overlay_0-1153 DeviceMajor:0 DeviceMinor:1153 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/97730ec2-e6f1-4f8c-b85c-3c10623d06ce/volumes/kubernetes.io~secret/cluster-baremetal-operator-tls DeviceMajor:0 DeviceMinor:801 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-722 DeviceMajor:0 DeviceMinor:722 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/ffc5379c-651f-490c-90f4-1285b9093596/volumes/kubernetes.io~secret/cert DeviceMajor:0 DeviceMinor:830 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-171 DeviceMajor:0 DeviceMinor:171 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-507 DeviceMajor:0 DeviceMinor:507 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/3c4e15b0e2e376b6219a5a7e0e6e767c17e2686b088653fbb672e0c430635638/userdata/shm DeviceMajor:0 DeviceMinor:1016 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-1092 DeviceMajor:0 DeviceMinor:1092 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1206 DeviceMajor:0 DeviceMinor:1206 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-149 DeviceMajor:0 DeviceMinor:149 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-204 DeviceMajor:0 DeviceMinor:204 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/e2ade7e6-cecd-4e98-8f85-ea8219303d75/volumes/kubernetes.io~projected/kube-api-access-vfjgn DeviceMajor:0 DeviceMinor:222 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-59 DeviceMajor:0 DeviceMinor:59 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:/var/lib/kubelet/pods/f826efe0-60a1-4465-b8d0-d4069ed507a1/volumes/kubernetes.io~empty-dir/tmp DeviceMajor:0 DeviceMinor:496 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/18921497-d8ed-42d8-bf3c-a027566ebe85/volumes/kubernetes.io~projected/kube-api-access-vtz82 DeviceMajor:0 DeviceMinor:45 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-1094 DeviceMajor:0 DeviceMinor:1094 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-997 DeviceMajor:0 DeviceMinor:997 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-894 DeviceMajor:0 DeviceMinor:894 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/edc7f629-4288-443b-aa8e-78bc6a09c848/volumes/kubernetes.io~projected/kube-api-access-glt6c DeviceMajor:0 DeviceMinor:125 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-297 DeviceMajor:0 DeviceMinor:297 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/c28524ce9ebb8a89b175cc98bd1b1e9d4101033acc5d2f2a96632789a23b70d2/userdata/shm DeviceMajor:0 DeviceMinor:557 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-510 DeviceMajor:0 DeviceMinor:510 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-516 DeviceMajor:0 DeviceMinor:516 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/f9fa104a-4979-4023-8d7e-a965f11bc7db/volumes/kubernetes.io~projected/kube-api-access-jlwg9 DeviceMajor:0 DeviceMinor:115 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/939efa41-8f40-4f91-bee4-0425aead9760/volumes/kubernetes.io~secret/etcd-client DeviceMajor:0 DeviceMinor:220 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-322 DeviceMajor:0 DeviceMinor:322 
Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/40f3b7a4-107c-4f1d-a3ab-b5d2309c373b/volumes/kubernetes.io~projected/kube-api-access-jnspk DeviceMajor:0 DeviceMinor:833 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/e5ae1886-f90c-49f4-bf08-055b55dd785a/volumes/kubernetes.io~secret/federate-client-tls DeviceMajor:0 DeviceMinor:1169 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-1212 DeviceMajor:0 DeviceMinor:1212 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/94e9e4fc-bfaa-4ba5-ad6d-fe76b91932e9/volumes/kubernetes.io~secret/metrics-tls DeviceMajor:0 DeviceMinor:551 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-458 DeviceMajor:0 DeviceMinor:458 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/d1bca7add53921531b3272a47166466f7d2ed78f903322c5f6c45062071f9671/userdata/shm DeviceMajor:0 DeviceMinor:109 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/01fc205ca60889e86b938272f49efc7613d39ee0f345e6249d36f7dbe33a148e/userdata/shm DeviceMajor:0 DeviceMinor:486 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/bd8ee9ae-4765-46bc-84d7-b5857fc3fb4a/volumes/kubernetes.io~projected/kube-api-access-8lsw9 DeviceMajor:0 DeviceMinor:225 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-293 DeviceMajor:0 DeviceMinor:293 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/b5f9f50b-e7b4-4b81-864b-349303f21447/volumes/kubernetes.io~secret/encryption-config DeviceMajor:0 DeviceMinor:427 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-159 DeviceMajor:0 DeviceMinor:159 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:overlay_0-584 DeviceMajor:0 DeviceMinor:584 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-763 DeviceMajor:0 DeviceMinor:763 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-283 DeviceMajor:0 DeviceMinor:283 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1217 DeviceMajor:0 DeviceMinor:1217 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/00b7669c60621e059b9f2a3185ba93db56934e35fa8fa0713c09f3decdea9378/userdata/shm DeviceMajor:0 DeviceMinor:128 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-189 DeviceMajor:0 DeviceMinor:189 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/c62bfe26cbaa5afe7741b2ad05574cf96716a998721d303299c76986059ad0d0/userdata/shm DeviceMajor:0 DeviceMinor:843 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/3e96b35f-c57a-4e01-82f7-894ea16ac5b8/volumes/kubernetes.io~secret/certs DeviceMajor:0 DeviceMinor:1037 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-1062 DeviceMajor:0 DeviceMinor:1062 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/fcf89a76-7a94-46d3-853e-68e986563764/volumes/kubernetes.io~projected/kube-api-access-s8prf DeviceMajor:0 DeviceMinor:223 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-517 DeviceMajor:0 DeviceMinor:517 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-990 DeviceMajor:0 DeviceMinor:990 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/263fd4cd6308173314717fc603c0f2464a1db66cd143ea0b303b9d029c2bd481/userdata/shm DeviceMajor:0 DeviceMinor:295 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} 
{Device:overlay_0-597 DeviceMajor:0 DeviceMinor:597 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1102 DeviceMajor:0 DeviceMinor:1102 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-457 DeviceMajor:0 DeviceMinor:457 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/97730ec2-e6f1-4f8c-b85c-3c10623d06ce/volumes/kubernetes.io~projected/kube-api-access-zj9rk DeviceMajor:0 DeviceMinor:726 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/06cbd48a-1f1d-4734-8d57-e1b6824879b6/volumes/kubernetes.io~projected/kube-api-access-ltlf6 DeviceMajor:0 DeviceMinor:1078 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/b5f9f50b-e7b4-4b81-864b-349303f21447/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:448 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/35bb7224fe9eca618f0100241589daaf5b90ad54413934d086e067f2a229eae2/userdata/shm DeviceMajor:0 DeviceMinor:758 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/a268d595-18c2-43a2-8ed5-eb64c76c490f/volumes/kubernetes.io~projected/kube-api-access-hfzdp DeviceMajor:0 DeviceMinor:760 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-1050 DeviceMajor:0 DeviceMinor:1050 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-383 DeviceMajor:0 DeviceMinor:383 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-371 DeviceMajor:0 DeviceMinor:371 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/7962fb40-1170-4c00-b1bf-92966aeae807/volumes/kubernetes.io~secret/image-registry-operator-tls DeviceMajor:0 DeviceMinor:549 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-164 
DeviceMajor:0 DeviceMinor:164 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/c6f3ba629d26f9cdeb3d7860a7b0f64e21de0f0dc77a559ebfda83ee3654ece0/userdata/shm DeviceMajor:0 DeviceMinor:1014 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/ec11012b-536a-422f-afc4-d2d0fd4b67fb/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:209 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/b065df33-7911-456e-b3a2-1f8c8d53e053/volumes/kubernetes.io~secret/srv-cert DeviceMajor:0 DeviceMinor:553 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-587 DeviceMajor:0 DeviceMinor:587 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-585 DeviceMajor:0 DeviceMinor:585 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-80 DeviceMajor:0 DeviceMinor:80 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/d71aa1b9-6eb5-4331-b959-8930e10817b4/volumes/kubernetes.io~secret/prometheus-operator-tls DeviceMajor:0 DeviceMinor:1052 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/5320a1da-262a-4b1b-93b4-1df9d4c26eec/volumes/kubernetes.io~projected/kube-api-access-9q8l2 DeviceMajor:0 DeviceMinor:1139 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-81 DeviceMajor:0 DeviceMinor:81 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1042 DeviceMajor:0 DeviceMinor:1042 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/d6fe8ee6-737e-438a-8d9d-1ec712f6bacf/volumes/kubernetes.io~projected/kube-api-access-czm78 DeviceMajor:0 DeviceMinor:731 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} 
{Device:/run/containers/storage/overlay-containers/88d505327814e64c05d565f5816ae97892418500facf7fd5799add8d17c8b232/userdata/shm DeviceMajor:0 DeviceMinor:306 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-969 DeviceMajor:0 DeviceMinor:969 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-66 DeviceMajor:0 DeviceMinor:66 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/5a2943917dc38b0012b7ecf0b0d92cb0eaf6fda9f9ba0f60f4167aa1dddca628/userdata/shm DeviceMajor:0 DeviceMinor:353 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-428 DeviceMajor:0 DeviceMinor:428 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/4146a62d-e37b-4295-90ca-b23f5e3d1112/volumes/kubernetes.io~secret/node-exporter-tls DeviceMajor:0 DeviceMinor:1079 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-315 DeviceMajor:0 DeviceMinor:315 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/91a6fa86-8c58-43bc-a2d4-2b20901269f7/volumes/kubernetes.io~projected/kube-api-access-rpxfc DeviceMajor:0 DeviceMinor:1076 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-111 DeviceMajor:0 DeviceMinor:111 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/2207df9e-f21e-4c30-98d5-248ae99c245e/volumes/kubernetes.io~projected/kube-api-access-cj9fr DeviceMajor:0 DeviceMinor:127 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-761 DeviceMajor:0 DeviceMinor:761 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/95171c03fc7a28cf1acc6d32a99defa7481a42e7b61b5f5262deb3933da18ccc/userdata/shm DeviceMajor:0 DeviceMinor:409 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-1048 
DeviceMajor:0 DeviceMinor:1048 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-172 DeviceMajor:0 DeviceMinor:172 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/33a5c021-23c3-4a97-b5f3-77fd6dcba1ab/volumes/kubernetes.io~projected/kube-api-access-fbsgx DeviceMajor:0 DeviceMinor:480 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-92 DeviceMajor:0 DeviceMinor:92 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/ad4cf9b2-4e66-4921-a30c-7b659bff06ab/volumes/kubernetes.io~secret/metrics-certs DeviceMajor:0 DeviceMinor:1005 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827064320 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-1027 DeviceMajor:0 DeviceMinor:1027 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/e2ade7e6-cecd-4e98-8f85-ea8219303d75/volumes/kubernetes.io~secret/cluster-olm-operator-serving-cert DeviceMajor:0 DeviceMinor:217 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-750 DeviceMajor:0 DeviceMinor:750 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/f650e6f0-fb74-4083-a7a9-fa4df513108f/volumes/kubernetes.io~projected/kube-api-access-tsc6v DeviceMajor:0 DeviceMinor:1013 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/ea87280c188a798da95cc9ce18e125174ff632d343ee3e8d6a214207d7770e1e/userdata/shm DeviceMajor:0 DeviceMinor:572 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/ccf74af5-d4fd-4ed3-9784-42397ea798c5/volumes/kubernetes.io~projected/kube-api-access-p9qkd DeviceMajor:0 DeviceMinor:467 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} 
{Device:/var/lib/kubelet/pods/939efa41-8f40-4f91-bee4-0425aead9760/volumes/kubernetes.io~projected/kube-api-access-8w58l DeviceMajor:0 DeviceMinor:231 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/5982111d-f4c6-4335-9b40-3142758fc2bc/volumes/kubernetes.io~projected/kube-api-access DeviceMajor:0 DeviceMinor:252 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/43fbd379-dd1e-4287-bd76-fd3ec51cde43/volumes/kubernetes.io~secret/catalogserver-certs DeviceMajor:0 DeviceMinor:497 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/336e741d-ac9a-4b94-9fbb-c9010e37c2d0/volumes/kubernetes.io~projected/kube-api-access-hbsfs DeviceMajor:0 DeviceMinor:992 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-1144 DeviceMajor:0 DeviceMinor:1144 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/e0bb044f-5a4e-4981-8084-91348ce1a56a/volumes/kubernetes.io~secret/webhook-certs DeviceMajor:0 DeviceMinor:1188 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/2e229ef6f57fea8e5406ee6259b2efa0f8a16c288c8a29c71c1e32c057bf84d0/userdata/shm DeviceMajor:0 DeviceMinor:254 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-369 DeviceMajor:0 DeviceMinor:369 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/2700f537-8f31-4380-a527-3e697a8122cc/volumes/kubernetes.io~projected/kube-api-access-dqldd DeviceMajor:0 DeviceMinor:485 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/92542f7c-182b-45a8-bbf3-00e99ba7acee/volumes/kubernetes.io~projected/kube-api-access-4lv7n DeviceMajor:0 DeviceMinor:747 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} 
{Device:/var/lib/kubelet/pods/fa8f1797-0219-49fe-82b5-7416cc481c3a/volumes/kubernetes.io~secret/signing-key DeviceMajor:0 DeviceMinor:404 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-465 DeviceMajor:0 DeviceMinor:465 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-512 DeviceMajor:0 DeviceMinor:512 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/d9f6591fd179f080128bbdecaa328db0f824489c21d34724dd9ae09d41418d2c/userdata/shm DeviceMajor:0 DeviceMinor:568 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/d6fe8ee6-737e-438a-8d9d-1ec712f6bacf/volumes/kubernetes.io~secret/control-plane-machine-set-operator-tls DeviceMajor:0 DeviceMinor:728 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/b35ab145-16a7-4ef1-86e8-0afb6ff469fd/volumes/kubernetes.io~projected/kube-api-access-tp77s DeviceMajor:0 DeviceMinor:663 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/e64ea71a-1e89-409a-9607-4d3cea093643/volumes/kubernetes.io~projected/kube-api-access-b689k DeviceMajor:0 DeviceMinor:456 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/0e06ef30b0d712353cac23adca2af0b5ab657ead19ee838202a1a4e15b1021cb/userdata/shm DeviceMajor:0 DeviceMinor:116 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/16d633c5-e0aa-4fb6-83e0-a2e976334406/volumes/kubernetes.io~secret/webhook-cert DeviceMajor:0 DeviceMinor:136 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/260c8aa5-a288-4ee8-b671-f97e90a2f39c/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:213 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} 
{Device:/run/containers/storage/overlay-containers/27e819688a289fa256559a318b6523e53569525673491824d2f15c32bbc44e17/userdata/shm DeviceMajor:0 DeviceMinor:823 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/08c69ca72893cd876b16b5740d0ac91db39852d0fe47a473761270d55d7436d0/userdata/shm DeviceMajor:0 DeviceMinor:1140 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/b0280499-8277-46f0-bd8c-058a47a99e19/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:241 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/23865ef5bfea471643359580ecae55517bf670fdb3b8b05c871c139fe34b55d5/userdata/shm DeviceMajor:0 DeviceMinor:267 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-872 DeviceMajor:0 DeviceMinor:872 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/e0bb044f-5a4e-4981-8084-91348ce1a56a/volumes/kubernetes.io~projected/kube-api-access-ks4jl DeviceMajor:0 DeviceMinor:1193 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/d858bfd6-e69b-4c93-a6d7-95cc0fc3ca29/volumes/kubernetes.io~projected/kube-api-access-x6zq8 DeviceMajor:0 DeviceMinor:120 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/e025d334-20e7-491f-8027-194251398747/volumes/kubernetes.io~projected/kube-api-access-bfzdk DeviceMajor:0 DeviceMinor:226 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/34f2ec5e-cb68-415b-a9f2-5b7f10fa9bbe/volumes/kubernetes.io~projected/kube-api-access-2msp8 DeviceMajor:0 DeviceMinor:253 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/43fbd379-dd1e-4287-bd76-fd3ec51cde43/volumes/kubernetes.io~projected/kube-api-access-c52pj DeviceMajor:0 DeviceMinor:472 Capacity:32475529216 Type:vfs 
Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/91a6fa86-8c58-43bc-a2d4-2b20901269f7/volumes/kubernetes.io~secret/kube-state-metrics-kube-rbac-proxy-config DeviceMajor:0 DeviceMinor:1074 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-852 DeviceMajor:0 DeviceMinor:852 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-440 DeviceMajor:0 DeviceMinor:440 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1020 DeviceMajor:0 DeviceMinor:1020 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-962 DeviceMajor:0 DeviceMinor:962 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1142 DeviceMajor:0 DeviceMinor:1142 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-71 DeviceMajor:0 DeviceMinor:71 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/17e72118bc9a21caf0710ea436fca2a94e237b39c26fb49832cf7ed5fa2efe7d/userdata/shm DeviceMajor:0 DeviceMinor:281 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-325 DeviceMajor:0 DeviceMinor:325 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/06cbd48a-1f1d-4734-8d57-e1b6824879b6/volumes/kubernetes.io~secret/openshift-state-metrics-tls DeviceMajor:0 DeviceMinor:1085 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/a5f412f714f8914221964a888babc262e21046db3f1580b324543c6c04c3fbd9/userdata/shm DeviceMajor:0 DeviceMinor:1080 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/e64ea71a-1e89-409a-9607-4d3cea093643/volumes/kubernetes.io~secret/cloud-credential-operator-serving-cert DeviceMajor:0 DeviceMinor:453 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} 
{Device:/var/lib/kubelet/pods/3e96b35f-c57a-4e01-82f7-894ea16ac5b8/volumes/kubernetes.io~secret/node-bootstrap-token DeviceMajor:0 DeviceMinor:1036 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/4146a62d-e37b-4295-90ca-b23f5e3d1112/volumes/kubernetes.io~projected/kube-api-access-4r7hx DeviceMajor:0 DeviceMinor:1077 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/2207df9e-f21e-4c30-98d5-248ae99c245e/volumes/kubernetes.io~secret/ovn-node-metrics-cert DeviceMajor:0 DeviceMinor:126 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/a7dab805-612b-404c-ab97-8cee927169db/volumes/kubernetes.io~projected/kube-api-access-pjrfz DeviceMajor:0 DeviceMinor:920 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/3e96b35f-c57a-4e01-82f7-894ea16ac5b8/volumes/kubernetes.io~projected/kube-api-access-rgs9m DeviceMajor:0 DeviceMinor:1045 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/55b41391fdb5cf271845bf26cd3e0f895b338fd5cf036e303350901534473728/userdata/shm DeviceMajor:0 DeviceMinor:569 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/4cc1a3bde7a78af95462a4b4f6ce986942ed4140ae91386507e1857084f8fcea/userdata/shm DeviceMajor:0 DeviceMinor:866 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/477f7fc213175cb954b186d8ae344e645aa5b57eb7978240c62ca1b2bcc281be/userdata/shm DeviceMajor:0 DeviceMinor:1018 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-1179 DeviceMajor:0 DeviceMinor:1179 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:overlay_0-195 DeviceMajor:0 DeviceMinor:195 
Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-419 DeviceMajor:0 DeviceMinor:419 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/d0272f7c-bedc-44cf-9790-88e10e6dda03/volumes/kubernetes.io~secret/cert DeviceMajor:0 DeviceMinor:329 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/78bf827b88ee656669c068d855b66ac1c4ec3fa61f0cd2ad36e3510f8a53aa74/userdata/shm DeviceMajor:0 DeviceMinor:65 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-140 DeviceMajor:0 DeviceMinor:140 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-488 DeviceMajor:0 DeviceMinor:488 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-589 DeviceMajor:0 DeviceMinor:589 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/fb93ae4071b146962466e96a3daecbc8c529d6e1a15ad1edfa1a28da5c544561/userdata/shm DeviceMajor:0 DeviceMinor:562 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-787 DeviceMajor:0 DeviceMinor:787 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/f65344cd-8571-4a78-927f-eec46ec1af51/volumes/kubernetes.io~projected/kube-api-access-djq7n DeviceMajor:0 DeviceMinor:754 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-591 DeviceMajor:0 DeviceMinor:591 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/8d89af2f-47f5-4ee5-a790-e162c2dee3ce/volumes/kubernetes.io~projected/kube-api-access DeviceMajor:0 DeviceMinor:625 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/f826efe0-60a1-4465-b8d0-d4069ed507a1/volumes/kubernetes.io~projected/kube-api-access-6bzxp DeviceMajor:0 DeviceMinor:373 Capacity:32475529216 Type:vfs Inodes:4108170 
HasInodes:true} {Device:overlay_0-83 DeviceMajor:0 DeviceMinor:83 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/91a6fa86-8c58-43bc-a2d4-2b20901269f7/volumes/kubernetes.io~secret/kube-state-metrics-tls DeviceMajor:0 DeviceMinor:1084 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-167 DeviceMajor:0 DeviceMinor:167 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-147 DeviceMajor:0 DeviceMinor:147 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/573d3a02-e395-4816-963a-cd614ef53f75/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:214 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-275 DeviceMajor:0 DeviceMinor:275 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1196 DeviceMajor:0 DeviceMinor:1196 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-782 DeviceMajor:0 DeviceMinor:782 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-995 DeviceMajor:0 DeviceMinor:995 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1024 DeviceMajor:0 DeviceMinor:1024 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/97730ec2-e6f1-4f8c-b85c-3c10623d06ce/volumes/kubernetes.io~secret/cert DeviceMajor:0 DeviceMinor:460 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/31a92270-efed-44fe-871e-90333235e85f/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:816 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-310 DeviceMajor:0 DeviceMinor:310 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:/run/containers/storage/overlay-containers/7b6fb81fa9b3775db2a9d43b8034ee4a9a2939e8e74ced3195abe4a7116a137d/userdata/shm DeviceMajor:0 DeviceMinor:451 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/8f214df22b3108e2647e81c2065b29247bcd16b9d799cc094aa75352fed33b39/userdata/shm DeviceMajor:0 DeviceMinor:561 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/a058ca3e613163c208806f2f85e86778b10da29eadc77daac9aef1471afdc643/userdata/shm DeviceMajor:0 DeviceMinor:279 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-555 DeviceMajor:0 DeviceMinor:555 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-338 DeviceMajor:0 DeviceMinor:338 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-855 DeviceMajor:0 DeviceMinor:855 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/7375d00faec570babb78f641885c44d45133bd27ded2430ca3ed60792534d150/userdata/shm DeviceMajor:0 DeviceMinor:765 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/0abbacca379cb1aa4703d3e53f8d0cf0d9cc8837c199cd99507dcb84dbe142a8/userdata/shm DeviceMajor:0 DeviceMinor:1088 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-1127 DeviceMajor:0 DeviceMinor:1127 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-134 DeviceMajor:0 DeviceMinor:134 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/9edfccecec2ce83d19d6f04be10c237136ad19be78d3969b003d45d0dd5cdd53/userdata/shm DeviceMajor:0 DeviceMinor:633 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} 
{Device:/run/containers/storage/overlay-containers/0cdcdcd2ccccdebd6503233827667ed7ce6f4654db0dc10c48bcf238245e2d46/userdata/shm DeviceMajor:0 DeviceMinor:733 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-647 DeviceMajor:0 DeviceMinor:647 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/a7dab805-612b-404c-ab97-8cee927169db/volumes/kubernetes.io~secret/proxy-tls DeviceMajor:0 DeviceMinor:912 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/52447280dead3b5a28af890c9c1936e68858aa0be2da0967ec252697841e8f7d/userdata/shm DeviceMajor:0 DeviceMinor:1086 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/f2c2ecd78b0b095cca6d610f53e1ff83eedc17b6a054e2d1a3484b11ec8181f6/userdata/shm DeviceMajor:0 DeviceMinor:48 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/59d50dd5-6793-4f96-a769-31e086ecc7e4/volumes/kubernetes.io~projected/kube-api-access-mlp7w DeviceMajor:0 DeviceMinor:227 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/837527d2f9f7319ea14fc20367ef17853e00cc20e938fc1184f891aa57296deb/userdata/shm DeviceMajor:0 DeviceMinor:249 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-999 DeviceMajor:0 DeviceMinor:999 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-502 DeviceMajor:0 DeviceMinor:502 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/156dd659cded87fed4f4d9c1948aa273d3ce5df8a947527d51220517f67ececc/userdata/shm DeviceMajor:0 DeviceMinor:740 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-56 DeviceMajor:0 DeviceMinor:56 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-803 DeviceMajor:0 
DeviceMinor:803 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/40f3b7a4-107c-4f1d-a3ab-b5d2309c373b/volumes/kubernetes.io~secret/proxy-tls DeviceMajor:0 DeviceMinor:812 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/7d9881841018d229060672bdf33946e413258966dde9be04451521b3c0265667/userdata/shm DeviceMajor:0 DeviceMinor:886 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/7962fb40-1170-4c00-b1bf-92966aeae807/volumes/kubernetes.io~projected/bound-sa-token DeviceMajor:0 DeviceMinor:232 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/3d9fe248-ba87-47e3-911a-1b2b112b5683/volumes/kubernetes.io~projected/kube-api-access-4hn9w DeviceMajor:0 DeviceMinor:245 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-737 DeviceMajor:0 DeviceMinor:737 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1090 DeviceMajor:0 DeviceMinor:1090 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/fef2da050284c5b28c67d998136cd7aca2118deb05e66bc5e9cea3da325d47dc/userdata/shm DeviceMajor:0 DeviceMinor:258 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/2700f537-8f31-4380-a527-3e697a8122cc/volumes/kubernetes.io~secret/etcd-client DeviceMajor:0 DeviceMinor:484 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-715 DeviceMajor:0 DeviceMinor:715 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/b0280499-8277-46f0-bd8c-058a47a99e19/volumes/kubernetes.io~projected/kube-api-access-dxvk7 DeviceMajor:0 DeviceMinor:262 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-162 DeviceMajor:0 DeviceMinor:162 Capacity:214143315968 Type:vfs Inodes:104594880 
HasInodes:true} {Device:overlay_0-523 DeviceMajor:0 DeviceMinor:523 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/31a92270-efed-44fe-871e-90333235e85f/volumes/kubernetes.io~projected/kube-api-access-8zhfh DeviceMajor:0 DeviceMinor:838 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-474 DeviceMajor:0 DeviceMinor:474 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-51 DeviceMajor:0 DeviceMinor:51 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/ccf74af5-d4fd-4ed3-9784-42397ea798c5/volumes/kubernetes.io~secret/cloud-controller-manager-operator-tls DeviceMajor:0 DeviceMinor:463 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/bd8ee9ae-4765-46bc-84d7-b5857fc3fb4a/volumes/kubernetes.io~secret/apiservice-cert DeviceMajor:0 DeviceMinor:431 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-536 DeviceMajor:0 DeviceMinor:536 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-41 DeviceMajor:0 DeviceMinor:41 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-931 DeviceMajor:0 DeviceMinor:931 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/bd8ee9ae-4765-46bc-84d7-b5857fc3fb4a/volumes/kubernetes.io~secret/node-tuning-operator-tls DeviceMajor:0 DeviceMinor:432 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/ea77244427e21f197396c97f841977fffdf6891b18e6c927b783ae59d8efff47/userdata/shm DeviceMajor:0 DeviceMinor:1058 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/e5ae1886-f90c-49f4-bf08-055b55dd785a/volumes/kubernetes.io~secret/secret-telemeter-client-kube-rbac-proxy-config DeviceMajor:0 DeviceMinor:1173 Capacity:32475529216 Type:vfs Inodes:4108170 
HasInodes:true} {Device:overlay_0-305 DeviceMajor:0 DeviceMinor:305 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4/volumes/kubernetes.io~projected/kube-api-access-hpl2c DeviceMajor:0 DeviceMinor:102 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-1082 DeviceMajor:0 DeviceMinor:1082 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/2c337c8902968583bee083c15c603882d48753850a36d0d861e8e0df75e9ad06/userdata/shm DeviceMajor:0 DeviceMinor:880 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/495e0cff-fca8-4dad-9247-2fc0e7ce86fc/volumes/kubernetes.io~projected/kube-api-access-5qrqx DeviceMajor:0 DeviceMinor:885 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-1151 DeviceMajor:0 DeviceMinor:1151 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/8a6ab2be-d018-4fd5-bfbb-6b88aec28663/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:219 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/b065df33-7911-456e-b3a2-1f8c8d53e053/volumes/kubernetes.io~projected/kube-api-access-pz26d DeviceMajor:0 DeviceMinor:228 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/32c5cad9d5ce7a6a9868e1321b49281ebb4f7769c90afec706cbbbe9a7cdbdd6/userdata/shm DeviceMajor:0 DeviceMinor:89 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/07a4fd92-0fd1-4688-b2db-de615d75971e/volumes/kubernetes.io~secret/metrics-tls DeviceMajor:0 DeviceMinor:98 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-580 DeviceMajor:0 DeviceMinor:580 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1198 DeviceMajor:0 
DeviceMinor:1198 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-766 DeviceMajor:0 DeviceMinor:766 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-312 DeviceMajor:0 DeviceMinor:312 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/fc5a9875-d97e-4371-a15d-a1f43b85abce/volumes/kubernetes.io~projected/kube-api-access-mvlvd DeviceMajor:0 DeviceMinor:473 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/7d31e16adf7f10cb16f9f4afb5a9c559f636c495a15abd8700657562f8afa08b/userdata/shm DeviceMajor:0 DeviceMinor:993 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/0152b496baa88626f806c2cd8158beac6c11d9696ef03e334ab29bac73c88cbe/userdata/shm DeviceMajor:0 DeviceMinor:130 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/7b07e88ac1eb70e2f8e0c7ac6bf4cc612d670ddad2d854d52139054ca73dfb7c/userdata/shm DeviceMajor:0 DeviceMinor:269 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-514 DeviceMajor:0 DeviceMinor:514 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/b35ab145-16a7-4ef1-86e8-0afb6ff469fd/volumes/kubernetes.io~secret/metrics-tls DeviceMajor:0 DeviceMinor:664 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-414 DeviceMajor:0 DeviceMinor:414 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-349 DeviceMajor:0 DeviceMinor:349 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/d3d8011493c530c7726e87839672927a640cefde6cc363dd89bea6af846b7008/userdata/shm DeviceMajor:0 DeviceMinor:374 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-593 DeviceMajor:0 DeviceMinor:593 
Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/1bf9cb47892d0288027c6bb37223daf6c06c5b704eeeaa16637e3e622b28899a/userdata/shm DeviceMajor:0 DeviceMinor:779 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-656 DeviceMajor:0 DeviceMinor:656 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/6f40c8c2653002ea6e916a625294f3f884745ae3fd33ab733118256908cbb925/userdata/shm DeviceMajor:0 DeviceMinor:506 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/e7b72267-fc08-41ed-a92b-9fca7372aba6/volumes/kubernetes.io~secret/cluster-monitoring-operator-tls DeviceMajor:0 DeviceMinor:546 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-648 DeviceMajor:0 DeviceMinor:648 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-796 DeviceMajor:0 DeviceMinor:796 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1096 DeviceMajor:0 DeviceMinor:1096 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true} {Device:/var/lib/kubelet/pods/d858bfd6-e69b-4c93-a6d7-95cc0fc3ca29/volumes/kubernetes.io~secret/metrics-certs DeviceMajor:0 DeviceMinor:552 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/d52b6a2cf90645c7d7adbd4e26631b5105d0e2c63496bcbe09fc57752e328d79/userdata/shm DeviceMajor:0 DeviceMinor:741 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-1204 DeviceMajor:0 DeviceMinor:1204 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-868 DeviceMajor:0 DeviceMinor:868 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-820 DeviceMajor:0 DeviceMinor:820 
Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/e5ae1886-f90c-49f4-bf08-055b55dd785a/volumes/kubernetes.io~secret/secret-telemeter-client DeviceMajor:0 DeviceMinor:1174 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/548400f1bcdf7de3d454a40cdac983932202fdf4d758178348c7545ba7209bcb/userdata/shm DeviceMajor:0 DeviceMinor:987 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/d1339a30e998845d2411b5c92f3883b1457216fd5491cd19b8b7f3a77576f95c/userdata/shm DeviceMajor:0 DeviceMinor:308 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/29ba6765-61c9-4f78-8f44-570418000c5c/volumes/kubernetes.io~projected/kube-api-access-xchll DeviceMajor:0 DeviceMinor:332 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/b9768e50-c883-47b0-b319-851fa53ac19a/volumes/kubernetes.io~projected/kube-api-access-bw5tw DeviceMajor:0 DeviceMinor:831 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-113 DeviceMajor:0 DeviceMinor:113 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-138 DeviceMajor:0 DeviceMinor:138 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/573d3a02-e395-4816-963a-cd614ef53f75/volumes/kubernetes.io~projected/kube-api-access-n959l DeviceMajor:0 DeviceMinor:233 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-905 DeviceMajor:0 DeviceMinor:905 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/e025d334-20e7-49 Mar 18 09:04:07.156730 master-0 kubenswrapper[28766]: 1f-8027-194251398747/volumes/kubernetes.io~secret/metrics-tls DeviceMajor:0 DeviceMinor:554 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} 
{Device:/run/containers/storage/overlay-containers/dea41e38002f15edc5a2abae54e8fefc1a70d4002c8cd87d39c7bc11a4255185/userdata/shm DeviceMajor:0 DeviceMinor:243 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-739 DeviceMajor:0 DeviceMinor:739 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-842 DeviceMajor:0 DeviceMinor:842 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-299 DeviceMajor:0 DeviceMinor:299 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1060 DeviceMajor:0 DeviceMinor:1060 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1109 DeviceMajor:0 DeviceMinor:1109 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-179 DeviceMajor:0 DeviceMinor:179 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-840 DeviceMajor:0 DeviceMinor:840 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1022 DeviceMajor:0 DeviceMinor:1022 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-786 DeviceMajor:0 DeviceMinor:786 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-877 DeviceMajor:0 DeviceMinor:877 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/5320a1da-262a-4b1b-93b4-1df9d4c26eec/volumes/kubernetes.io~secret/secret-metrics-client-certs DeviceMajor:0 DeviceMinor:1138 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/3c7483d94d4b729fb2442b8f5c55aceeebc0aac5c97dd559a0179898c48164c2/userdata/shm DeviceMajor:0 DeviceMinor:49 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/7d69a2aa0453ffd9d52f608b0f589cc8cbacbdbc94e468d5326ece0a3282eddd/userdata/shm DeviceMajor:0 DeviceMinor:566 Capacity:67108864 
Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-490 DeviceMajor:0 DeviceMinor:490 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/495e0cff-fca8-4dad-9247-2fc0e7ce86fc/volumes/kubernetes.io~secret/machine-approver-tls DeviceMajor:0 DeviceMinor:884 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/fa8f1797-0219-49fe-82b5-7416cc481c3a/volumes/kubernetes.io~projected/kube-api-access-njbjp DeviceMajor:0 DeviceMinor:408 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-622 DeviceMajor:0 DeviceMinor:622 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/var/lib/kubelet/pods/260c8aa5-a288-4ee8-b671-f97e90a2f39c/volumes/kubernetes.io~projected/kube-api-access DeviceMajor:0 DeviceMinor:236 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-429 DeviceMajor:0 DeviceMinor:429 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/2f2e86c1c0e64c2e65cdc84455f83de896f426c03295ce65094d278bb54d2594/userdata/shm DeviceMajor:0 DeviceMinor:434 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/91da701859683e09bbd69c5ea46a27c0da629a0940ac397355b74f2e9d28cde0/userdata/shm DeviceMajor:0 DeviceMinor:808 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-882 DeviceMajor:0 DeviceMinor:882 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-118 DeviceMajor:0 DeviceMinor:118 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-264 DeviceMajor:0 DeviceMinor:264 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:/var/lib/kubelet/pods/18921497-d8ed-42d8-bf3c-a027566ebe85/volumes/kubernetes.io~secret/samples-operator-tls DeviceMajor:0 DeviceMinor:489 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-90 DeviceMajor:0 DeviceMinor:90 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/33a5c021-23c3-4a97-b5f3-77fd6dcba1ab/volumes/kubernetes.io~projected/ca-certs DeviceMajor:0 DeviceMinor:479 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/ad4cf9b2-4e66-4921-a30c-7b659bff06ab/volumes/kubernetes.io~secret/stats-auth DeviceMajor:0 DeviceMinor:1009 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/f3e26fe3d2ca6df6dc0161bddc1b304ebbc7fa75a6def1dd10d9bdbbd5e6b79d/userdata/shm DeviceMajor:0 DeviceMinor:1177 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/b48235a991ddd5e0dbc46936f4240a715253ffe775f0aa19da8ca60c7a3f2ca0/userdata/shm DeviceMajor:0 DeviceMinor:142 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/f1fbd15a6f55efb9df34e794516a926fbd6cd9758a5312e86f1eb743de9e13b5/userdata/shm DeviceMajor:0 DeviceMinor:260 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/fc5a9875-d97e-4371-a15d-a1f43b85abce/volumes/kubernetes.io~secret/cluster-storage-operator-serving-cert DeviceMajor:0 DeviceMinor:464 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-184 DeviceMajor:0 DeviceMinor:184 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-454 DeviceMajor:0 DeviceMinor:454 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-327 DeviceMajor:0 DeviceMinor:327 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:/var/lib/kubelet/pods/d71aa1b9-6eb5-4331-b959-8930e10817b4/volumes/kubernetes.io~secret/prometheus-operator-kube-rbac-proxy-config DeviceMajor:0 DeviceMinor:1056 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-525 DeviceMajor:0 DeviceMinor:525 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/c1eb0a6c1ab17257358eeeb97010b410797c8ba9fd08a44d4ff2e76c51c917e0/userdata/shm DeviceMajor:0 DeviceMinor:621 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-907 DeviceMajor:0 DeviceMinor:907 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-932 DeviceMajor:0 DeviceMinor:932 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/818594107c19b8863e506e8d4f0498cc1facb30c01ff790168223f67dc1385ac/userdata/shm DeviceMajor:0 DeviceMinor:582 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-95 DeviceMajor:0 DeviceMinor:95 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/04e23989-853e-4b49-ba0f-1961d64ae3a3/volumes/kubernetes.io~projected/kube-api-access-qwsfl DeviceMajor:0 DeviceMinor:757 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/1794b726-5c0d-4a72-8ddd-418a2cbd8ded/volumes/kubernetes.io~secret/apiservice-cert DeviceMajor:0 DeviceMinor:773 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-850 DeviceMajor:0 DeviceMinor:850 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/fcf89a76-7a94-46d3-853e-68e986563764/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:215 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-344 DeviceMajor:0 DeviceMinor:344 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:/var/lib/kubelet/pods/ad4cf9b2-4e66-4921-a30c-7b659bff06ab/volumes/kubernetes.io~secret/default-certificate DeviceMajor:0 DeviceMinor:1011 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-88 DeviceMajor:0 DeviceMinor:88 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/866c259c-7661-4a80-873b-6fd625218665/volumes/kubernetes.io~projected/kube-api-access-ftdvp DeviceMajor:0 DeviceMinor:266 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/1794b726-5c0d-4a72-8ddd-418a2cbd8ded/volumes/kubernetes.io~secret/webhook-cert DeviceMajor:0 DeviceMinor:772 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-888 DeviceMajor:0 DeviceMinor:888 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/04e23989-853e-4b49-ba0f-1961d64ae3a3/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:753 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-903 DeviceMajor:0 DeviceMinor:903 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/e5ae1886-f90c-49f4-bf08-055b55dd785a/volumes/kubernetes.io~secret/telemeter-client-tls DeviceMajor:0 DeviceMinor:1175 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-723 DeviceMajor:0 DeviceMinor:723 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/e8459c0c82ddc5a6e864e94a80eda98d197ebe97363ec23c2d9041a3ae2c51bb/userdata/shm DeviceMajor:0 DeviceMinor:846 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/94e9e4fc-bfaa-4ba5-ad6d-fe76b91932e9/volumes/kubernetes.io~projected/kube-api-access-tk9jq DeviceMajor:0 DeviceMinor:221 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} 
{Device:/var/lib/kubelet/pods/f826efe0-60a1-4465-b8d0-d4069ed507a1/volumes/kubernetes.io~empty-dir/etc-tuned DeviceMajor:0 DeviceMinor:379 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/3d9fe248-ba87-47e3-911a-1b2b112b5683/volumes/kubernetes.io~secret/srv-cert DeviceMajor:0 DeviceMinor:550 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/15b9cae2d28df4fa59242b209b16efd412d30453ba1d9f0bfc42c07c896efdb2/userdata/shm DeviceMajor:0 DeviceMinor:238 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/8d89af2f-47f5-4ee5-a790-e162c2dee3ce/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:630 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-348 DeviceMajor:0 DeviceMinor:348 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/fc289a83-9a2e-404b-b148-605639362703/volumes/kubernetes.io~projected/kube-api-access-l7lrl DeviceMajor:0 DeviceMinor:303 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/d4eadecdf9a3a2b8f4413e3b5de43801a78ed52767f124bb85a08953e8d985e4/userdata/shm DeviceMajor:0 DeviceMinor:778 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-43 DeviceMajor:0 DeviceMinor:43 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/cab7f3dd54d1235751e5892dcbba68fcd420bde6fbdec0b1e4ae52ac6f473f51/userdata/shm DeviceMajor:0 DeviceMinor:1046 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-1035 DeviceMajor:0 DeviceMinor:1035 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/b42865dcd2dae3a2390972bbf267cd467643023a4c8d222016e0b44a61943afc/userdata/shm DeviceMajor:0 DeviceMinor:248 Capacity:67108864 Type:vfs 
Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/b5f9f50b-e7b4-4b81-864b-349303f21447/volumes/kubernetes.io~projected/kube-api-access-bpj79 DeviceMajor:0 DeviceMinor:450 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-605 DeviceMajor:0 DeviceMinor:605 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-389 DeviceMajor:0 DeviceMinor:389 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-194 DeviceMajor:0 DeviceMinor:194 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/772bc250-2e57-4ce0-883c-d44281fcb0be/volumes/kubernetes.io~projected/kube-api-access-dfjmx DeviceMajor:0 DeviceMinor:230 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-620 DeviceMajor:0 DeviceMinor:620 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/52e32e2d-33ab-4351-ae8a-80acd6077d70/volumes/kubernetes.io~projected/kube-api-access-dm6nf DeviceMajor:0 DeviceMinor:535 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/b9768e50-c883-47b0-b319-851fa53ac19a/volumes/kubernetes.io~secret/machine-api-operator-tls DeviceMajor:0 DeviceMinor:818 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/5982111d-f4c6-4335-9b40-3142758fc2bc/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:240 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/301f04aeb1003f5e8d27049d79ee0b80e5fce89b95da440a253b676b3418f0d1/userdata/shm DeviceMajor:0 DeviceMinor:246 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/4cc14de2-59e8-49e1-9eeb-f87c5e9d8a75/volumes/kubernetes.io~projected/kube-api-access-2m5wf DeviceMajor:0 DeviceMinor:756 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-1026 
DeviceMajor:0 DeviceMinor:1026 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1110 DeviceMajor:0 DeviceMinor:1110 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/6fb1f871-9c24-48a1-a15a-a636b5bb687d/volumes/kubernetes.io~projected/kube-api-access-wxxcn DeviceMajor:0 DeviceMinor:224 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/c110b293-2c6b-496b-b015-23aada98cb4b/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:242 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-921 DeviceMajor:0 DeviceMinor:921 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-271 DeviceMajor:0 DeviceMinor:271 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-941 DeviceMajor:0 DeviceMinor:941 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/06cbd48a-1f1d-4734-8d57-e1b6824879b6/volumes/kubernetes.io~secret/openshift-state-metrics-kube-rbac-proxy-config DeviceMajor:0 DeviceMinor:1070 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/4dced598bcd2040f1c605c245256a2161b2f459ac4faa81c6af5275d4099b859/userdata/shm DeviceMajor:0 DeviceMinor:97 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/ec11012b-536a-422f-afc4-d2d0fd4b67fb/volumes/kubernetes.io~projected/kube-api-access-svdhs DeviceMajor:0 DeviceMinor:235 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/b273b68e51f7dadf9df698a73d4ce02f6814882dc729b2c52672e829413c2a75/userdata/shm DeviceMajor:0 DeviceMinor:558 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/4fb480fe238d2202b063fb165afa539e61290f53ee162d859e36d1d4cd81bfd5/userdata/shm DeviceMajor:0 
DeviceMinor:475 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-413 DeviceMajor:0 DeviceMinor:413 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/2700f537-8f31-4380-a527-3e697a8122cc/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:442 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/e0d127be-2d13-449b-915b-2d49052baf02/volumes/kubernetes.io~projected/kube-api-access DeviceMajor:0 DeviceMinor:798 Capacity:200003584 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/b5f9f50b-e7b4-4b81-864b-349303f21447/volumes/kubernetes.io~secret/etcd-client DeviceMajor:0 DeviceMinor:449 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/59d50dd5-6793-4f96-a769-31e086ecc7e4/volumes/kubernetes.io~secret/package-server-manager-serving-cert DeviceMajor:0 DeviceMinor:541 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-595 DeviceMajor:0 DeviceMinor:595 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/4146a62d-e37b-4295-90ca-b23f5e3d1112/volumes/kubernetes.io~secret/node-exporter-kube-rbac-proxy-config DeviceMajor:0 DeviceMinor:1075 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-417 DeviceMajor:0 DeviceMinor:417 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/c1e8680fcd730f22fac4464d7e2e919f0d68259c2072f7e2c075736c7c9f888d/userdata/shm DeviceMajor:0 DeviceMinor:105 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/7962fb40-1170-4c00-b1bf-92966aeae807/volumes/kubernetes.io~projected/kube-api-access-47p9x DeviceMajor:0 DeviceMinor:234 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} 
{Device:/var/lib/kubelet/pods/c110b293-2c6b-496b-b015-23aada98cb4b/volumes/kubernetes.io~projected/kube-api-access-lw27k DeviceMajor:0 DeviceMinor:256 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-61 DeviceMajor:0 DeviceMinor:61 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/a9d070f228bb3ad86327355b7631ce9d61aa33df655c8f354c0c3cf73e6bbfbd/userdata/shm DeviceMajor:0 DeviceMinor:1194 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-289 DeviceMajor:0 DeviceMinor:289 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/862f349be451274c2786c24620a1b3df5221d5b66e16cc9b0099daecc5ae9693/userdata/shm DeviceMajor:0 DeviceMinor:809 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-1107 DeviceMajor:0 DeviceMinor:1107 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/2700f537-8f31-4380-a527-3e697a8122cc/volumes/kubernetes.io~secret/encryption-config DeviceMajor:0 DeviceMinor:483 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/4cc14de2-59e8-49e1-9eeb-f87c5e9d8a75/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:755 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/5320a1da-262a-4b1b-93b4-1df9d4c26eec/volumes/kubernetes.io~secret/client-ca-bundle DeviceMajor:0 DeviceMinor:1137 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/bd1fd64f6f95cdc3189bd097dac24d4300572f6ab92c972496e95007ac8e621a/userdata/shm DeviceMajor:0 DeviceMinor:42 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-319 DeviceMajor:0 DeviceMinor:319 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-131 DeviceMajor:0 DeviceMinor:131 
Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/e5ae1886-f90c-49f4-bf08-055b55dd785a/volumes/kubernetes.io~projected/kube-api-access-4fql4 DeviceMajor:0 DeviceMinor:1176 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-686 DeviceMajor:0 DeviceMinor:686 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-677 DeviceMajor:0 DeviceMinor:677 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/ad4cf9b2-4e66-4921-a30c-7b659bff06ab/volumes/kubernetes.io~projected/kube-api-access-zkfql DeviceMajor:0 DeviceMinor:1012 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/d71aa1b9-6eb5-4331-b959-8930e10817b4/volumes/kubernetes.io~projected/kube-api-access-x5q4t DeviceMajor:0 DeviceMinor:1057 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/edc7f629-4288-443b-aa8e-78bc6a09c848/volumes/kubernetes.io~secret/ovn-control-plane-metrics-cert DeviceMajor:0 DeviceMinor:124 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/21254471a19094b73e6733114f96329319386cc402e4cbd645f5a024b798fc80/userdata/shm DeviceMajor:0 DeviceMinor:783 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-336 DeviceMajor:0 DeviceMinor:336 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-601 DeviceMajor:0 DeviceMinor:601 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-334 DeviceMajor:0 DeviceMinor:334 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/6634f9815dab75e36ab077ad26870775c6b66428323ea93fb4028cdabc9be608/userdata/shm DeviceMajor:0 DeviceMinor:776 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} 
{Device:/var/lib/kubelet/pods/998cabe9-d479-439f-b1c0-1d8c49aefeb9/volumes/kubernetes.io~secret/tls-certificates DeviceMajor:0 DeviceMinor:1010 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/65a818ad31dbd4fa7bc3752867fcfb68d605bd15a5390e756d551630b2da7bfb/userdata/shm DeviceMajor:0 DeviceMinor:54 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-291 DeviceMajor:0 DeviceMinor:291 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/34f2ec5e-cb68-415b-a9f2-5b7f10fa9bbe/volumes/kubernetes.io~secret/marketplace-operator-metrics DeviceMajor:0 DeviceMinor:547 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/68465463-5f2a-4e74-9c34-2706a185f7ea/volumes/kubernetes.io~projected/kube-api-access-gqlhh DeviceMajor:0 DeviceMinor:732 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-789 DeviceMajor:0 DeviceMinor:789 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/d0272f7c-bedc-44cf-9790-88e10e6dda03/volumes/kubernetes.io~projected/kube-api-access-ttnk9 DeviceMajor:0 DeviceMinor:433 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-46 DeviceMajor:0 DeviceMinor:46 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-107 DeviceMajor:0 DeviceMinor:107 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-635 DeviceMajor:0 DeviceMinor:635 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-285 DeviceMajor:0 DeviceMinor:285 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/176bf98298dce9ebeff9e6cf55f250f7b8583bdf4845838e239879972b0093f1/userdata/shm DeviceMajor:0 DeviceMinor:571 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-1161 
DeviceMajor:0 DeviceMinor:1161 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/8ebaeb8d-8fbd-4638-9516-fc4e90ba2fa8/volumes/kubernetes.io~projected/kube-api-access-d2bwv DeviceMajor:0 DeviceMinor:372 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-679 DeviceMajor:0 DeviceMinor:679 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-86 DeviceMajor:0 DeviceMinor:86 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-263 DeviceMajor:0 DeviceMinor:263 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/772bc250-2e57-4ce0-883c-d44281fcb0be/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:216 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/94e9e4fc-bfaa-4ba5-ad6d-fe76b91932e9/volumes/kubernetes.io~projected/bound-sa-token DeviceMajor:0 DeviceMinor:237 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-277 DeviceMajor:0 DeviceMinor:277 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-154 DeviceMajor:0 DeviceMinor:154 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-971 DeviceMajor:0 DeviceMinor:971 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-853 DeviceMajor:0 DeviceMinor:853 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1064 DeviceMajor:0 DeviceMinor:1064 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1123 DeviceMajor:0 DeviceMinor:1123 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-316 DeviceMajor:0 DeviceMinor:316 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-380 DeviceMajor:0 DeviceMinor:380 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:overlay_0-858 DeviceMajor:0 DeviceMinor:858 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-157 DeviceMajor:0 DeviceMinor:157 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/939efa41-8f40-4f91-bee4-0425aead9760/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:218 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-438 DeviceMajor:0 DeviceMinor:438 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none} 252:16:{Name:vdb Major:252 Minor:16 Size:21474836480 Scheduler:none} 252:32:{Name:vdc Major:252 Minor:32 Size:21474836480 Scheduler:none} 252:48:{Name:vdd Major:252 Minor:48 Size:21474836480 Scheduler:none} 252:64:{Name:vde Major:252 Minor:64 Size:21474836480 Scheduler:none}] NetworkDevices:[{Name:01fc205ca60889e MacAddress:9e:74:7a:9f:86:c7 Speed:10000 Mtu:8900} {Name:08c69ca72893cd8 MacAddress:22:03:76:13:c6:b0 Speed:10000 Mtu:8900} {Name:0abbacca379cb1a MacAddress:52:47:ef:49:97:f9 Speed:10000 Mtu:8900} {Name:0cdcdcd2ccccdeb MacAddress:56:46:8c:d9:4f:19 Speed:10000 Mtu:8900} {Name:15b9cae2d28df4f MacAddress:0e:70:cb:72:3c:9b Speed:10000 Mtu:8900} {Name:176bf98298dce9e MacAddress:8a:c8:d8:20:d9:5d Speed:10000 Mtu:8900} {Name:17e72118bc9a21c MacAddress:4a:ba:a5:70:a5:2e Speed:10000 Mtu:8900} {Name:1bf9cb47892d028 MacAddress:ca:cc:56:73:d2:3f Speed:10000 Mtu:8900} {Name:21254471a19094b MacAddress:2e:a7:02:23:64:6c Speed:10000 Mtu:8900} {Name:23865ef5bfea471 MacAddress:8a:75:02:d0:2e:2c Speed:10000 Mtu:8900} {Name:26ecaeebed65d3c MacAddress:82:bf:d4:32:ca:25 Speed:10000 Mtu:8900} {Name:27e819688a289fa MacAddress:72:6e:d5:b1:c8:aa Speed:10000 Mtu:8900} {Name:2c337c890296858 MacAddress:76:cb:a7:95:23:4c Speed:10000 Mtu:8900} {Name:2e229ef6f57fea8 MacAddress:4a:10:80:82:2f:ae Speed:10000 Mtu:8900} {Name:2f2e86c1c0e64c2 MacAddress:8a:70:cc:0c:da:dd 
Speed:10000 Mtu:8900} {Name:301f04aeb1003f5 MacAddress:1e:8b:2c:0f:33:12 Speed:10000 Mtu:8900} {Name:35bb7224fe9eca6 MacAddress:6e:87:8c:1e:8e:8c Speed:10000 Mtu:8900} {Name:3c4e15b0e2e376b MacAddress:ba:ad:6f:d5:fb:d1 Speed:10000 Mtu:8900} {Name:4cc1a3bde7a78af MacAddress:be:43:68:b6:4e:b1 Speed:10000 Mtu:8900} {Name:4fb480fe238d220 MacAddress:aa:d2:c9:b9:7a:d0 Speed:10000 Mtu:8900} {Name:52447280dead3b5 MacAddress:e2:2c:83:8a:f4:22 Speed:10000 Mtu:8900} {Name:548400f1bcdf7de MacAddress:fa:5a:3d:ac:e4:85 Speed:10000 Mtu:8900} {Name:55b41391fdb5cf2 MacAddress:de:d7:4b:61:4f:0a Speed:10000 Mtu:8900} {Name:5a2943917dc38b0 MacAddress:2a:a8:1a:92:96:b4 Speed:10000 Mtu:8900} {Name:6634f9815dab75e MacAddress:ba:00:34:76:7b:8d Speed:10000 Mtu:8900} {Name:6f40c8c2653002e MacAddress:c2:e8:f8:2a:91:df Speed:10000 Mtu:8900} {Name:7375d00faec570b MacAddress:66:63:1b:21:76:59 Speed:10000 Mtu:8900} {Name:7b07e88ac1eb70e MacAddress:56:39:e8:6f:8e:d7 Speed:10000 Mtu:8900} {Name:7b6fb81fa9b3775 MacAddress:12:3a:b1:0f:ca:b2 Speed:10000 Mtu:8900} {Name:7d31e16adf7f10c MacAddress:de:36:b1:a7:dd:ed Speed:10000 Mtu:8900} {Name:7d69a2aa0453ffd MacAddress:6a:57:7c:42:a6:33 Speed:10000 Mtu:8900} {Name:818594107c19b88 MacAddress:76:50:f3:70:4b:2e Speed:10000 Mtu:8900} {Name:837527d2f9f7319 MacAddress:32:24:7e:2a:79:57 Speed:10000 Mtu:8900} {Name:88d505327814e64 MacAddress:0a:6a:46:a6:4c:a2 Speed:10000 Mtu:8900} {Name:8f214df22b3108e MacAddress:ca:94:67:10:32:e5 Speed:10000 Mtu:8900} {Name:91da701859683e0 MacAddress:82:1a:05:59:d2:cf Speed:10000 Mtu:8900} {Name:95171c03fc7a28c MacAddress:4a:d5:36:2d:a4:93 Speed:10000 Mtu:8900} {Name:a058ca3e613163c MacAddress:e2:d5:2d:04:db:05 Speed:10000 Mtu:8900} {Name:a9d070f228bb3ad MacAddress:de:a0:42:25:72:81 Speed:10000 Mtu:8900} {Name:b273b68e51f7dad MacAddress:8e:c0:f2:ba:d2:ee Speed:10000 Mtu:8900} {Name:b42865dcd2dae3a MacAddress:7e:5e:62:83:dc:bc Speed:10000 Mtu:8900} {Name:br-ex MacAddress:fa:16:9e:81:f6:10 Speed:0 Mtu:9000} {Name:br-int 
MacAddress:ea:35:c0:05:b7:9e Speed:0 Mtu:8900} {Name:c1eb0a6c1ab1725 MacAddress:7e:64:9e:28:8f:de Speed:10000 Mtu:8900} {Name:c28524ce9ebb8a8 MacAddress:c6:a4:8f:4a:4f:7f Speed:10000 Mtu:8900} {Name:c62bfe26cbaa5af MacAddress:6e:37:cf:73:fd:ca Speed:10000 Mtu:8900} {Name:c6f3ba629d26f9c MacAddress:8a:c8:2a:29:a7:27 Speed:10000 Mtu:8900} {Name:d1339a30e998845 MacAddress:46:03:34:98:c0:93 Speed:10000 Mtu:8900} {Name:d3d8011493c530c MacAddress:c2:ff:24:0e:10:62 Speed:10000 Mtu:8900} {Name:d4eadecdf9a3a2b MacAddress:2e:a5:a9:3a:1a:3c Speed:10000 Mtu:8900} {Name:d52b6a2cf90645c MacAddress:3e:2a:40:94:36:ac Speed:10000 Mtu:8900} {Name:d9f6591fd179f08 MacAddress:2a:a4:2e:4f:91:e8 Speed:10000 Mtu:8900} {Name:dea41e38002f15e MacAddress:aa:60:50:bf:12:e2 Speed:10000 Mtu:8900} {Name:e8459c0c82ddc5a MacAddress:7a:00:c7:88:a1:b8 Speed:10000 Mtu:8900} {Name:ea77244427e21f1 MacAddress:f2:ce:d5:8a:1e:46 Speed:10000 Mtu:8900} {Name:ea87280c188a798 MacAddress:b6:1a:af:8a:19:3d Speed:10000 Mtu:8900} {Name:eth0 MacAddress:fa:16:9e:81:f6:10 Speed:-1 Mtu:9000} {Name:eth1 MacAddress:fa:16:3e:cd:49:09 Speed:-1 Mtu:9000} {Name:f1fbd15a6f55efb MacAddress:2a:63:41:85:8e:2a Speed:10000 Mtu:8900} {Name:f3e26fe3d2ca6df MacAddress:52:e0:f3:69:c3:f7 Speed:10000 Mtu:8900} {Name:fb93ae4071b1469 MacAddress:76:74:d9:98:4c:55 Speed:10000 Mtu:8900} {Name:fef2da050284c5b MacAddress:9e:bb:19:48:d5:92 Speed:10000 Mtu:8900} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:80:00:02 Speed:0 Mtu:8900} {Name:ovs-system MacAddress:76:d1:4e:31:92:01 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654128640 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 
Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} 
{Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Mar 18 09:04:07.156730 master-0 kubenswrapper[28766]: I0318 09:04:07.155944 28766 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. Mar 18 09:04:07.156730 master-0 kubenswrapper[28766]: I0318 09:04:07.156033 28766 manager.go:233] Version: {KernelVersion:5.14.0-427.113.1.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202603021444-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Mar 18 09:04:07.156730 master-0 kubenswrapper[28766]: I0318 09:04:07.156562 28766 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Mar 18 09:04:07.157196 master-0 kubenswrapper[28766]: I0318 09:04:07.156793 28766 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 18 09:04:07.157196 master-0 kubenswrapper[28766]: I0318 09:04:07.156840 28766 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"master-0","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"500m","ephemeral-storage":"1Gi","memory":"1Gi"},"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 18 09:04:07.157196 master-0 kubenswrapper[28766]: I0318 09:04:07.157154 28766 topology_manager.go:138] "Creating topology manager with none policy" Mar 18 09:04:07.157196 master-0 kubenswrapper[28766]: I0318 09:04:07.157166 28766 container_manager_linux.go:303] "Creating device plugin manager" Mar 18 09:04:07.157196 master-0 kubenswrapper[28766]: I0318 09:04:07.157178 28766 
manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Mar 18 09:04:07.157434 master-0 kubenswrapper[28766]: I0318 09:04:07.157208 28766 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Mar 18 09:04:07.157434 master-0 kubenswrapper[28766]: I0318 09:04:07.157256 28766 state_mem.go:36] "Initialized new in-memory state store" Mar 18 09:04:07.157434 master-0 kubenswrapper[28766]: I0318 09:04:07.157366 28766 server.go:1245] "Using root directory" path="/var/lib/kubelet" Mar 18 09:04:07.157656 master-0 kubenswrapper[28766]: I0318 09:04:07.157450 28766 kubelet.go:418] "Attempting to sync node with API server" Mar 18 09:04:07.157656 master-0 kubenswrapper[28766]: I0318 09:04:07.157470 28766 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 18 09:04:07.157656 master-0 kubenswrapper[28766]: I0318 09:04:07.157493 28766 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Mar 18 09:04:07.157656 master-0 kubenswrapper[28766]: I0318 09:04:07.157512 28766 kubelet.go:324] "Adding apiserver pod source" Mar 18 09:04:07.157656 master-0 kubenswrapper[28766]: I0318 09:04:07.157539 28766 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 18 09:04:07.162513 master-0 kubenswrapper[28766]: W0318 09:04:07.162284 28766 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.sno.openstack.lab:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 18 09:04:07.162740 master-0 kubenswrapper[28766]: E0318 09:04:07.162537 28766 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get 
\"https://api-int.sno.openstack.lab:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Mar 18 09:04:07.163062 master-0 kubenswrapper[28766]: I0318 09:04:07.162973 28766 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.13-8.rhaos4.18.gitd78977c.el9" apiVersion="v1" Mar 18 09:04:07.163139 master-0 kubenswrapper[28766]: W0318 09:04:07.162954 28766 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.sno.openstack.lab:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster-0&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 18 09:04:07.163768 master-0 kubenswrapper[28766]: E0318 09:04:07.163695 28766 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster-0&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Mar 18 09:04:07.163945 master-0 kubenswrapper[28766]: I0318 09:04:07.163884 28766 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem". 
Mar 18 09:04:07.164362 master-0 kubenswrapper[28766]: I0318 09:04:07.164322 28766 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Mar 18 09:04:07.164563 master-0 kubenswrapper[28766]: I0318 09:04:07.164523 28766 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Mar 18 09:04:07.164563 master-0 kubenswrapper[28766]: I0318 09:04:07.164560 28766 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Mar 18 09:04:07.164720 master-0 kubenswrapper[28766]: I0318 09:04:07.164598 28766 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Mar 18 09:04:07.164720 master-0 kubenswrapper[28766]: I0318 09:04:07.164613 28766 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Mar 18 09:04:07.164720 master-0 kubenswrapper[28766]: I0318 09:04:07.164626 28766 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Mar 18 09:04:07.164720 master-0 kubenswrapper[28766]: I0318 09:04:07.164639 28766 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Mar 18 09:04:07.164720 master-0 kubenswrapper[28766]: I0318 09:04:07.164653 28766 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Mar 18 09:04:07.164720 master-0 kubenswrapper[28766]: I0318 09:04:07.164665 28766 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Mar 18 09:04:07.165238 master-0 kubenswrapper[28766]: I0318 09:04:07.164734 28766 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Mar 18 09:04:07.165238 master-0 kubenswrapper[28766]: I0318 09:04:07.164751 28766 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Mar 18 09:04:07.165238 master-0 kubenswrapper[28766]: I0318 09:04:07.164773 28766 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Mar 18 09:04:07.165238 master-0 kubenswrapper[28766]: I0318 09:04:07.164796 28766 
plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Mar 18 09:04:07.165238 master-0 kubenswrapper[28766]: I0318 09:04:07.164879 28766 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi" Mar 18 09:04:07.166199 master-0 kubenswrapper[28766]: I0318 09:04:07.165681 28766 server.go:1280] "Started kubelet" Mar 18 09:04:07.166199 master-0 kubenswrapper[28766]: I0318 09:04:07.165765 28766 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Mar 18 09:04:07.166199 master-0 kubenswrapper[28766]: I0318 09:04:07.165958 28766 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 18 09:04:07.166391 master-0 kubenswrapper[28766]: I0318 09:04:07.166308 28766 server_v1.go:47] "podresources" method="list" useActivePods=true Mar 18 09:04:07.166459 master-0 kubenswrapper[28766]: I0318 09:04:07.166388 28766 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 18 09:04:07.167090 master-0 systemd[1]: Started Kubernetes Kubelet. 
Mar 18 09:04:07.168025 master-0 kubenswrapper[28766]: E0318 09:04:07.166973 28766 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/default/events\": dial tcp 192.168.32.10:6443: connect: connection refused" event="&Event{ObjectMeta:{master-0.189de41e5423e52b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 09:04:07.165633835 +0000 UTC m=+0.179892511,LastTimestamp:2026-03-18 09:04:07.165633835 +0000 UTC m=+0.179892511,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 09:04:07.169537 master-0 kubenswrapper[28766]: I0318 09:04:07.169487 28766 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 18 09:04:07.170469 master-0 kubenswrapper[28766]: I0318 09:04:07.170432 28766 server.go:449] "Adding debug handlers to kubelet server" Mar 18 09:04:07.183410 master-0 kubenswrapper[28766]: I0318 09:04:07.183237 28766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Mar 18 09:04:07.183410 master-0 kubenswrapper[28766]: I0318 09:04:07.183319 28766 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 18 09:04:07.184551 master-0 kubenswrapper[28766]: I0318 09:04:07.183714 28766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-03-19 08:38:09 +0000 UTC, rotation deadline is 2026-03-19 05:02:58.016919994 +0000 UTC Mar 18 09:04:07.184551 master-0 kubenswrapper[28766]: I0318 09:04:07.183793 28766 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 19h58m50.833130585s for next 
certificate rotation Mar 18 09:04:07.184946 master-0 kubenswrapper[28766]: I0318 09:04:07.184763 28766 volume_manager.go:287] "The desired_state_of_world populator starts" Mar 18 09:04:07.184946 master-0 kubenswrapper[28766]: I0318 09:04:07.184795 28766 volume_manager.go:289] "Starting Kubelet Volume Manager" Mar 18 09:04:07.184946 master-0 kubenswrapper[28766]: I0318 09:04:07.184838 28766 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Mar 18 09:04:07.184946 master-0 kubenswrapper[28766]: E0318 09:04:07.184878 28766 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 18 09:04:07.185709 master-0 kubenswrapper[28766]: I0318 09:04:07.185675 28766 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Mar 18 09:04:07.185709 master-0 kubenswrapper[28766]: I0318 09:04:07.185709 28766 factory.go:55] Registering systemd factory Mar 18 09:04:07.185881 master-0 kubenswrapper[28766]: I0318 09:04:07.185722 28766 factory.go:221] Registration of the systemd container factory successfully Mar 18 09:04:07.186368 master-0 kubenswrapper[28766]: W0318 09:04:07.186159 28766 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 18 09:04:07.186368 master-0 kubenswrapper[28766]: E0318 09:04:07.186235 28766 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" 
logger="UnhandledError" Mar 18 09:04:07.186368 master-0 kubenswrapper[28766]: I0318 09:04:07.186277 28766 factory.go:153] Registering CRI-O factory Mar 18 09:04:07.186368 master-0 kubenswrapper[28766]: I0318 09:04:07.186351 28766 factory.go:221] Registration of the crio container factory successfully Mar 18 09:04:07.186629 master-0 kubenswrapper[28766]: I0318 09:04:07.186390 28766 factory.go:103] Registering Raw factory Mar 18 09:04:07.186629 master-0 kubenswrapper[28766]: I0318 09:04:07.186516 28766 manager.go:1196] Started watching for new ooms in manager Mar 18 09:04:07.187953 master-0 kubenswrapper[28766]: I0318 09:04:07.187888 28766 manager.go:319] Starting recovery of all containers Mar 18 09:04:07.190902 master-0 kubenswrapper[28766]: E0318 09:04:07.190787 28766 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="200ms" Mar 18 09:04:07.191564 master-0 kubenswrapper[28766]: E0318 09:04:07.191481 28766 kubelet.go:1495] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="failed to get imageFs info: unable to find data in memory cache" Mar 18 09:04:07.213667 master-0 kubenswrapper[28766]: I0318 09:04:07.213561 28766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d71aa1b9-6eb5-4331-b959-8930e10817b4" volumeName="kubernetes.io/projected/d71aa1b9-6eb5-4331-b959-8930e10817b4-kube-api-access-x5q4t" seLinuxMountContext="" Mar 18 09:04:07.213667 master-0 kubenswrapper[28766]: I0318 09:04:07.213655 28766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e0bb044f-5a4e-4981-8084-91348ce1a56a" volumeName="kubernetes.io/projected/e0bb044f-5a4e-4981-8084-91348ce1a56a-kube-api-access-ks4jl" seLinuxMountContext="" Mar 18 09:04:07.213667 master-0 kubenswrapper[28766]: I0318 09:04:07.213676 28766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43fbd379-dd1e-4287-bd76-fd3ec51cde43" volumeName="kubernetes.io/projected/43fbd379-dd1e-4287-bd76-fd3ec51cde43-kube-api-access-c52pj" seLinuxMountContext="" Mar 18 09:04:07.214084 master-0 kubenswrapper[28766]: I0318 09:04:07.213693 28766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5320a1da-262a-4b1b-93b4-1df9d4c26eec" volumeName="kubernetes.io/secret/5320a1da-262a-4b1b-93b4-1df9d4c26eec-secret-metrics-client-certs" seLinuxMountContext="" Mar 18 09:04:07.214084 master-0 kubenswrapper[28766]: I0318 09:04:07.213714 28766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7962fb40-1170-4c00-b1bf-92966aeae807" volumeName="kubernetes.io/projected/7962fb40-1170-4c00-b1bf-92966aeae807-bound-sa-token" seLinuxMountContext="" Mar 18 09:04:07.214084 master-0 kubenswrapper[28766]: I0318 09:04:07.213728 28766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="b5f9f50b-e7b4-4b81-864b-349303f21447" volumeName="kubernetes.io/configmap/b5f9f50b-e7b4-4b81-864b-349303f21447-trusted-ca-bundle" seLinuxMountContext="" Mar 18 09:04:07.214084 master-0 kubenswrapper[28766]: I0318 09:04:07.213754 28766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd8ee9ae-4765-46bc-84d7-b5857fc3fb4a" volumeName="kubernetes.io/configmap/bd8ee9ae-4765-46bc-84d7-b5857fc3fb4a-trusted-ca" seLinuxMountContext="" Mar 18 09:04:07.214084 master-0 kubenswrapper[28766]: I0318 09:04:07.213777 28766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c110b293-2c6b-496b-b015-23aada98cb4b" volumeName="kubernetes.io/configmap/c110b293-2c6b-496b-b015-23aada98cb4b-service-ca-bundle" seLinuxMountContext="" Mar 18 09:04:07.214084 master-0 kubenswrapper[28766]: I0318 09:04:07.213829 28766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="495e0cff-fca8-4dad-9247-2fc0e7ce86fc" volumeName="kubernetes.io/secret/495e0cff-fca8-4dad-9247-2fc0e7ce86fc-machine-approver-tls" seLinuxMountContext="" Mar 18 09:04:07.214084 master-0 kubenswrapper[28766]: I0318 09:04:07.213923 28766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8a6ab2be-d018-4fd5-bfbb-6b88aec28663" volumeName="kubernetes.io/configmap/8a6ab2be-d018-4fd5-bfbb-6b88aec28663-config" seLinuxMountContext="" Mar 18 09:04:07.214084 master-0 kubenswrapper[28766]: I0318 09:04:07.213947 28766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="34f2ec5e-cb68-415b-a9f2-5b7f10fa9bbe" volumeName="kubernetes.io/secret/34f2ec5e-cb68-415b-a9f2-5b7f10fa9bbe-marketplace-operator-metrics" seLinuxMountContext="" Mar 18 09:04:07.214084 master-0 kubenswrapper[28766]: I0318 09:04:07.213972 28766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="d6fe8ee6-737e-438a-8d9d-1ec712f6bacf" volumeName="kubernetes.io/projected/d6fe8ee6-737e-438a-8d9d-1ec712f6bacf-kube-api-access-czm78" seLinuxMountContext="" Mar 18 09:04:07.214084 master-0 kubenswrapper[28766]: I0318 09:04:07.213991 28766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a268d595-18c2-43a2-8ed5-eb64c76c490f" volumeName="kubernetes.io/empty-dir/a268d595-18c2-43a2-8ed5-eb64c76c490f-utilities" seLinuxMountContext="" Mar 18 09:04:07.214084 master-0 kubenswrapper[28766]: I0318 09:04:07.214015 28766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f826efe0-60a1-4465-b8d0-d4069ed507a1" volumeName="kubernetes.io/projected/f826efe0-60a1-4465-b8d0-d4069ed507a1-kube-api-access-6bzxp" seLinuxMountContext="" Mar 18 09:04:07.214084 master-0 kubenswrapper[28766]: I0318 09:04:07.214031 28766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="260c8aa5-a288-4ee8-b671-f97e90a2f39c" volumeName="kubernetes.io/secret/260c8aa5-a288-4ee8-b671-f97e90a2f39c-serving-cert" seLinuxMountContext="" Mar 18 09:04:07.214084 master-0 kubenswrapper[28766]: I0318 09:04:07.214062 28766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="97730ec2-e6f1-4f8c-b85c-3c10623d06ce" volumeName="kubernetes.io/projected/97730ec2-e6f1-4f8c-b85c-3c10623d06ce-kube-api-access-zj9rk" seLinuxMountContext="" Mar 18 09:04:07.214084 master-0 kubenswrapper[28766]: I0318 09:04:07.214085 28766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="34f2ec5e-cb68-415b-a9f2-5b7f10fa9bbe" volumeName="kubernetes.io/configmap/34f2ec5e-cb68-415b-a9f2-5b7f10fa9bbe-marketplace-trusted-ca" seLinuxMountContext="" Mar 18 09:04:07.214084 master-0 kubenswrapper[28766]: I0318 09:04:07.214105 28766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual 
state" pod="" podName="97730ec2-e6f1-4f8c-b85c-3c10623d06ce" volumeName="kubernetes.io/configmap/97730ec2-e6f1-4f8c-b85c-3c10623d06ce-config" seLinuxMountContext="" Mar 18 09:04:07.214693 master-0 kubenswrapper[28766]: I0318 09:04:07.214129 28766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31a92270-efed-44fe-871e-90333235e85f" volumeName="kubernetes.io/configmap/31a92270-efed-44fe-871e-90333235e85f-trusted-ca-bundle" seLinuxMountContext="" Mar 18 09:04:07.214693 master-0 kubenswrapper[28766]: I0318 09:04:07.214146 28766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31a92270-efed-44fe-871e-90333235e85f" volumeName="kubernetes.io/projected/31a92270-efed-44fe-871e-90333235e85f-kube-api-access-8zhfh" seLinuxMountContext="" Mar 18 09:04:07.214693 master-0 kubenswrapper[28766]: I0318 09:04:07.214166 28766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="92542f7c-182b-45a8-bbf3-00e99ba7acee" volumeName="kubernetes.io/empty-dir/92542f7c-182b-45a8-bbf3-00e99ba7acee-utilities" seLinuxMountContext="" Mar 18 09:04:07.214693 master-0 kubenswrapper[28766]: I0318 09:04:07.214189 28766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e025d334-20e7-491f-8027-194251398747" volumeName="kubernetes.io/projected/e025d334-20e7-491f-8027-194251398747-kube-api-access-bfzdk" seLinuxMountContext="" Mar 18 09:04:07.214693 master-0 kubenswrapper[28766]: I0318 09:04:07.214222 28766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f65344cd-8571-4a78-927f-eec46ec1af51" volumeName="kubernetes.io/empty-dir/f65344cd-8571-4a78-927f-eec46ec1af51-utilities" seLinuxMountContext="" Mar 18 09:04:07.214693 master-0 kubenswrapper[28766]: I0318 09:04:07.214245 28766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" 
pod="" podName="f826efe0-60a1-4465-b8d0-d4069ed507a1" volumeName="kubernetes.io/empty-dir/f826efe0-60a1-4465-b8d0-d4069ed507a1-etc-tuned" seLinuxMountContext="" Mar 18 09:04:07.214693 master-0 kubenswrapper[28766]: I0318 09:04:07.214261 28766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fa8f1797-0219-49fe-82b5-7416cc481c3a" volumeName="kubernetes.io/configmap/fa8f1797-0219-49fe-82b5-7416cc481c3a-signing-cabundle" seLinuxMountContext="" Mar 18 09:04:07.214693 master-0 kubenswrapper[28766]: I0318 09:04:07.214292 28766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="40f3b7a4-107c-4f1d-a3ab-b5d2309c373b" volumeName="kubernetes.io/configmap/40f3b7a4-107c-4f1d-a3ab-b5d2309c373b-images" seLinuxMountContext="" Mar 18 09:04:07.214693 master-0 kubenswrapper[28766]: I0318 09:04:07.214313 28766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="40f3b7a4-107c-4f1d-a3ab-b5d2309c373b" volumeName="kubernetes.io/projected/40f3b7a4-107c-4f1d-a3ab-b5d2309c373b-kube-api-access-jnspk" seLinuxMountContext="" Mar 18 09:04:07.214693 master-0 kubenswrapper[28766]: I0318 09:04:07.214335 28766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3d9fe248-ba87-47e3-911a-1b2b112b5683" volumeName="kubernetes.io/projected/3d9fe248-ba87-47e3-911a-1b2b112b5683-kube-api-access-4hn9w" seLinuxMountContext="" Mar 18 09:04:07.214693 master-0 kubenswrapper[28766]: I0318 09:04:07.214366 28766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4146a62d-e37b-4295-90ca-b23f5e3d1112" volumeName="kubernetes.io/secret/4146a62d-e37b-4295-90ca-b23f5e3d1112-node-exporter-kube-rbac-proxy-config" seLinuxMountContext="" Mar 18 09:04:07.214693 master-0 kubenswrapper[28766]: I0318 09:04:07.214397 28766 reconstruct.go:130] "Volume is marked as uncertain and added into the 
actual state" pod="" podName="f650e6f0-fb74-4083-a7a9-fa4df513108f" volumeName="kubernetes.io/projected/f650e6f0-fb74-4083-a7a9-fa4df513108f-kube-api-access-tsc6v" seLinuxMountContext="" Mar 18 09:04:07.214693 master-0 kubenswrapper[28766]: I0318 09:04:07.214415 28766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="18921497-d8ed-42d8-bf3c-a027566ebe85" volumeName="kubernetes.io/secret/18921497-d8ed-42d8-bf3c-a027566ebe85-samples-operator-tls" seLinuxMountContext="" Mar 18 09:04:07.214693 master-0 kubenswrapper[28766]: I0318 09:04:07.214433 28766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="34f2ec5e-cb68-415b-a9f2-5b7f10fa9bbe" volumeName="kubernetes.io/projected/34f2ec5e-cb68-415b-a9f2-5b7f10fa9bbe-kube-api-access-2msp8" seLinuxMountContext="" Mar 18 09:04:07.214693 master-0 kubenswrapper[28766]: I0318 09:04:07.214450 28766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="939efa41-8f40-4f91-bee4-0425aead9760" volumeName="kubernetes.io/configmap/939efa41-8f40-4f91-bee4-0425aead9760-config" seLinuxMountContext="" Mar 18 09:04:07.214693 master-0 kubenswrapper[28766]: I0318 09:04:07.214474 28766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="998cabe9-d479-439f-b1c0-1d8c49aefeb9" volumeName="kubernetes.io/secret/998cabe9-d479-439f-b1c0-1d8c49aefeb9-tls-certificates" seLinuxMountContext="" Mar 18 09:04:07.214693 master-0 kubenswrapper[28766]: I0318 09:04:07.214494 28766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="336e741d-ac9a-4b94-9fbb-c9010e37c2d0" volumeName="kubernetes.io/projected/336e741d-ac9a-4b94-9fbb-c9010e37c2d0-kube-api-access-hbsfs" seLinuxMountContext="" Mar 18 09:04:07.214693 master-0 kubenswrapper[28766]: I0318 09:04:07.214517 28766 reconstruct.go:130] "Volume is marked as uncertain and added 
into the actual state" pod="" podName="4146a62d-e37b-4295-90ca-b23f5e3d1112" volumeName="kubernetes.io/secret/4146a62d-e37b-4295-90ca-b23f5e3d1112-node-exporter-tls" seLinuxMountContext="" Mar 18 09:04:07.214693 master-0 kubenswrapper[28766]: I0318 09:04:07.214551 28766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="33a5c021-23c3-4a97-b5f3-77fd6dcba1ab" volumeName="kubernetes.io/empty-dir/33a5c021-23c3-4a97-b5f3-77fd6dcba1ab-cache" seLinuxMountContext="" Mar 18 09:04:07.214693 master-0 kubenswrapper[28766]: I0318 09:04:07.214567 28766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3e96b35f-c57a-4e01-82f7-894ea16ac5b8" volumeName="kubernetes.io/secret/3e96b35f-c57a-4e01-82f7-894ea16ac5b8-certs" seLinuxMountContext="" Mar 18 09:04:07.214693 master-0 kubenswrapper[28766]: I0318 09:04:07.214581 28766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b0280499-8277-46f0-bd8c-058a47a99e19" volumeName="kubernetes.io/configmap/b0280499-8277-46f0-bd8c-058a47a99e19-config" seLinuxMountContext="" Mar 18 09:04:07.214693 master-0 kubenswrapper[28766]: I0318 09:04:07.214598 28766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ccf74af5-d4fd-4ed3-9784-42397ea798c5" volumeName="kubernetes.io/projected/ccf74af5-d4fd-4ed3-9784-42397ea798c5-kube-api-access-p9qkd" seLinuxMountContext="" Mar 18 09:04:07.214693 master-0 kubenswrapper[28766]: I0318 09:04:07.214633 28766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="16d633c5-e0aa-4fb6-83e0-a2e976334406" volumeName="kubernetes.io/secret/16d633c5-e0aa-4fb6-83e0-a2e976334406-webhook-cert" seLinuxMountContext="" Mar 18 09:04:07.214693 master-0 kubenswrapper[28766]: I0318 09:04:07.214657 28766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="1794b726-5c0d-4a72-8ddd-418a2cbd8ded" volumeName="kubernetes.io/empty-dir/1794b726-5c0d-4a72-8ddd-418a2cbd8ded-tmpfs" seLinuxMountContext="" Mar 18 09:04:07.214693 master-0 kubenswrapper[28766]: I0318 09:04:07.214672 28766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5320a1da-262a-4b1b-93b4-1df9d4c26eec" volumeName="kubernetes.io/projected/5320a1da-262a-4b1b-93b4-1df9d4c26eec-kube-api-access-9q8l2" seLinuxMountContext="" Mar 18 09:04:07.214693 master-0 kubenswrapper[28766]: I0318 09:04:07.214685 28766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a7dab805-612b-404c-ab97-8cee927169db" volumeName="kubernetes.io/projected/a7dab805-612b-404c-ab97-8cee927169db-kube-api-access-pjrfz" seLinuxMountContext="" Mar 18 09:04:07.214693 master-0 kubenswrapper[28766]: I0318 09:04:07.214702 28766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b5f9f50b-e7b4-4b81-864b-349303f21447" volumeName="kubernetes.io/secret/b5f9f50b-e7b4-4b81-864b-349303f21447-etcd-client" seLinuxMountContext="" Mar 18 09:04:07.214693 master-0 kubenswrapper[28766]: I0318 09:04:07.214729 28766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7b72267-fc08-41ed-a92b-9fca7372aba6" volumeName="kubernetes.io/projected/e7b72267-fc08-41ed-a92b-9fca7372aba6-kube-api-access-dwrdc" seLinuxMountContext="" Mar 18 09:04:07.216027 master-0 kubenswrapper[28766]: I0318 09:04:07.214746 28766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ec11012b-536a-422f-afc4-d2d0fd4b67fb" volumeName="kubernetes.io/secret/ec11012b-536a-422f-afc4-d2d0fd4b67fb-serving-cert" seLinuxMountContext="" Mar 18 09:04:07.216027 master-0 kubenswrapper[28766]: I0318 09:04:07.214767 28766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="336e741d-ac9a-4b94-9fbb-c9010e37c2d0" volumeName="kubernetes.io/secret/336e741d-ac9a-4b94-9fbb-c9010e37c2d0-proxy-tls" seLinuxMountContext="" Mar 18 09:04:07.216027 master-0 kubenswrapper[28766]: I0318 09:04:07.214780 28766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4146a62d-e37b-4295-90ca-b23f5e3d1112" volumeName="kubernetes.io/projected/4146a62d-e37b-4295-90ca-b23f5e3d1112-kube-api-access-4r7hx" seLinuxMountContext="" Mar 18 09:04:07.216027 master-0 kubenswrapper[28766]: I0318 09:04:07.214816 28766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e2ade7e6-cecd-4e98-8f85-ea8219303d75" volumeName="kubernetes.io/secret/e2ade7e6-cecd-4e98-8f85-ea8219303d75-cluster-olm-operator-serving-cert" seLinuxMountContext="" Mar 18 09:04:07.216027 master-0 kubenswrapper[28766]: I0318 09:04:07.214836 28766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ffc5379c-651f-490c-90f4-1285b9093596" volumeName="kubernetes.io/configmap/ffc5379c-651f-490c-90f4-1285b9093596-auth-proxy-config" seLinuxMountContext="" Mar 18 09:04:07.216027 master-0 kubenswrapper[28766]: I0318 09:04:07.214884 28766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31a92270-efed-44fe-871e-90333235e85f" volumeName="kubernetes.io/configmap/31a92270-efed-44fe-871e-90333235e85f-service-ca-bundle" seLinuxMountContext="" Mar 18 09:04:07.216027 master-0 kubenswrapper[28766]: I0318 09:04:07.214907 28766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8a6ab2be-d018-4fd5-bfbb-6b88aec28663" volumeName="kubernetes.io/projected/8a6ab2be-d018-4fd5-bfbb-6b88aec28663-kube-api-access" seLinuxMountContext="" Mar 18 09:04:07.216027 master-0 kubenswrapper[28766]: I0318 09:04:07.214938 28766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual 
state" pod="" podName="b9768e50-c883-47b0-b319-851fa53ac19a" volumeName="kubernetes.io/projected/b9768e50-c883-47b0-b319-851fa53ac19a-kube-api-access-bw5tw" seLinuxMountContext="" Mar 18 09:04:07.216027 master-0 kubenswrapper[28766]: I0318 09:04:07.214966 28766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e5ae1886-f90c-49f4-bf08-055b55dd785a" volumeName="kubernetes.io/configmap/e5ae1886-f90c-49f4-bf08-055b55dd785a-serving-certs-ca-bundle" seLinuxMountContext="" Mar 18 09:04:07.216027 master-0 kubenswrapper[28766]: I0318 09:04:07.214982 28766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e64ea71a-1e89-409a-9607-4d3cea093643" volumeName="kubernetes.io/secret/e64ea71a-1e89-409a-9607-4d3cea093643-cloud-credential-operator-serving-cert" seLinuxMountContext="" Mar 18 09:04:07.216027 master-0 kubenswrapper[28766]: I0318 09:04:07.215030 28766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5982111d-f4c6-4335-9b40-3142758fc2bc" volumeName="kubernetes.io/projected/5982111d-f4c6-4335-9b40-3142758fc2bc-kube-api-access" seLinuxMountContext="" Mar 18 09:04:07.216027 master-0 kubenswrapper[28766]: I0318 09:04:07.215055 28766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7962fb40-1170-4c00-b1bf-92966aeae807" volumeName="kubernetes.io/configmap/7962fb40-1170-4c00-b1bf-92966aeae807-trusted-ca" seLinuxMountContext="" Mar 18 09:04:07.216027 master-0 kubenswrapper[28766]: I0318 09:04:07.215218 28766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="edc7f629-4288-443b-aa8e-78bc6a09c848" volumeName="kubernetes.io/configmap/edc7f629-4288-443b-aa8e-78bc6a09c848-env-overrides" seLinuxMountContext="" Mar 18 09:04:07.216027 master-0 kubenswrapper[28766]: I0318 09:04:07.215259 28766 reconstruct.go:130] "Volume is marked as uncertain 
and added into the actual state" pod="" podName="260c8aa5-a288-4ee8-b671-f97e90a2f39c" volumeName="kubernetes.io/configmap/260c8aa5-a288-4ee8-b671-f97e90a2f39c-config" seLinuxMountContext="" Mar 18 09:04:07.216027 master-0 kubenswrapper[28766]: I0318 09:04:07.215331 28766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd8ee9ae-4765-46bc-84d7-b5857fc3fb4a" volumeName="kubernetes.io/projected/bd8ee9ae-4765-46bc-84d7-b5857fc3fb4a-kube-api-access-8lsw9" seLinuxMountContext="" Mar 18 09:04:07.216027 master-0 kubenswrapper[28766]: I0318 09:04:07.215366 28766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6fb1f871-9c24-48a1-a15a-a636b5bb687d" volumeName="kubernetes.io/projected/6fb1f871-9c24-48a1-a15a-a636b5bb687d-kube-api-access-wxxcn" seLinuxMountContext="" Mar 18 09:04:07.216027 master-0 kubenswrapper[28766]: I0318 09:04:07.215396 28766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="866c259c-7661-4a80-873b-6fd625218665" volumeName="kubernetes.io/projected/866c259c-7661-4a80-873b-6fd625218665-kube-api-access-ftdvp" seLinuxMountContext="" Mar 18 09:04:07.216027 master-0 kubenswrapper[28766]: I0318 09:04:07.215423 28766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="06cbd48a-1f1d-4734-8d57-e1b6824879b6" volumeName="kubernetes.io/secret/06cbd48a-1f1d-4734-8d57-e1b6824879b6-openshift-state-metrics-tls" seLinuxMountContext="" Mar 18 09:04:07.216027 master-0 kubenswrapper[28766]: I0318 09:04:07.215440 28766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="40f3b7a4-107c-4f1d-a3ab-b5d2309c373b" volumeName="kubernetes.io/configmap/40f3b7a4-107c-4f1d-a3ab-b5d2309c373b-auth-proxy-config" seLinuxMountContext="" Mar 18 09:04:07.216027 master-0 kubenswrapper[28766]: I0318 09:04:07.215457 28766 reconstruct.go:130] "Volume is 
marked as uncertain and added into the actual state" pod="" podName="d71aa1b9-6eb5-4331-b959-8930e10817b4" volumeName="kubernetes.io/configmap/d71aa1b9-6eb5-4331-b959-8930e10817b4-metrics-client-ca" seLinuxMountContext="" Mar 18 09:04:07.216027 master-0 kubenswrapper[28766]: I0318 09:04:07.215569 28766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="edc7f629-4288-443b-aa8e-78bc6a09c848" volumeName="kubernetes.io/configmap/edc7f629-4288-443b-aa8e-78bc6a09c848-ovnkube-config" seLinuxMountContext="" Mar 18 09:04:07.216027 master-0 kubenswrapper[28766]: I0318 09:04:07.215660 28766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4" volumeName="kubernetes.io/projected/fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4-kube-api-access-hpl2c" seLinuxMountContext="" Mar 18 09:04:07.216027 master-0 kubenswrapper[28766]: I0318 09:04:07.215727 28766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="16d633c5-e0aa-4fb6-83e0-a2e976334406" volumeName="kubernetes.io/configmap/16d633c5-e0aa-4fb6-83e0-a2e976334406-ovnkube-identity-cm" seLinuxMountContext="" Mar 18 09:04:07.216027 master-0 kubenswrapper[28766]: I0318 09:04:07.215808 28766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b5f9f50b-e7b4-4b81-864b-349303f21447" volumeName="kubernetes.io/configmap/b5f9f50b-e7b4-4b81-864b-349303f21447-etcd-serving-ca" seLinuxMountContext="" Mar 18 09:04:07.216027 master-0 kubenswrapper[28766]: I0318 09:04:07.215832 28766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fcf89a76-7a94-46d3-853e-68e986563764" volumeName="kubernetes.io/secret/fcf89a76-7a94-46d3-853e-68e986563764-serving-cert" seLinuxMountContext="" Mar 18 09:04:07.216027 master-0 kubenswrapper[28766]: I0318 09:04:07.215921 28766 reconstruct.go:130] "Volume 
is marked as uncertain and added into the actual state" pod="" podName="68465463-5f2a-4e74-9c34-2706a185f7ea" volumeName="kubernetes.io/projected/68465463-5f2a-4e74-9c34-2706a185f7ea-kube-api-access-gqlhh" seLinuxMountContext="" Mar 18 09:04:07.217902 master-0 kubenswrapper[28766]: I0318 09:04:07.215971 28766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b0280499-8277-46f0-bd8c-058a47a99e19" volumeName="kubernetes.io/secret/b0280499-8277-46f0-bd8c-058a47a99e19-serving-cert" seLinuxMountContext="" Mar 18 09:04:07.217902 master-0 kubenswrapper[28766]: I0318 09:04:07.217292 28766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e5ae1886-f90c-49f4-bf08-055b55dd785a" volumeName="kubernetes.io/configmap/e5ae1886-f90c-49f4-bf08-055b55dd785a-telemeter-trusted-ca-bundle" seLinuxMountContext="" Mar 18 09:04:07.217902 master-0 kubenswrapper[28766]: I0318 09:04:07.217314 28766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2700f537-8f31-4380-a527-3e697a8122cc" volumeName="kubernetes.io/configmap/2700f537-8f31-4380-a527-3e697a8122cc-etcd-serving-ca" seLinuxMountContext="" Mar 18 09:04:07.217902 master-0 kubenswrapper[28766]: I0318 09:04:07.217328 28766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5320a1da-262a-4b1b-93b4-1df9d4c26eec" volumeName="kubernetes.io/configmap/5320a1da-262a-4b1b-93b4-1df9d4c26eec-configmap-kubelet-serving-ca-bundle" seLinuxMountContext="" Mar 18 09:04:07.217902 master-0 kubenswrapper[28766]: I0318 09:04:07.217345 28766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e0bb044f-5a4e-4981-8084-91348ce1a56a" volumeName="kubernetes.io/secret/e0bb044f-5a4e-4981-8084-91348ce1a56a-webhook-certs" seLinuxMountContext="" Mar 18 09:04:07.217902 master-0 kubenswrapper[28766]: I0318 09:04:07.217408 28766 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f9fa104a-4979-4023-8d7e-a965f11bc7db" volumeName="kubernetes.io/projected/f9fa104a-4979-4023-8d7e-a965f11bc7db-kube-api-access-jlwg9" seLinuxMountContext="" Mar 18 09:04:07.217902 master-0 kubenswrapper[28766]: I0318 09:04:07.217423 28766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2700f537-8f31-4380-a527-3e697a8122cc" volumeName="kubernetes.io/projected/2700f537-8f31-4380-a527-3e697a8122cc-kube-api-access-dqldd" seLinuxMountContext="" Mar 18 09:04:07.217902 master-0 kubenswrapper[28766]: I0318 09:04:07.217439 28766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b5f9f50b-e7b4-4b81-864b-349303f21447" volumeName="kubernetes.io/secret/b5f9f50b-e7b4-4b81-864b-349303f21447-serving-cert" seLinuxMountContext="" Mar 18 09:04:07.217902 master-0 kubenswrapper[28766]: I0318 09:04:07.217457 28766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4cc14de2-59e8-49e1-9eeb-f87c5e9d8a75" volumeName="kubernetes.io/configmap/4cc14de2-59e8-49e1-9eeb-f87c5e9d8a75-config" seLinuxMountContext="" Mar 18 09:04:07.217902 master-0 kubenswrapper[28766]: I0318 09:04:07.217472 28766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b5f9f50b-e7b4-4b81-864b-349303f21447" volumeName="kubernetes.io/secret/b5f9f50b-e7b4-4b81-864b-349303f21447-encryption-config" seLinuxMountContext="" Mar 18 09:04:07.217902 master-0 kubenswrapper[28766]: I0318 09:04:07.217488 28766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ec11012b-536a-422f-afc4-d2d0fd4b67fb" volumeName="kubernetes.io/configmap/ec11012b-536a-422f-afc4-d2d0fd4b67fb-config" seLinuxMountContext="" Mar 18 09:04:07.217902 master-0 kubenswrapper[28766]: I0318 09:04:07.217502 28766 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2700f537-8f31-4380-a527-3e697a8122cc" volumeName="kubernetes.io/secret/2700f537-8f31-4380-a527-3e697a8122cc-etcd-client" seLinuxMountContext="" Mar 18 09:04:07.217902 master-0 kubenswrapper[28766]: I0318 09:04:07.217518 28766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3d9fe248-ba87-47e3-911a-1b2b112b5683" volumeName="kubernetes.io/secret/3d9fe248-ba87-47e3-911a-1b2b112b5683-srv-cert" seLinuxMountContext="" Mar 18 09:04:07.217902 master-0 kubenswrapper[28766]: I0318 09:04:07.217533 28766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="939efa41-8f40-4f91-bee4-0425aead9760" volumeName="kubernetes.io/configmap/939efa41-8f40-4f91-bee4-0425aead9760-etcd-service-ca" seLinuxMountContext="" Mar 18 09:04:07.217902 master-0 kubenswrapper[28766]: I0318 09:04:07.217550 28766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a7dab805-612b-404c-ab97-8cee927169db" volumeName="kubernetes.io/secret/a7dab805-612b-404c-ab97-8cee927169db-proxy-tls" seLinuxMountContext="" Mar 18 09:04:07.217902 master-0 kubenswrapper[28766]: I0318 09:04:07.217621 28766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7b72267-fc08-41ed-a92b-9fca7372aba6" volumeName="kubernetes.io/configmap/e7b72267-fc08-41ed-a92b-9fca7372aba6-telemetry-config" seLinuxMountContext="" Mar 18 09:04:07.217902 master-0 kubenswrapper[28766]: I0318 09:04:07.217640 28766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="edc7f629-4288-443b-aa8e-78bc6a09c848" volumeName="kubernetes.io/projected/edc7f629-4288-443b-aa8e-78bc6a09c848-kube-api-access-glt6c" seLinuxMountContext="" Mar 18 09:04:07.217902 master-0 kubenswrapper[28766]: I0318 09:04:07.217662 28766 reconstruct.go:130] 
"Volume is marked as uncertain and added into the actual state" pod="" podName="07a4fd92-0fd1-4688-b2db-de615d75971e" volumeName="kubernetes.io/projected/07a4fd92-0fd1-4688-b2db-de615d75971e-kube-api-access-5ngk7" seLinuxMountContext="" Mar 18 09:04:07.217902 master-0 kubenswrapper[28766]: I0318 09:04:07.217681 28766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2700f537-8f31-4380-a527-3e697a8122cc" volumeName="kubernetes.io/configmap/2700f537-8f31-4380-a527-3e697a8122cc-audit-policies" seLinuxMountContext="" Mar 18 09:04:07.217902 master-0 kubenswrapper[28766]: I0318 09:04:07.217704 28766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="94e9e4fc-bfaa-4ba5-ad6d-fe76b91932e9" volumeName="kubernetes.io/configmap/94e9e4fc-bfaa-4ba5-ad6d-fe76b91932e9-trusted-ca" seLinuxMountContext="" Mar 18 09:04:07.217902 master-0 kubenswrapper[28766]: I0318 09:04:07.217727 28766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b065df33-7911-456e-b3a2-1f8c8d53e053" volumeName="kubernetes.io/secret/b065df33-7911-456e-b3a2-1f8c8d53e053-srv-cert" seLinuxMountContext="" Mar 18 09:04:07.217902 master-0 kubenswrapper[28766]: I0318 09:04:07.217748 28766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f9fa104a-4979-4023-8d7e-a965f11bc7db" volumeName="kubernetes.io/configmap/f9fa104a-4979-4023-8d7e-a965f11bc7db-cni-binary-copy" seLinuxMountContext="" Mar 18 09:04:07.217902 master-0 kubenswrapper[28766]: I0318 09:04:07.217768 28766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3e96b35f-c57a-4e01-82f7-894ea16ac5b8" volumeName="kubernetes.io/projected/3e96b35f-c57a-4e01-82f7-894ea16ac5b8-kube-api-access-rgs9m" seLinuxMountContext="" Mar 18 09:04:07.217902 master-0 kubenswrapper[28766]: I0318 09:04:07.217789 28766 reconstruct.go:130] 
"Volume is marked as uncertain and added into the actual state" pod="" podName="91a6fa86-8c58-43bc-a2d4-2b20901269f7" volumeName="kubernetes.io/projected/91a6fa86-8c58-43bc-a2d4-2b20901269f7-kube-api-access-rpxfc" seLinuxMountContext="" Mar 18 09:04:07.217902 master-0 kubenswrapper[28766]: I0318 09:04:07.217810 28766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a268d595-18c2-43a2-8ed5-eb64c76c490f" volumeName="kubernetes.io/projected/a268d595-18c2-43a2-8ed5-eb64c76c490f-kube-api-access-hfzdp" seLinuxMountContext="" Mar 18 09:04:07.217902 master-0 kubenswrapper[28766]: I0318 09:04:07.217870 28766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b065df33-7911-456e-b3a2-1f8c8d53e053" volumeName="kubernetes.io/projected/b065df33-7911-456e-b3a2-1f8c8d53e053-kube-api-access-pz26d" seLinuxMountContext="" Mar 18 09:04:07.217902 master-0 kubenswrapper[28766]: I0318 09:04:07.217893 28766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b35ab145-16a7-4ef1-86e8-0afb6ff469fd" volumeName="kubernetes.io/projected/b35ab145-16a7-4ef1-86e8-0afb6ff469fd-kube-api-access-tp77s" seLinuxMountContext="" Mar 18 09:04:07.217902 master-0 kubenswrapper[28766]: I0318 09:04:07.217914 28766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="92542f7c-182b-45a8-bbf3-00e99ba7acee" volumeName="kubernetes.io/projected/92542f7c-182b-45a8-bbf3-00e99ba7acee-kube-api-access-4lv7n" seLinuxMountContext="" Mar 18 09:04:07.217902 master-0 kubenswrapper[28766]: I0318 09:04:07.217934 28766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="939efa41-8f40-4f91-bee4-0425aead9760" volumeName="kubernetes.io/configmap/939efa41-8f40-4f91-bee4-0425aead9760-etcd-ca" seLinuxMountContext="" Mar 18 09:04:07.220495 master-0 kubenswrapper[28766]: I0318 09:04:07.217951 28766 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e5ae1886-f90c-49f4-bf08-055b55dd785a" volumeName="kubernetes.io/secret/e5ae1886-f90c-49f4-bf08-055b55dd785a-telemeter-client-tls" seLinuxMountContext="" Mar 18 09:04:07.220495 master-0 kubenswrapper[28766]: I0318 09:04:07.217967 28766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f65344cd-8571-4a78-927f-eec46ec1af51" volumeName="kubernetes.io/empty-dir/f65344cd-8571-4a78-927f-eec46ec1af51-catalog-content" seLinuxMountContext="" Mar 18 09:04:07.220495 master-0 kubenswrapper[28766]: I0318 09:04:07.217983 28766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2700f537-8f31-4380-a527-3e697a8122cc" volumeName="kubernetes.io/secret/2700f537-8f31-4380-a527-3e697a8122cc-encryption-config" seLinuxMountContext="" Mar 18 09:04:07.220495 master-0 kubenswrapper[28766]: I0318 09:04:07.218124 28766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="939efa41-8f40-4f91-bee4-0425aead9760" volumeName="kubernetes.io/projected/939efa41-8f40-4f91-bee4-0425aead9760-kube-api-access-8w58l" seLinuxMountContext="" Mar 18 09:04:07.220495 master-0 kubenswrapper[28766]: I0318 09:04:07.218165 28766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d0272f7c-bedc-44cf-9790-88e10e6dda03" volumeName="kubernetes.io/secret/d0272f7c-bedc-44cf-9790-88e10e6dda03-cert" seLinuxMountContext="" Mar 18 09:04:07.220495 master-0 kubenswrapper[28766]: I0318 09:04:07.218198 28766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d71aa1b9-6eb5-4331-b959-8930e10817b4" volumeName="kubernetes.io/secret/d71aa1b9-6eb5-4331-b959-8930e10817b4-prometheus-operator-kube-rbac-proxy-config" seLinuxMountContext="" Mar 18 09:04:07.220495 master-0 kubenswrapper[28766]: I0318 
09:04:07.218227 28766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d858bfd6-e69b-4c93-a6d7-95cc0fc3ca29" volumeName="kubernetes.io/secret/d858bfd6-e69b-4c93-a6d7-95cc0fc3ca29-metrics-certs" seLinuxMountContext="" Mar 18 09:04:07.220495 master-0 kubenswrapper[28766]: I0318 09:04:07.218250 28766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e0d127be-2d13-449b-915b-2d49052baf02" volumeName="kubernetes.io/projected/e0d127be-2d13-449b-915b-2d49052baf02-kube-api-access" seLinuxMountContext="" Mar 18 09:04:07.220495 master-0 kubenswrapper[28766]: I0318 09:04:07.218278 28766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e2ade7e6-cecd-4e98-8f85-ea8219303d75" volumeName="kubernetes.io/empty-dir/e2ade7e6-cecd-4e98-8f85-ea8219303d75-operand-assets" seLinuxMountContext="" Mar 18 09:04:07.220495 master-0 kubenswrapper[28766]: I0318 09:04:07.218295 28766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2207df9e-f21e-4c30-98d5-248ae99c245e" volumeName="kubernetes.io/secret/2207df9e-f21e-4c30-98d5-248ae99c245e-ovn-node-metrics-cert" seLinuxMountContext="" Mar 18 09:04:07.220495 master-0 kubenswrapper[28766]: I0318 09:04:07.218313 28766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="59d50dd5-6793-4f96-a769-31e086ecc7e4" volumeName="kubernetes.io/projected/59d50dd5-6793-4f96-a769-31e086ecc7e4-kube-api-access-mlp7w" seLinuxMountContext="" Mar 18 09:04:07.220495 master-0 kubenswrapper[28766]: I0318 09:04:07.218331 28766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b5f9f50b-e7b4-4b81-864b-349303f21447" volumeName="kubernetes.io/configmap/b5f9f50b-e7b4-4b81-864b-349303f21447-audit" seLinuxMountContext="" Mar 18 09:04:07.220495 master-0 kubenswrapper[28766]: I0318 
09:04:07.218346 28766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d0272f7c-bedc-44cf-9790-88e10e6dda03" volumeName="kubernetes.io/projected/d0272f7c-bedc-44cf-9790-88e10e6dda03-kube-api-access-ttnk9" seLinuxMountContext="" Mar 18 09:04:07.220495 master-0 kubenswrapper[28766]: I0318 09:04:07.218362 28766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fa8f1797-0219-49fe-82b5-7416cc481c3a" volumeName="kubernetes.io/secret/fa8f1797-0219-49fe-82b5-7416cc481c3a-signing-key" seLinuxMountContext="" Mar 18 09:04:07.220495 master-0 kubenswrapper[28766]: I0318 09:04:07.218377 28766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8a6ab2be-d018-4fd5-bfbb-6b88aec28663" volumeName="kubernetes.io/secret/8a6ab2be-d018-4fd5-bfbb-6b88aec28663-serving-cert" seLinuxMountContext="" Mar 18 09:04:07.220495 master-0 kubenswrapper[28766]: I0318 09:04:07.218391 28766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8d89af2f-47f5-4ee5-a790-e162c2dee3ce" volumeName="kubernetes.io/secret/8d89af2f-47f5-4ee5-a790-e162c2dee3ce-serving-cert" seLinuxMountContext="" Mar 18 09:04:07.220495 master-0 kubenswrapper[28766]: I0318 09:04:07.218408 28766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f9fa104a-4979-4023-8d7e-a965f11bc7db" volumeName="kubernetes.io/configmap/f9fa104a-4979-4023-8d7e-a965f11bc7db-whereabouts-flatfile-configmap" seLinuxMountContext="" Mar 18 09:04:07.220495 master-0 kubenswrapper[28766]: I0318 09:04:07.218425 28766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc289a83-9a2e-404b-b148-605639362703" volumeName="kubernetes.io/projected/fc289a83-9a2e-404b-b148-605639362703-kube-api-access-l7lrl" seLinuxMountContext="" Mar 18 09:04:07.220495 master-0 kubenswrapper[28766]: 
I0318 09:04:07.218441 28766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ffc5379c-651f-490c-90f4-1285b9093596" volumeName="kubernetes.io/secret/ffc5379c-651f-490c-90f4-1285b9093596-cert" seLinuxMountContext="" Mar 18 09:04:07.220495 master-0 kubenswrapper[28766]: I0318 09:04:07.218466 28766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31a92270-efed-44fe-871e-90333235e85f" volumeName="kubernetes.io/empty-dir/31a92270-efed-44fe-871e-90333235e85f-snapshots" seLinuxMountContext="" Mar 18 09:04:07.220495 master-0 kubenswrapper[28766]: I0318 09:04:07.218495 28766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a7dab805-612b-404c-ab97-8cee927169db" volumeName="kubernetes.io/configmap/a7dab805-612b-404c-ab97-8cee927169db-mcd-auth-proxy-config" seLinuxMountContext="" Mar 18 09:04:07.220495 master-0 kubenswrapper[28766]: I0318 09:04:07.218515 28766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="495e0cff-fca8-4dad-9247-2fc0e7ce86fc" volumeName="kubernetes.io/configmap/495e0cff-fca8-4dad-9247-2fc0e7ce86fc-config" seLinuxMountContext="" Mar 18 09:04:07.220495 master-0 kubenswrapper[28766]: I0318 09:04:07.218538 28766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="97730ec2-e6f1-4f8c-b85c-3c10623d06ce" volumeName="kubernetes.io/secret/97730ec2-e6f1-4f8c-b85c-3c10623d06ce-cert" seLinuxMountContext="" Mar 18 09:04:07.220495 master-0 kubenswrapper[28766]: I0318 09:04:07.218561 28766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a268d595-18c2-43a2-8ed5-eb64c76c490f" volumeName="kubernetes.io/empty-dir/a268d595-18c2-43a2-8ed5-eb64c76c490f-catalog-content" seLinuxMountContext="" Mar 18 09:04:07.220495 master-0 kubenswrapper[28766]: I0318 09:04:07.218579 28766 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" volumeName="kubernetes.io/projected/ad4cf9b2-4e66-4921-a30c-7b659bff06ab-kube-api-access-zkfql" seLinuxMountContext="" Mar 18 09:04:07.220495 master-0 kubenswrapper[28766]: I0318 09:04:07.218597 28766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" volumeName="kubernetes.io/secret/ad4cf9b2-4e66-4921-a30c-7b659bff06ab-default-certificate" seLinuxMountContext="" Mar 18 09:04:07.220495 master-0 kubenswrapper[28766]: I0318 09:04:07.218613 28766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c110b293-2c6b-496b-b015-23aada98cb4b" volumeName="kubernetes.io/configmap/c110b293-2c6b-496b-b015-23aada98cb4b-config" seLinuxMountContext="" Mar 18 09:04:07.220495 master-0 kubenswrapper[28766]: I0318 09:04:07.218629 28766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="07a4fd92-0fd1-4688-b2db-de615d75971e" volumeName="kubernetes.io/secret/07a4fd92-0fd1-4688-b2db-de615d75971e-metrics-tls" seLinuxMountContext="" Mar 18 09:04:07.220495 master-0 kubenswrapper[28766]: I0318 09:04:07.218649 28766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2207df9e-f21e-4c30-98d5-248ae99c245e" volumeName="kubernetes.io/configmap/2207df9e-f21e-4c30-98d5-248ae99c245e-env-overrides" seLinuxMountContext="" Mar 18 09:04:07.220495 master-0 kubenswrapper[28766]: I0318 09:04:07.218664 28766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5320a1da-262a-4b1b-93b4-1df9d4c26eec" volumeName="kubernetes.io/secret/5320a1da-262a-4b1b-93b4-1df9d4c26eec-secret-metrics-server-tls" seLinuxMountContext="" Mar 18 09:04:07.220495 master-0 kubenswrapper[28766]: I0318 09:04:07.218679 28766 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f9fa104a-4979-4023-8d7e-a965f11bc7db" volumeName="kubernetes.io/configmap/f9fa104a-4979-4023-8d7e-a965f11bc7db-cni-sysctl-allowlist" seLinuxMountContext="" Mar 18 09:04:07.220495 master-0 kubenswrapper[28766]: I0318 09:04:07.218696 28766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="260c8aa5-a288-4ee8-b671-f97e90a2f39c" volumeName="kubernetes.io/projected/260c8aa5-a288-4ee8-b671-f97e90a2f39c-kube-api-access" seLinuxMountContext="" Mar 18 09:04:07.220495 master-0 kubenswrapper[28766]: I0318 09:04:07.218744 28766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ec11012b-536a-422f-afc4-d2d0fd4b67fb" volumeName="kubernetes.io/projected/ec11012b-536a-422f-afc4-d2d0fd4b67fb-kube-api-access-svdhs" seLinuxMountContext="" Mar 18 09:04:07.220495 master-0 kubenswrapper[28766]: I0318 09:04:07.218763 28766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd8ee9ae-4765-46bc-84d7-b5857fc3fb4a" volumeName="kubernetes.io/secret/bd8ee9ae-4765-46bc-84d7-b5857fc3fb4a-node-tuning-operator-tls" seLinuxMountContext="" Mar 18 09:04:07.220495 master-0 kubenswrapper[28766]: I0318 09:04:07.218780 28766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e64ea71a-1e89-409a-9607-4d3cea093643" volumeName="kubernetes.io/configmap/e64ea71a-1e89-409a-9607-4d3cea093643-cco-trusted-ca" seLinuxMountContext="" Mar 18 09:04:07.220495 master-0 kubenswrapper[28766]: I0318 09:04:07.218796 28766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="16d633c5-e0aa-4fb6-83e0-a2e976334406" volumeName="kubernetes.io/configmap/16d633c5-e0aa-4fb6-83e0-a2e976334406-env-overrides" seLinuxMountContext="" Mar 18 09:04:07.220495 master-0 kubenswrapper[28766]: I0318 
09:04:07.218812 28766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5982111d-f4c6-4335-9b40-3142758fc2bc" volumeName="kubernetes.io/secret/5982111d-f4c6-4335-9b40-3142758fc2bc-serving-cert" seLinuxMountContext="" Mar 18 09:04:07.220495 master-0 kubenswrapper[28766]: I0318 09:04:07.218829 28766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b35ab145-16a7-4ef1-86e8-0afb6ff469fd" volumeName="kubernetes.io/secret/b35ab145-16a7-4ef1-86e8-0afb6ff469fd-metrics-tls" seLinuxMountContext="" Mar 18 09:04:07.220495 master-0 kubenswrapper[28766]: I0318 09:04:07.218901 28766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d858bfd6-e69b-4c93-a6d7-95cc0fc3ca29" volumeName="kubernetes.io/projected/d858bfd6-e69b-4c93-a6d7-95cc0fc3ca29-kube-api-access-x6zq8" seLinuxMountContext="" Mar 18 09:04:07.220495 master-0 kubenswrapper[28766]: I0318 09:04:07.218917 28766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e5ae1886-f90c-49f4-bf08-055b55dd785a" volumeName="kubernetes.io/secret/e5ae1886-f90c-49f4-bf08-055b55dd785a-secret-telemeter-client-kube-rbac-proxy-config" seLinuxMountContext="" Mar 18 09:04:07.220495 master-0 kubenswrapper[28766]: I0318 09:04:07.218940 28766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e64ea71a-1e89-409a-9607-4d3cea093643" volumeName="kubernetes.io/projected/e64ea71a-1e89-409a-9607-4d3cea093643-kube-api-access-b689k" seLinuxMountContext="" Mar 18 09:04:07.220495 master-0 kubenswrapper[28766]: I0318 09:04:07.218957 28766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="573d3a02-e395-4816-963a-cd614ef53f75" volumeName="kubernetes.io/secret/573d3a02-e395-4816-963a-cd614ef53f75-serving-cert" seLinuxMountContext="" Mar 18 09:04:07.220495 master-0 
kubenswrapper[28766]: I0318 09:04:07.218972 28766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8d89af2f-47f5-4ee5-a790-e162c2dee3ce" volumeName="kubernetes.io/projected/8d89af2f-47f5-4ee5-a790-e162c2dee3ce-kube-api-access" seLinuxMountContext="" Mar 18 09:04:07.220495 master-0 kubenswrapper[28766]: I0318 09:04:07.218990 28766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="91a6fa86-8c58-43bc-a2d4-2b20901269f7" volumeName="kubernetes.io/configmap/91a6fa86-8c58-43bc-a2d4-2b20901269f7-kube-state-metrics-custom-resource-state-configmap" seLinuxMountContext="" Mar 18 09:04:07.220495 master-0 kubenswrapper[28766]: I0318 09:04:07.219005 28766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7b72267-fc08-41ed-a92b-9fca7372aba6" volumeName="kubernetes.io/secret/e7b72267-fc08-41ed-a92b-9fca7372aba6-cluster-monitoring-operator-tls" seLinuxMountContext="" Mar 18 09:04:07.220495 master-0 kubenswrapper[28766]: I0318 09:04:07.219021 28766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="573d3a02-e395-4816-963a-cd614ef53f75" volumeName="kubernetes.io/projected/573d3a02-e395-4816-963a-cd614ef53f75-kube-api-access-n959l" seLinuxMountContext="" Mar 18 09:04:07.220495 master-0 kubenswrapper[28766]: I0318 09:04:07.219038 28766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7962fb40-1170-4c00-b1bf-92966aeae807" volumeName="kubernetes.io/projected/7962fb40-1170-4c00-b1bf-92966aeae807-kube-api-access-47p9x" seLinuxMountContext="" Mar 18 09:04:07.220495 master-0 kubenswrapper[28766]: I0318 09:04:07.219055 28766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="866c259c-7661-4a80-873b-6fd625218665" 
volumeName="kubernetes.io/configmap/866c259c-7661-4a80-873b-6fd625218665-iptables-alerter-script" seLinuxMountContext="" Mar 18 09:04:07.220495 master-0 kubenswrapper[28766]: I0318 09:04:07.219070 28766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="939efa41-8f40-4f91-bee4-0425aead9760" volumeName="kubernetes.io/secret/939efa41-8f40-4f91-bee4-0425aead9760-etcd-client" seLinuxMountContext="" Mar 18 09:04:07.220495 master-0 kubenswrapper[28766]: I0318 09:04:07.219087 28766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e025d334-20e7-491f-8027-194251398747" volumeName="kubernetes.io/secret/e025d334-20e7-491f-8027-194251398747-metrics-tls" seLinuxMountContext="" Mar 18 09:04:07.220495 master-0 kubenswrapper[28766]: I0318 09:04:07.219107 28766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5320a1da-262a-4b1b-93b4-1df9d4c26eec" volumeName="kubernetes.io/secret/5320a1da-262a-4b1b-93b4-1df9d4c26eec-client-ca-bundle" seLinuxMountContext="" Mar 18 09:04:07.220495 master-0 kubenswrapper[28766]: I0318 09:04:07.219122 28766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="772bc250-2e57-4ce0-883c-d44281fcb0be" volumeName="kubernetes.io/configmap/772bc250-2e57-4ce0-883c-d44281fcb0be-config" seLinuxMountContext="" Mar 18 09:04:07.220495 master-0 kubenswrapper[28766]: I0318 09:04:07.219138 28766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f826efe0-60a1-4465-b8d0-d4069ed507a1" volumeName="kubernetes.io/empty-dir/f826efe0-60a1-4465-b8d0-d4069ed507a1-tmp" seLinuxMountContext="" Mar 18 09:04:07.220495 master-0 kubenswrapper[28766]: I0318 09:04:07.219154 28766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fcf89a76-7a94-46d3-853e-68e986563764" 
volumeName="kubernetes.io/projected/fcf89a76-7a94-46d3-853e-68e986563764-kube-api-access-s8prf" seLinuxMountContext="" Mar 18 09:04:07.220495 master-0 kubenswrapper[28766]: I0318 09:04:07.219172 28766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b0280499-8277-46f0-bd8c-058a47a99e19" volumeName="kubernetes.io/projected/b0280499-8277-46f0-bd8c-058a47a99e19-kube-api-access-dxvk7" seLinuxMountContext="" Mar 18 09:04:07.220495 master-0 kubenswrapper[28766]: I0318 09:04:07.219188 28766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b5f9f50b-e7b4-4b81-864b-349303f21447" volumeName="kubernetes.io/configmap/b5f9f50b-e7b4-4b81-864b-349303f21447-image-import-ca" seLinuxMountContext="" Mar 18 09:04:07.220495 master-0 kubenswrapper[28766]: I0318 09:04:07.219204 28766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2207df9e-f21e-4c30-98d5-248ae99c245e" volumeName="kubernetes.io/configmap/2207df9e-f21e-4c30-98d5-248ae99c245e-ovnkube-script-lib" seLinuxMountContext="" Mar 18 09:04:07.220495 master-0 kubenswrapper[28766]: I0318 09:04:07.219221 28766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e5ae1886-f90c-49f4-bf08-055b55dd785a" volumeName="kubernetes.io/projected/e5ae1886-f90c-49f4-bf08-055b55dd785a-kube-api-access-4fql4" seLinuxMountContext="" Mar 18 09:04:07.220495 master-0 kubenswrapper[28766]: I0318 09:04:07.219238 28766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ccf74af5-d4fd-4ed3-9784-42397ea798c5" volumeName="kubernetes.io/configmap/ccf74af5-d4fd-4ed3-9784-42397ea798c5-images" seLinuxMountContext="" Mar 18 09:04:07.220495 master-0 kubenswrapper[28766]: I0318 09:04:07.219269 28766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="ffc5379c-651f-490c-90f4-1285b9093596" volumeName="kubernetes.io/projected/ffc5379c-651f-490c-90f4-1285b9093596-kube-api-access-4vfrs" seLinuxMountContext="" Mar 18 09:04:07.220495 master-0 kubenswrapper[28766]: I0318 09:04:07.219285 28766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1794b726-5c0d-4a72-8ddd-418a2cbd8ded" volumeName="kubernetes.io/projected/1794b726-5c0d-4a72-8ddd-418a2cbd8ded-kube-api-access-gjq4w" seLinuxMountContext="" Mar 18 09:04:07.220495 master-0 kubenswrapper[28766]: I0318 09:04:07.219302 28766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="94e9e4fc-bfaa-4ba5-ad6d-fe76b91932e9" volumeName="kubernetes.io/projected/94e9e4fc-bfaa-4ba5-ad6d-fe76b91932e9-bound-sa-token" seLinuxMountContext="" Mar 18 09:04:07.220495 master-0 kubenswrapper[28766]: I0318 09:04:07.219320 28766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4cc14de2-59e8-49e1-9eeb-f87c5e9d8a75" volumeName="kubernetes.io/projected/4cc14de2-59e8-49e1-9eeb-f87c5e9d8a75-kube-api-access-2m5wf" seLinuxMountContext="" Mar 18 09:04:07.220495 master-0 kubenswrapper[28766]: I0318 09:04:07.219336 28766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="573d3a02-e395-4816-963a-cd614ef53f75" volumeName="kubernetes.io/empty-dir/573d3a02-e395-4816-963a-cd614ef53f75-available-featuregates" seLinuxMountContext="" Mar 18 09:04:07.220495 master-0 kubenswrapper[28766]: I0318 09:04:07.219351 28766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="06cbd48a-1f1d-4734-8d57-e1b6824879b6" volumeName="kubernetes.io/configmap/06cbd48a-1f1d-4734-8d57-e1b6824879b6-metrics-client-ca" seLinuxMountContext="" Mar 18 09:04:07.220495 master-0 kubenswrapper[28766]: I0318 09:04:07.219366 28766 reconstruct.go:130] "Volume is marked as uncertain and added into 
the actual state" pod="" podName="33a5c021-23c3-4a97-b5f3-77fd6dcba1ab" volumeName="kubernetes.io/projected/33a5c021-23c3-4a97-b5f3-77fd6dcba1ab-ca-certs" seLinuxMountContext="" Mar 18 09:04:07.220495 master-0 kubenswrapper[28766]: I0318 09:04:07.219385 28766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="33a5c021-23c3-4a97-b5f3-77fd6dcba1ab" volumeName="kubernetes.io/projected/33a5c021-23c3-4a97-b5f3-77fd6dcba1ab-kube-api-access-fbsgx" seLinuxMountContext="" Mar 18 09:04:07.220495 master-0 kubenswrapper[28766]: I0318 09:04:07.219400 28766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b5f9f50b-e7b4-4b81-864b-349303f21447" volumeName="kubernetes.io/configmap/b5f9f50b-e7b4-4b81-864b-349303f21447-config" seLinuxMountContext="" Mar 18 09:04:07.220495 master-0 kubenswrapper[28766]: I0318 09:04:07.219417 28766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ccf74af5-d4fd-4ed3-9784-42397ea798c5" volumeName="kubernetes.io/secret/ccf74af5-d4fd-4ed3-9784-42397ea798c5-cloud-controller-manager-operator-tls" seLinuxMountContext="" Mar 18 09:04:07.220495 master-0 kubenswrapper[28766]: I0318 09:04:07.219432 28766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="18921497-d8ed-42d8-bf3c-a027566ebe85" volumeName="kubernetes.io/projected/18921497-d8ed-42d8-bf3c-a027566ebe85-kube-api-access-vtz82" seLinuxMountContext="" Mar 18 09:04:07.220495 master-0 kubenswrapper[28766]: I0318 09:04:07.219452 28766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31a92270-efed-44fe-871e-90333235e85f" volumeName="kubernetes.io/secret/31a92270-efed-44fe-871e-90333235e85f-serving-cert" seLinuxMountContext="" Mar 18 09:04:07.220495 master-0 kubenswrapper[28766]: I0318 09:04:07.219469 28766 reconstruct.go:130] "Volume is marked as uncertain and 
added into the actual state" pod="" podName="5320a1da-262a-4b1b-93b4-1df9d4c26eec" volumeName="kubernetes.io/configmap/5320a1da-262a-4b1b-93b4-1df9d4c26eec-metrics-server-audit-profiles" seLinuxMountContext="" Mar 18 09:04:07.220495 master-0 kubenswrapper[28766]: I0318 09:04:07.219485 28766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4146a62d-e37b-4295-90ca-b23f5e3d1112" volumeName="kubernetes.io/empty-dir/4146a62d-e37b-4295-90ca-b23f5e3d1112-node-exporter-textfile" seLinuxMountContext="" Mar 18 09:04:07.220495 master-0 kubenswrapper[28766]: I0318 09:04:07.219502 28766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43fbd379-dd1e-4287-bd76-fd3ec51cde43" volumeName="kubernetes.io/projected/43fbd379-dd1e-4287-bd76-fd3ec51cde43-ca-certs" seLinuxMountContext="" Mar 18 09:04:07.220495 master-0 kubenswrapper[28766]: I0318 09:04:07.219519 28766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="52e32e2d-33ab-4351-ae8a-80acd6077d70" volumeName="kubernetes.io/empty-dir/52e32e2d-33ab-4351-ae8a-80acd6077d70-utilities" seLinuxMountContext="" Mar 18 09:04:07.220495 master-0 kubenswrapper[28766]: I0318 09:04:07.219535 28766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b9768e50-c883-47b0-b319-851fa53ac19a" volumeName="kubernetes.io/secret/b9768e50-c883-47b0-b319-851fa53ac19a-machine-api-operator-tls" seLinuxMountContext="" Mar 18 09:04:07.220495 master-0 kubenswrapper[28766]: I0318 09:04:07.219550 28766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c110b293-2c6b-496b-b015-23aada98cb4b" volumeName="kubernetes.io/configmap/c110b293-2c6b-496b-b015-23aada98cb4b-trusted-ca-bundle" seLinuxMountContext="" Mar 18 09:04:07.220495 master-0 kubenswrapper[28766]: I0318 09:04:07.219565 28766 reconstruct.go:130] "Volume is marked as 
uncertain and added into the actual state" pod="" podName="06cbd48a-1f1d-4734-8d57-e1b6824879b6" volumeName="kubernetes.io/projected/06cbd48a-1f1d-4734-8d57-e1b6824879b6-kube-api-access-ltlf6" seLinuxMountContext="" Mar 18 09:04:07.220495 master-0 kubenswrapper[28766]: I0318 09:04:07.219580 28766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2207df9e-f21e-4c30-98d5-248ae99c245e" volumeName="kubernetes.io/configmap/2207df9e-f21e-4c30-98d5-248ae99c245e-ovnkube-config" seLinuxMountContext="" Mar 18 09:04:07.220495 master-0 kubenswrapper[28766]: I0318 09:04:07.219599 28766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="92542f7c-182b-45a8-bbf3-00e99ba7acee" volumeName="kubernetes.io/empty-dir/92542f7c-182b-45a8-bbf3-00e99ba7acee-catalog-content" seLinuxMountContext="" Mar 18 09:04:07.220495 master-0 kubenswrapper[28766]: I0318 09:04:07.219616 28766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" volumeName="kubernetes.io/configmap/ad4cf9b2-4e66-4921-a30c-7b659bff06ab-service-ca-bundle" seLinuxMountContext="" Mar 18 09:04:07.220495 master-0 kubenswrapper[28766]: I0318 09:04:07.219631 28766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b9768e50-c883-47b0-b319-851fa53ac19a" volumeName="kubernetes.io/configmap/b9768e50-c883-47b0-b319-851fa53ac19a-images" seLinuxMountContext="" Mar 18 09:04:07.220495 master-0 kubenswrapper[28766]: I0318 09:04:07.219651 28766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="04e23989-853e-4b49-ba0f-1961d64ae3a3" volumeName="kubernetes.io/secret/04e23989-853e-4b49-ba0f-1961d64ae3a3-serving-cert" seLinuxMountContext="" Mar 18 09:04:07.220495 master-0 kubenswrapper[28766]: I0318 09:04:07.219688 28766 reconstruct.go:130] "Volume is marked as uncertain 
and added into the actual state" pod="" podName="43fbd379-dd1e-4287-bd76-fd3ec51cde43" volumeName="kubernetes.io/secret/43fbd379-dd1e-4287-bd76-fd3ec51cde43-catalogserver-certs" seLinuxMountContext="" Mar 18 09:04:07.220495 master-0 kubenswrapper[28766]: I0318 09:04:07.219725 28766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="29ba6765-61c9-4f78-8f44-570418000c5c" volumeName="kubernetes.io/projected/29ba6765-61c9-4f78-8f44-570418000c5c-kube-api-access-xchll" seLinuxMountContext="" Mar 18 09:04:07.220495 master-0 kubenswrapper[28766]: I0318 09:04:07.219742 28766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4" volumeName="kubernetes.io/configmap/fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4-multus-daemon-config" seLinuxMountContext="" Mar 18 09:04:07.220495 master-0 kubenswrapper[28766]: I0318 09:04:07.219757 28766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" volumeName="kubernetes.io/secret/ad4cf9b2-4e66-4921-a30c-7b659bff06ab-stats-auth" seLinuxMountContext="" Mar 18 09:04:07.220495 master-0 kubenswrapper[28766]: I0318 09:04:07.219773 28766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b5f9f50b-e7b4-4b81-864b-349303f21447" volumeName="kubernetes.io/projected/b5f9f50b-e7b4-4b81-864b-349303f21447-kube-api-access-bpj79" seLinuxMountContext="" Mar 18 09:04:07.220495 master-0 kubenswrapper[28766]: I0318 09:04:07.219789 28766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b9768e50-c883-47b0-b319-851fa53ac19a" volumeName="kubernetes.io/configmap/b9768e50-c883-47b0-b319-851fa53ac19a-config" seLinuxMountContext="" Mar 18 09:04:07.220495 master-0 kubenswrapper[28766]: I0318 09:04:07.219815 28766 reconstruct.go:130] "Volume is marked as uncertain 
and added into the actual state" pod="" podName="4cc14de2-59e8-49e1-9eeb-f87c5e9d8a75" volumeName="kubernetes.io/configmap/4cc14de2-59e8-49e1-9eeb-f87c5e9d8a75-client-ca" seLinuxMountContext="" Mar 18 09:04:07.220495 master-0 kubenswrapper[28766]: I0318 09:04:07.219889 28766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="97730ec2-e6f1-4f8c-b85c-3c10623d06ce" volumeName="kubernetes.io/configmap/97730ec2-e6f1-4f8c-b85c-3c10623d06ce-images" seLinuxMountContext="" Mar 18 09:04:07.220495 master-0 kubenswrapper[28766]: I0318 09:04:07.219904 28766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8d89af2f-47f5-4ee5-a790-e162c2dee3ce" volumeName="kubernetes.io/configmap/8d89af2f-47f5-4ee5-a790-e162c2dee3ce-service-ca" seLinuxMountContext="" Mar 18 09:04:07.220495 master-0 kubenswrapper[28766]: I0318 09:04:07.219926 28766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c110b293-2c6b-496b-b015-23aada98cb4b" volumeName="kubernetes.io/projected/c110b293-2c6b-496b-b015-23aada98cb4b-kube-api-access-lw27k" seLinuxMountContext="" Mar 18 09:04:07.220495 master-0 kubenswrapper[28766]: I0318 09:04:07.219943 28766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ccf74af5-d4fd-4ed3-9784-42397ea798c5" volumeName="kubernetes.io/configmap/ccf74af5-d4fd-4ed3-9784-42397ea798c5-auth-proxy-config" seLinuxMountContext="" Mar 18 09:04:07.220495 master-0 kubenswrapper[28766]: I0318 09:04:07.219965 28766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1794b726-5c0d-4a72-8ddd-418a2cbd8ded" volumeName="kubernetes.io/secret/1794b726-5c0d-4a72-8ddd-418a2cbd8ded-apiservice-cert" seLinuxMountContext="" Mar 18 09:04:07.220495 master-0 kubenswrapper[28766]: I0318 09:04:07.219981 28766 reconstruct.go:130] "Volume is marked as uncertain and added into 
the actual state" pod="" podName="4146a62d-e37b-4295-90ca-b23f5e3d1112" volumeName="kubernetes.io/configmap/4146a62d-e37b-4295-90ca-b23f5e3d1112-metrics-client-ca" seLinuxMountContext="" Mar 18 09:04:07.220495 master-0 kubenswrapper[28766]: I0318 09:04:07.219997 28766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4cc14de2-59e8-49e1-9eeb-f87c5e9d8a75" volumeName="kubernetes.io/secret/4cc14de2-59e8-49e1-9eeb-f87c5e9d8a75-serving-cert" seLinuxMountContext="" Mar 18 09:04:07.220495 master-0 kubenswrapper[28766]: I0318 09:04:07.220020 28766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c110b293-2c6b-496b-b015-23aada98cb4b" volumeName="kubernetes.io/secret/c110b293-2c6b-496b-b015-23aada98cb4b-serving-cert" seLinuxMountContext="" Mar 18 09:04:07.220495 master-0 kubenswrapper[28766]: I0318 09:04:07.220059 28766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8ebaeb8d-8fbd-4638-9516-fc4e90ba2fa8" volumeName="kubernetes.io/projected/8ebaeb8d-8fbd-4638-9516-fc4e90ba2fa8-kube-api-access-d2bwv" seLinuxMountContext="" Mar 18 09:04:07.220495 master-0 kubenswrapper[28766]: I0318 09:04:07.220082 28766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="94e9e4fc-bfaa-4ba5-ad6d-fe76b91932e9" volumeName="kubernetes.io/projected/94e9e4fc-bfaa-4ba5-ad6d-fe76b91932e9-kube-api-access-tk9jq" seLinuxMountContext="" Mar 18 09:04:07.220495 master-0 kubenswrapper[28766]: I0318 09:04:07.220111 28766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f65344cd-8571-4a78-927f-eec46ec1af51" volumeName="kubernetes.io/projected/f65344cd-8571-4a78-927f-eec46ec1af51-kube-api-access-djq7n" seLinuxMountContext="" Mar 18 09:04:07.220495 master-0 kubenswrapper[28766]: I0318 09:04:07.220140 28766 reconstruct.go:130] "Volume is marked as uncertain and 
added into the actual state" pod="" podName="2700f537-8f31-4380-a527-3e697a8122cc" volumeName="kubernetes.io/secret/2700f537-8f31-4380-a527-3e697a8122cc-serving-cert" seLinuxMountContext="" Mar 18 09:04:07.220495 master-0 kubenswrapper[28766]: I0318 09:04:07.220168 28766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5982111d-f4c6-4335-9b40-3142758fc2bc" volumeName="kubernetes.io/configmap/5982111d-f4c6-4335-9b40-3142758fc2bc-config" seLinuxMountContext="" Mar 18 09:04:07.220495 master-0 kubenswrapper[28766]: I0318 09:04:07.220184 28766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="772bc250-2e57-4ce0-883c-d44281fcb0be" volumeName="kubernetes.io/secret/772bc250-2e57-4ce0-883c-d44281fcb0be-serving-cert" seLinuxMountContext="" Mar 18 09:04:07.220495 master-0 kubenswrapper[28766]: I0318 09:04:07.220199 28766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e5ae1886-f90c-49f4-bf08-055b55dd785a" volumeName="kubernetes.io/secret/e5ae1886-f90c-49f4-bf08-055b55dd785a-federate-client-tls" seLinuxMountContext="" Mar 18 09:04:07.220495 master-0 kubenswrapper[28766]: I0318 09:04:07.220215 28766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="91a6fa86-8c58-43bc-a2d4-2b20901269f7" volumeName="kubernetes.io/empty-dir/91a6fa86-8c58-43bc-a2d4-2b20901269f7-volume-directive-shadow" seLinuxMountContext="" Mar 18 09:04:07.220495 master-0 kubenswrapper[28766]: I0318 09:04:07.220231 28766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="94e9e4fc-bfaa-4ba5-ad6d-fe76b91932e9" volumeName="kubernetes.io/secret/94e9e4fc-bfaa-4ba5-ad6d-fe76b91932e9-metrics-tls" seLinuxMountContext="" Mar 18 09:04:07.220495 master-0 kubenswrapper[28766]: I0318 09:04:07.220357 28766 reconstruct.go:130] "Volume is marked as uncertain and added into the 
actual state" pod="" podName="e2ade7e6-cecd-4e98-8f85-ea8219303d75" volumeName="kubernetes.io/projected/e2ade7e6-cecd-4e98-8f85-ea8219303d75-kube-api-access-vfjgn" seLinuxMountContext="" Mar 18 09:04:07.220495 master-0 kubenswrapper[28766]: I0318 09:04:07.220393 28766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4" volumeName="kubernetes.io/configmap/fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4-cni-binary-copy" seLinuxMountContext="" Mar 18 09:04:07.220495 master-0 kubenswrapper[28766]: I0318 09:04:07.220411 28766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43fbd379-dd1e-4287-bd76-fd3ec51cde43" volumeName="kubernetes.io/empty-dir/43fbd379-dd1e-4287-bd76-fd3ec51cde43-cache" seLinuxMountContext="" Mar 18 09:04:07.220495 master-0 kubenswrapper[28766]: I0318 09:04:07.220429 28766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="52e32e2d-33ab-4351-ae8a-80acd6077d70" volumeName="kubernetes.io/empty-dir/52e32e2d-33ab-4351-ae8a-80acd6077d70-catalog-content" seLinuxMountContext="" Mar 18 09:04:07.220495 master-0 kubenswrapper[28766]: I0318 09:04:07.220445 28766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4cc14de2-59e8-49e1-9eeb-f87c5e9d8a75" volumeName="kubernetes.io/configmap/4cc14de2-59e8-49e1-9eeb-f87c5e9d8a75-proxy-ca-bundles" seLinuxMountContext="" Mar 18 09:04:07.220495 master-0 kubenswrapper[28766]: I0318 09:04:07.220461 28766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e5ae1886-f90c-49f4-bf08-055b55dd785a" volumeName="kubernetes.io/secret/e5ae1886-f90c-49f4-bf08-055b55dd785a-secret-telemeter-client" seLinuxMountContext="" Mar 18 09:04:07.220495 master-0 kubenswrapper[28766]: I0318 09:04:07.220478 28766 reconstruct.go:130] "Volume is marked as uncertain and added into the 
actual state" pod="" podName="fcf89a76-7a94-46d3-853e-68e986563764" volumeName="kubernetes.io/configmap/fcf89a76-7a94-46d3-853e-68e986563764-config" seLinuxMountContext="" Mar 18 09:04:07.220495 master-0 kubenswrapper[28766]: I0318 09:04:07.220497 28766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="06cbd48a-1f1d-4734-8d57-e1b6824879b6" volumeName="kubernetes.io/secret/06cbd48a-1f1d-4734-8d57-e1b6824879b6-openshift-state-metrics-kube-rbac-proxy-config" seLinuxMountContext="" Mar 18 09:04:07.220495 master-0 kubenswrapper[28766]: I0318 09:04:07.220515 28766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2207df9e-f21e-4c30-98d5-248ae99c245e" volumeName="kubernetes.io/projected/2207df9e-f21e-4c30-98d5-248ae99c245e-kube-api-access-cj9fr" seLinuxMountContext="" Mar 18 09:04:07.220495 master-0 kubenswrapper[28766]: I0318 09:04:07.220531 28766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1794b726-5c0d-4a72-8ddd-418a2cbd8ded" volumeName="kubernetes.io/secret/1794b726-5c0d-4a72-8ddd-418a2cbd8ded-webhook-cert" seLinuxMountContext="" Mar 18 09:04:07.220495 master-0 kubenswrapper[28766]: I0318 09:04:07.220553 28766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="772bc250-2e57-4ce0-883c-d44281fcb0be" volumeName="kubernetes.io/projected/772bc250-2e57-4ce0-883c-d44281fcb0be-kube-api-access-dfjmx" seLinuxMountContext="" Mar 18 09:04:07.220495 master-0 kubenswrapper[28766]: I0318 09:04:07.220571 28766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="91a6fa86-8c58-43bc-a2d4-2b20901269f7" volumeName="kubernetes.io/configmap/91a6fa86-8c58-43bc-a2d4-2b20901269f7-metrics-client-ca" seLinuxMountContext="" Mar 18 09:04:07.220495 master-0 kubenswrapper[28766]: I0318 09:04:07.220589 28766 reconstruct.go:130] "Volume is marked as 
uncertain and added into the actual state" pod="" podName="91a6fa86-8c58-43bc-a2d4-2b20901269f7" volumeName="kubernetes.io/secret/91a6fa86-8c58-43bc-a2d4-2b20901269f7-kube-state-metrics-tls" seLinuxMountContext="" Mar 18 09:04:07.226540 master-0 kubenswrapper[28766]: I0318 09:04:07.220643 28766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="939efa41-8f40-4f91-bee4-0425aead9760" volumeName="kubernetes.io/secret/939efa41-8f40-4f91-bee4-0425aead9760-serving-cert" seLinuxMountContext="" Mar 18 09:04:07.226540 master-0 kubenswrapper[28766]: I0318 09:04:07.220659 28766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e5ae1886-f90c-49f4-bf08-055b55dd785a" volumeName="kubernetes.io/configmap/e5ae1886-f90c-49f4-bf08-055b55dd785a-metrics-client-ca" seLinuxMountContext="" Mar 18 09:04:07.226540 master-0 kubenswrapper[28766]: I0318 09:04:07.220675 28766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="edc7f629-4288-443b-aa8e-78bc6a09c848" volumeName="kubernetes.io/secret/edc7f629-4288-443b-aa8e-78bc6a09c848-ovn-control-plane-metrics-cert" seLinuxMountContext="" Mar 18 09:04:07.226540 master-0 kubenswrapper[28766]: I0318 09:04:07.220699 28766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="336e741d-ac9a-4b94-9fbb-c9010e37c2d0" volumeName="kubernetes.io/configmap/336e741d-ac9a-4b94-9fbb-c9010e37c2d0-mcc-auth-proxy-config" seLinuxMountContext="" Mar 18 09:04:07.226540 master-0 kubenswrapper[28766]: I0318 09:04:07.220725 28766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3e96b35f-c57a-4e01-82f7-894ea16ac5b8" volumeName="kubernetes.io/secret/3e96b35f-c57a-4e01-82f7-894ea16ac5b8-node-bootstrap-token" seLinuxMountContext="" Mar 18 09:04:07.226540 master-0 kubenswrapper[28766]: I0318 09:04:07.220743 28766 reconstruct.go:130] 
"Volume is marked as uncertain and added into the actual state" pod="" podName="495e0cff-fca8-4dad-9247-2fc0e7ce86fc" volumeName="kubernetes.io/projected/495e0cff-fca8-4dad-9247-2fc0e7ce86fc-kube-api-access-5qrqx" seLinuxMountContext="" Mar 18 09:04:07.226540 master-0 kubenswrapper[28766]: I0318 09:04:07.220759 28766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b35ab145-16a7-4ef1-86e8-0afb6ff469fd" volumeName="kubernetes.io/configmap/b35ab145-16a7-4ef1-86e8-0afb6ff469fd-config-volume" seLinuxMountContext="" Mar 18 09:04:07.226540 master-0 kubenswrapper[28766]: I0318 09:04:07.220788 28766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d71aa1b9-6eb5-4331-b959-8930e10817b4" volumeName="kubernetes.io/secret/d71aa1b9-6eb5-4331-b959-8930e10817b4-prometheus-operator-tls" seLinuxMountContext="" Mar 18 09:04:07.226540 master-0 kubenswrapper[28766]: I0318 09:04:07.220805 28766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc5a9875-d97e-4371-a15d-a1f43b85abce" volumeName="kubernetes.io/secret/fc5a9875-d97e-4371-a15d-a1f43b85abce-cluster-storage-operator-serving-cert" seLinuxMountContext="" Mar 18 09:04:07.226540 master-0 kubenswrapper[28766]: I0318 09:04:07.220821 28766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="04e23989-853e-4b49-ba0f-1961d64ae3a3" volumeName="kubernetes.io/configmap/04e23989-853e-4b49-ba0f-1961d64ae3a3-client-ca" seLinuxMountContext="" Mar 18 09:04:07.226540 master-0 kubenswrapper[28766]: I0318 09:04:07.220839 28766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="04e23989-853e-4b49-ba0f-1961d64ae3a3" volumeName="kubernetes.io/configmap/04e23989-853e-4b49-ba0f-1961d64ae3a3-config" seLinuxMountContext="" Mar 18 09:04:07.226540 master-0 kubenswrapper[28766]: I0318 09:04:07.220975 28766 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd8ee9ae-4765-46bc-84d7-b5857fc3fb4a" volumeName="kubernetes.io/secret/bd8ee9ae-4765-46bc-84d7-b5857fc3fb4a-apiservice-cert" seLinuxMountContext="" Mar 18 09:04:07.226540 master-0 kubenswrapper[28766]: I0318 09:04:07.220992 28766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fa8f1797-0219-49fe-82b5-7416cc481c3a" volumeName="kubernetes.io/projected/fa8f1797-0219-49fe-82b5-7416cc481c3a-kube-api-access-njbjp" seLinuxMountContext="" Mar 18 09:04:07.226540 master-0 kubenswrapper[28766]: I0318 09:04:07.221019 28766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2700f537-8f31-4380-a527-3e697a8122cc" volumeName="kubernetes.io/configmap/2700f537-8f31-4380-a527-3e697a8122cc-trusted-ca-bundle" seLinuxMountContext="" Mar 18 09:04:07.226540 master-0 kubenswrapper[28766]: I0318 09:04:07.221034 28766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5320a1da-262a-4b1b-93b4-1df9d4c26eec" volumeName="kubernetes.io/empty-dir/5320a1da-262a-4b1b-93b4-1df9d4c26eec-audit-log" seLinuxMountContext="" Mar 18 09:04:07.226540 master-0 kubenswrapper[28766]: I0318 09:04:07.221048 28766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" volumeName="kubernetes.io/secret/ad4cf9b2-4e66-4921-a30c-7b659bff06ab-metrics-certs" seLinuxMountContext="" Mar 18 09:04:07.226540 master-0 kubenswrapper[28766]: I0318 09:04:07.221077 28766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d6fe8ee6-737e-438a-8d9d-1ec712f6bacf" volumeName="kubernetes.io/secret/d6fe8ee6-737e-438a-8d9d-1ec712f6bacf-control-plane-machine-set-operator-tls" seLinuxMountContext="" Mar 18 09:04:07.226540 master-0 kubenswrapper[28766]: I0318 
09:04:07.221093 28766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="16d633c5-e0aa-4fb6-83e0-a2e976334406" volumeName="kubernetes.io/projected/16d633c5-e0aa-4fb6-83e0-a2e976334406-kube-api-access-x9w7l" seLinuxMountContext="" Mar 18 09:04:07.226540 master-0 kubenswrapper[28766]: I0318 09:04:07.221109 28766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="91a6fa86-8c58-43bc-a2d4-2b20901269f7" volumeName="kubernetes.io/secret/91a6fa86-8c58-43bc-a2d4-2b20901269f7-kube-state-metrics-kube-rbac-proxy-config" seLinuxMountContext="" Mar 18 09:04:07.226540 master-0 kubenswrapper[28766]: I0318 09:04:07.221124 28766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7962fb40-1170-4c00-b1bf-92966aeae807" volumeName="kubernetes.io/secret/7962fb40-1170-4c00-b1bf-92966aeae807-image-registry-operator-tls" seLinuxMountContext="" Mar 18 09:04:07.226540 master-0 kubenswrapper[28766]: I0318 09:04:07.221140 28766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="04e23989-853e-4b49-ba0f-1961d64ae3a3" volumeName="kubernetes.io/projected/04e23989-853e-4b49-ba0f-1961d64ae3a3-kube-api-access-qwsfl" seLinuxMountContext="" Mar 18 09:04:07.226540 master-0 kubenswrapper[28766]: I0318 09:04:07.221168 28766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="40f3b7a4-107c-4f1d-a3ab-b5d2309c373b" volumeName="kubernetes.io/secret/40f3b7a4-107c-4f1d-a3ab-b5d2309c373b-proxy-tls" seLinuxMountContext="" Mar 18 09:04:07.226540 master-0 kubenswrapper[28766]: I0318 09:04:07.221183 28766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="59d50dd5-6793-4f96-a769-31e086ecc7e4" volumeName="kubernetes.io/secret/59d50dd5-6793-4f96-a769-31e086ecc7e4-package-server-manager-serving-cert" seLinuxMountContext="" Mar 18 
09:04:07.226540 master-0 kubenswrapper[28766]: I0318 09:04:07.221204 28766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="97730ec2-e6f1-4f8c-b85c-3c10623d06ce" volumeName="kubernetes.io/secret/97730ec2-e6f1-4f8c-b85c-3c10623d06ce-cluster-baremetal-operator-tls" seLinuxMountContext="" Mar 18 09:04:07.226540 master-0 kubenswrapper[28766]: I0318 09:04:07.221220 28766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc5a9875-d97e-4371-a15d-a1f43b85abce" volumeName="kubernetes.io/projected/fc5a9875-d97e-4371-a15d-a1f43b85abce-kube-api-access-mvlvd" seLinuxMountContext="" Mar 18 09:04:07.226540 master-0 kubenswrapper[28766]: I0318 09:04:07.221236 28766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="495e0cff-fca8-4dad-9247-2fc0e7ce86fc" volumeName="kubernetes.io/configmap/495e0cff-fca8-4dad-9247-2fc0e7ce86fc-auth-proxy-config" seLinuxMountContext="" Mar 18 09:04:07.226540 master-0 kubenswrapper[28766]: I0318 09:04:07.221268 28766 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="52e32e2d-33ab-4351-ae8a-80acd6077d70" volumeName="kubernetes.io/projected/52e32e2d-33ab-4351-ae8a-80acd6077d70-kube-api-access-dm6nf" seLinuxMountContext="" Mar 18 09:04:07.226540 master-0 kubenswrapper[28766]: I0318 09:04:07.221283 28766 reconstruct.go:97] "Volume reconstruction finished" Mar 18 09:04:07.226540 master-0 kubenswrapper[28766]: I0318 09:04:07.221296 28766 reconciler.go:26] "Reconciler: start to sync state" Mar 18 09:04:07.229511 master-0 kubenswrapper[28766]: I0318 09:04:07.229456 28766 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Mar 18 09:04:07.231474 master-0 kubenswrapper[28766]: I0318 09:04:07.231448 28766 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Mar 18 09:04:07.231530 master-0 kubenswrapper[28766]: I0318 09:04:07.231500 28766 status_manager.go:217] "Starting to sync pod status with apiserver" Mar 18 09:04:07.231530 master-0 kubenswrapper[28766]: I0318 09:04:07.231523 28766 kubelet.go:2335] "Starting kubelet main sync loop" Mar 18 09:04:07.231595 master-0 kubenswrapper[28766]: E0318 09:04:07.231574 28766 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 18 09:04:07.232996 master-0 kubenswrapper[28766]: W0318 09:04:07.232459 28766 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.sno.openstack.lab:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 18 09:04:07.232996 master-0 kubenswrapper[28766]: E0318 09:04:07.232535 28766 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.sno.openstack.lab:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Mar 18 09:04:07.242609 master-0 kubenswrapper[28766]: I0318 09:04:07.242542 28766 generic.go:334] "Generic (PLEG): container finished" podID="bfb95119-ed96-428c-8a9b-7e29f48b5d4b" containerID="44961de8599bb63e15f17ececbcbbdf128ff00606cbb65189b93cdcbe9f41ba2" exitCode=0 Mar 18 09:04:07.247246 master-0 kubenswrapper[28766]: I0318 09:04:07.245830 28766 generic.go:334] "Generic (PLEG): container finished" podID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerID="a4436209a1c80a403c36e67bb8b4310cdae3c04ffc3d3675bb5372419c24b948" exitCode=0 Mar 18 09:04:07.251145 master-0 kubenswrapper[28766]: I0318 09:04:07.251091 28766 generic.go:334] "Generic (PLEG): container finished" 
podID="b5f9f50b-e7b4-4b81-864b-349303f21447" containerID="589683df05fefda7629bb4e428ec6a4f619c8b88cea31f43af821234a93ed5bc" exitCode=0 Mar 18 09:04:07.254828 master-0 kubenswrapper[28766]: I0318 09:04:07.254787 28766 generic.go:334] "Generic (PLEG): container finished" podID="34f2ec5e-cb68-415b-a9f2-5b7f10fa9bbe" containerID="75d1410d48296cb4f2446dcf35dcfdb58ad3083bc984cecb00db26ae1fc3d758" exitCode=0 Mar 18 09:04:07.273951 master-0 kubenswrapper[28766]: I0318 09:04:07.273795 28766 generic.go:334] "Generic (PLEG): container finished" podID="b45ea2ef1cf2bc9d1d994d6538ae0a64" containerID="8f346ba585e275f6daeb7ee0b1f9dbc8a6626d795dda146132cd1c080ea2a285" exitCode=0 Mar 18 09:04:07.276247 master-0 kubenswrapper[28766]: I0318 09:04:07.276196 28766 generic.go:334] "Generic (PLEG): container finished" podID="1ecff6b2-dbd4-4366-873b-2170d0b76c0f" containerID="010b44e43896597007413d73633a4236214230adb7cc7835885b7a52a1e627ab" exitCode=0 Mar 18 09:04:07.281613 master-0 kubenswrapper[28766]: I0318 09:04:07.281570 28766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_1249822f86f23526277d165c0d5d3c19/kube-rbac-proxy-crio/2.log" Mar 18 09:04:07.281962 master-0 kubenswrapper[28766]: I0318 09:04:07.281930 28766 generic.go:334] "Generic (PLEG): container finished" podID="1249822f86f23526277d165c0d5d3c19" containerID="65e224202ac926a558f67bd7907be94c9b8d61e87724e521620bd2b30bc9d0dc" exitCode=1 Mar 18 09:04:07.281962 master-0 kubenswrapper[28766]: I0318 09:04:07.281954 28766 generic.go:334] "Generic (PLEG): container finished" podID="1249822f86f23526277d165c0d5d3c19" containerID="60b7a6828ff9115f3e360da4ea3b39ddb71f9d86fc37454c4e2b71253e2b011f" exitCode=0 Mar 18 09:04:07.285048 master-0 kubenswrapper[28766]: E0318 09:04:07.285026 28766 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 18 09:04:07.287099 master-0 kubenswrapper[28766]: I0318 
09:04:07.287045 28766 generic.go:334] "Generic (PLEG): container finished" podID="b0280499-8277-46f0-bd8c-058a47a99e19" containerID="76b00b2da24613bfa7eda95194ecd9d40e69d00311f7e279f85c5936ce0d7e4d" exitCode=0 Mar 18 09:04:07.294802 master-0 kubenswrapper[28766]: I0318 09:04:07.294753 28766 generic.go:334] "Generic (PLEG): container finished" podID="939efa41-8f40-4f91-bee4-0425aead9760" containerID="c7bdc6ef2980045954ec06270159082d9f28baec29275922530ef4e26552cf99" exitCode=0 Mar 18 09:04:07.298306 master-0 kubenswrapper[28766]: I0318 09:04:07.298257 28766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-7dff898856-9xtls_ccf74af5-d4fd-4ed3-9784-42397ea798c5/config-sync-controllers/0.log" Mar 18 09:04:07.299239 master-0 kubenswrapper[28766]: I0318 09:04:07.299209 28766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-7dff898856-9xtls_ccf74af5-d4fd-4ed3-9784-42397ea798c5/cluster-cloud-controller-manager/0.log" Mar 18 09:04:07.299302 master-0 kubenswrapper[28766]: I0318 09:04:07.299243 28766 generic.go:334] "Generic (PLEG): container finished" podID="ccf74af5-d4fd-4ed3-9784-42397ea798c5" containerID="186b22d65f0d4470eb32e6b82579dc544a089964b2ec507b602aabe9b3c9e6c1" exitCode=1 Mar 18 09:04:07.299302 master-0 kubenswrapper[28766]: I0318 09:04:07.299256 28766 generic.go:334] "Generic (PLEG): container finished" podID="ccf74af5-d4fd-4ed3-9784-42397ea798c5" containerID="eaad38e5e9adf0c7d9032d4d158adc24f0ed091bb2d04b70f67f104373652877" exitCode=1 Mar 18 09:04:07.302336 master-0 kubenswrapper[28766]: I0318 09:04:07.302314 28766 generic.go:334] "Generic (PLEG): container finished" podID="260c8aa5-a288-4ee8-b671-f97e90a2f39c" containerID="42ba60928089ecdd2be6dc0bf250cb571a47fd29cfa3690db6c3f8f43ab0c4ba" exitCode=0 Mar 18 09:04:07.309726 master-0 kubenswrapper[28766]: I0318 09:04:07.309478 
28766 generic.go:334] "Generic (PLEG): container finished" podID="f9fa104a-4979-4023-8d7e-a965f11bc7db" containerID="b90404fea2dcee705335febe9902c2cb9057e6f3ac0a9b235a9e5ecb1660d666" exitCode=0 Mar 18 09:04:07.309726 master-0 kubenswrapper[28766]: I0318 09:04:07.309543 28766 generic.go:334] "Generic (PLEG): container finished" podID="f9fa104a-4979-4023-8d7e-a965f11bc7db" containerID="5ff838c2d5ef301a4d391cdf94caa10d8ed9cf1ecae148154167ecb368e38ae1" exitCode=0 Mar 18 09:04:07.309726 master-0 kubenswrapper[28766]: I0318 09:04:07.309559 28766 generic.go:334] "Generic (PLEG): container finished" podID="f9fa104a-4979-4023-8d7e-a965f11bc7db" containerID="2d0a2c2dc41ce3fdaa0eb263dbdcc431c85c8b6b65a032320a020b41e4119800" exitCode=0 Mar 18 09:04:07.309726 master-0 kubenswrapper[28766]: I0318 09:04:07.309576 28766 generic.go:334] "Generic (PLEG): container finished" podID="f9fa104a-4979-4023-8d7e-a965f11bc7db" containerID="4e7c826e1670b530a9fd33f7eb549f98d247eb166d6206beef67f781b2a470af" exitCode=0 Mar 18 09:04:07.309726 master-0 kubenswrapper[28766]: I0318 09:04:07.309593 28766 generic.go:334] "Generic (PLEG): container finished" podID="f9fa104a-4979-4023-8d7e-a965f11bc7db" containerID="087da5f6d44511af7f32a791cdbe22a09cb7c15552db037f0bacb605d9163341" exitCode=0 Mar 18 09:04:07.309726 master-0 kubenswrapper[28766]: I0318 09:04:07.309607 28766 generic.go:334] "Generic (PLEG): container finished" podID="f9fa104a-4979-4023-8d7e-a965f11bc7db" containerID="adde235643fbff8c27e9f475aac6b49079f9d822aa89abb8fde8b8cfe9cfc68c" exitCode=0 Mar 18 09:04:07.312328 master-0 kubenswrapper[28766]: I0318 09:04:07.312302 28766 generic.go:334] "Generic (PLEG): container finished" podID="a268d595-18c2-43a2-8ed5-eb64c76c490f" containerID="42f23b18ac970e3da9687bbb84eb7ea3c73aad4f1a6ef5df47db5bc94e10804e" exitCode=0 Mar 18 09:04:07.312403 master-0 kubenswrapper[28766]: I0318 09:04:07.312328 28766 generic.go:334] "Generic (PLEG): container finished" podID="a268d595-18c2-43a2-8ed5-eb64c76c490f" 
containerID="4e6504c0fa849fb56cf305c3b2b7aa1db21a051c51fa14d99a8ddcac1a32ab11" exitCode=0 Mar 18 09:04:07.326239 master-0 kubenswrapper[28766]: I0318 09:04:07.326181 28766 generic.go:334] "Generic (PLEG): container finished" podID="2207df9e-f21e-4c30-98d5-248ae99c245e" containerID="4ab7ce18ff8c455a08cc88d97fdc9cc8dc555138a8a11da35cc907f8c6e70d0d" exitCode=0 Mar 18 09:04:07.331020 master-0 kubenswrapper[28766]: I0318 09:04:07.330921 28766 generic.go:334] "Generic (PLEG): container finished" podID="ec11012b-536a-422f-afc4-d2d0fd4b67fb" containerID="b192c774019baaa7e62a2cf9e287d09d05206c3fc1c24b73874462681a8ac04f" exitCode=0 Mar 18 09:04:07.332599 master-0 kubenswrapper[28766]: E0318 09:04:07.331682 28766 kubelet.go:2359] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Mar 18 09:04:07.336436 master-0 kubenswrapper[28766]: I0318 09:04:07.336384 28766 generic.go:334] "Generic (PLEG): container finished" podID="92542f7c-182b-45a8-bbf3-00e99ba7acee" containerID="59ae026604cd04ce353fa378aa4e158633279c635c9ea30620458e2ad2301dcf" exitCode=0 Mar 18 09:04:07.336436 master-0 kubenswrapper[28766]: I0318 09:04:07.336428 28766 generic.go:334] "Generic (PLEG): container finished" podID="92542f7c-182b-45a8-bbf3-00e99ba7acee" containerID="83cd147764ec185f1c61933eb40e43bfd7feace1c1937bc4d75f521b8846c76e" exitCode=0 Mar 18 09:04:07.340965 master-0 kubenswrapper[28766]: I0318 09:04:07.340917 28766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-node-identity_network-node-identity-n5vqx_16d633c5-e0aa-4fb6-83e0-a2e976334406/approver/1.log" Mar 18 09:04:07.341446 master-0 kubenswrapper[28766]: I0318 09:04:07.341379 28766 generic.go:334] "Generic (PLEG): container finished" podID="16d633c5-e0aa-4fb6-83e0-a2e976334406" containerID="fc1e7d5ba53f64b05a03f60a1cf7fc1f9339f4be3d65c717cb0541eb9f2e16d3" exitCode=1 Mar 18 09:04:07.348185 master-0 kubenswrapper[28766]: I0318 09:04:07.348121 28766 generic.go:334] "Generic 
(PLEG): container finished" podID="62a1fcda-ce2f-4d14-ab37-10a21e30fc30" containerID="08088f866063b071982a4841fdee97faaded7e31cf8cc32d7754eb48aa28135c" exitCode=0 Mar 18 09:04:07.350766 master-0 kubenswrapper[28766]: I0318 09:04:07.350708 28766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-machine-approver_machine-approver-5c6485487f-87vpl_495e0cff-fca8-4dad-9247-2fc0e7ce86fc/machine-approver-controller/0.log" Mar 18 09:04:07.351253 master-0 kubenswrapper[28766]: I0318 09:04:07.351196 28766 generic.go:334] "Generic (PLEG): container finished" podID="495e0cff-fca8-4dad-9247-2fc0e7ce86fc" containerID="482a2a455c91ae8f75a1b491f54c3f841099d7f9c064cccb7d26f482c03b17d7" exitCode=255 Mar 18 09:04:07.356160 master-0 kubenswrapper[28766]: I0318 09:04:07.356010 28766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-6f97756bc8-z9n9c_d6fe8ee6-737e-438a-8d9d-1ec712f6bacf/control-plane-machine-set-operator/0.log" Mar 18 09:04:07.356160 master-0 kubenswrapper[28766]: I0318 09:04:07.356083 28766 generic.go:334] "Generic (PLEG): container finished" podID="d6fe8ee6-737e-438a-8d9d-1ec712f6bacf" containerID="0fd3855d3d4e49dbbbd6fbd3a0b7de23ed78bc7af2b1a5b78f4de3c1bee51d0a" exitCode=1 Mar 18 09:04:07.359283 master-0 kubenswrapper[28766]: I0318 09:04:07.359192 28766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-config-operator_openshift-config-operator-95bf4f4d-7kfrh_573d3a02-e395-4816-963a-cd614ef53f75/openshift-config-operator/3.log" Mar 18 09:04:07.359762 master-0 kubenswrapper[28766]: I0318 09:04:07.359715 28766 generic.go:334] "Generic (PLEG): container finished" podID="573d3a02-e395-4816-963a-cd614ef53f75" containerID="8a367d5d8cce98fe512b83ef657e1c2b1d37fead9ef4db0e545e39ebb8df8515" exitCode=255 Mar 18 09:04:07.359762 master-0 kubenswrapper[28766]: I0318 09:04:07.359757 28766 generic.go:334] "Generic (PLEG): container finished" 
podID="573d3a02-e395-4816-963a-cd614ef53f75" containerID="e51fa0342ef2eca22478ce0380d3cd4446fad9cc3cda5d0c285a70b4c9b5167e" exitCode=0 Mar 18 09:04:07.377843 master-0 kubenswrapper[28766]: I0318 09:04:07.377766 28766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-controller_operator-controller-controller-manager-57777556ff-chjqr_33a5c021-23c3-4a97-b5f3-77fd6dcba1ab/manager/1.log" Mar 18 09:04:07.378474 master-0 kubenswrapper[28766]: I0318 09:04:07.378417 28766 generic.go:334] "Generic (PLEG): container finished" podID="33a5c021-23c3-4a97-b5f3-77fd6dcba1ab" containerID="93249f7db2dc0c3a5b0fe1351b49e56d1937b973c4c8c817cae063e4b26914a3" exitCode=1 Mar 18 09:04:07.384764 master-0 kubenswrapper[28766]: I0318 09:04:07.384709 28766 generic.go:334] "Generic (PLEG): container finished" podID="005a0b4c-8e2d-4483-98e9-55badf7099c5" containerID="83c1b5b71c6b991cce706c7d71cc023db485e610df2dae94288a380e76fcfca1" exitCode=0 Mar 18 09:04:07.385762 master-0 kubenswrapper[28766]: E0318 09:04:07.385731 28766 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 18 09:04:07.386960 master-0 kubenswrapper[28766]: I0318 09:04:07.386926 28766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-64854d9cff-khm5n_29ba6765-61c9-4f78-8f44-570418000c5c/snapshot-controller/3.log" Mar 18 09:04:07.387060 master-0 kubenswrapper[28766]: I0318 09:04:07.386976 28766 generic.go:334] "Generic (PLEG): container finished" podID="29ba6765-61c9-4f78-8f44-570418000c5c" containerID="eb8c2b58df79128dda5fbfc30648d542ddf26d01723e229dbbb1234b6cbc0067" exitCode=1 Mar 18 09:04:07.389290 master-0 kubenswrapper[28766]: I0318 09:04:07.389251 28766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-1-master-0_1edfa49b-d0e7-4324-aace-b115b41ddae0/installer/0.log" Mar 18 09:04:07.389358 master-0 kubenswrapper[28766]: I0318 
09:04:07.389310 28766 generic.go:334] "Generic (PLEG): container finished" podID="1edfa49b-d0e7-4324-aace-b115b41ddae0" containerID="91060a1df8ac508bd63d3fe87c3026c13bbc60c7a49e9b85f1b8ff384fcdd40b" exitCode=1 Mar 18 09:04:07.392201 master-0 kubenswrapper[28766]: E0318 09:04:07.392158 28766 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="400ms" Mar 18 09:04:07.400328 master-0 kubenswrapper[28766]: I0318 09:04:07.400241 28766 generic.go:334] "Generic (PLEG): container finished" podID="4cc14de2-59e8-49e1-9eeb-f87c5e9d8a75" containerID="c1000328fdb806ec77d49cec50c1824461d4c39b599af7554159ee64748ea882" exitCode=0 Mar 18 09:04:07.405141 master-0 kubenswrapper[28766]: I0318 09:04:07.405097 28766 generic.go:334] "Generic (PLEG): container finished" podID="094204df314fe45bd5af12ca1b4622bb" containerID="8d9361279d59d84f68c69450e42602da65b59d791ddc81fa0875ca16322aadf2" exitCode=0 Mar 18 09:04:07.405141 master-0 kubenswrapper[28766]: I0318 09:04:07.405139 28766 generic.go:334] "Generic (PLEG): container finished" podID="094204df314fe45bd5af12ca1b4622bb" containerID="070d05778f03eb8121f42051c1852470fb61e1c95f54e85ee0be41826b2301b3" exitCode=0 Mar 18 09:04:07.405272 master-0 kubenswrapper[28766]: I0318 09:04:07.405149 28766 generic.go:334] "Generic (PLEG): container finished" podID="094204df314fe45bd5af12ca1b4622bb" containerID="651e82575789e45afdb3cab141808fa3f37d722ac54ebc209361597ebc814204" exitCode=0 Mar 18 09:04:07.411090 master-0 kubenswrapper[28766]: I0318 09:04:07.411002 28766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-8c94f4649-r758j_772bc250-2e57-4ce0-883c-d44281fcb0be/openshift-controller-manager-operator/0.log" Mar 18 09:04:07.411090 master-0 
kubenswrapper[28766]: I0318 09:04:07.411057 28766 generic.go:334] "Generic (PLEG): container finished" podID="772bc250-2e57-4ce0-883c-d44281fcb0be" containerID="fb1d8cdaae1091b519c657021dc4e61ba66eba83ec8f94dd444327353dc0ffc0" exitCode=1 Mar 18 09:04:07.413516 master-0 kubenswrapper[28766]: I0318 09:04:07.413423 28766 generic.go:334] "Generic (PLEG): container finished" podID="e0d127be-2d13-449b-915b-2d49052baf02" containerID="d6df90fd64794ccde6d9875bd568053d6569144302c72ab9173cf35f762dfd22" exitCode=0 Mar 18 09:04:07.417805 master-0 kubenswrapper[28766]: I0318 09:04:07.417782 28766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-6f69995874-cf6qn_97730ec2-e6f1-4f8c-b85c-3c10623d06ce/cluster-baremetal-operator/1.log" Mar 18 09:04:07.418230 master-0 kubenswrapper[28766]: I0318 09:04:07.418184 28766 generic.go:334] "Generic (PLEG): container finished" podID="97730ec2-e6f1-4f8c-b85c-3c10623d06ce" containerID="ba57860bb4615dc613e8795f7f3436663ef867da7a5a525958b65d7222c4b23f" exitCode=1 Mar 18 09:04:07.420324 master-0 kubenswrapper[28766]: I0318 09:04:07.420293 28766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-catalogd_catalogd-controller-manager-6864dc98f7-phjp8_43fbd379-dd1e-4287-bd76-fd3ec51cde43/manager/1.log" Mar 18 09:04:07.420686 master-0 kubenswrapper[28766]: I0318 09:04:07.420665 28766 generic.go:334] "Generic (PLEG): container finished" podID="43fbd379-dd1e-4287-bd76-fd3ec51cde43" containerID="55bd80bc1088dec062336fd1b1d85e5a9546eaf4e05088f85819a8147a8e19b3" exitCode=1 Mar 18 09:04:07.422138 master-0 kubenswrapper[28766]: I0318 09:04:07.422118 28766 generic.go:334] "Generic (PLEG): container finished" podID="3068e569-5a4e-4fc3-88f4-5684d093c8e6" containerID="54302cdad4a743df0858f296cab89bada38f903f22c51e9048d06d7146e16775" exitCode=0 Mar 18 09:04:07.427992 master-0 kubenswrapper[28766]: I0318 09:04:07.427946 28766 generic.go:334] "Generic (PLEG): container finished" 
podID="edc7f629-4288-443b-aa8e-78bc6a09c848" containerID="4baf438f84441de9a2ddd79dfbe1c9dc6b19f232a4b6153cb8db1151df46918a" exitCode=0 Mar 18 09:04:07.429784 master-0 kubenswrapper[28766]: I0318 09:04:07.429745 28766 generic.go:334] "Generic (PLEG): container finished" podID="8a6ab2be-d018-4fd5-bfbb-6b88aec28663" containerID="5e84b000c1316fb6659579cb173f67777226d532d34aa25b987bd230e2ca4fb7" exitCode=0 Mar 18 09:04:07.435232 master-0 kubenswrapper[28766]: I0318 09:04:07.435172 28766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-operator_network-operator-7bd846bfc4-5r5r4_07a4fd92-0fd1-4688-b2db-de615d75971e/network-operator/0.log" Mar 18 09:04:07.435232 master-0 kubenswrapper[28766]: I0318 09:04:07.435203 28766 generic.go:334] "Generic (PLEG): container finished" podID="07a4fd92-0fd1-4688-b2db-de615d75971e" containerID="20bac68a3a787cd3ab838f8bf47eee1e23fd920610fa248db61e044af450ce49" exitCode=255 Mar 18 09:04:07.439933 master-0 kubenswrapper[28766]: I0318 09:04:07.439877 28766 generic.go:334] "Generic (PLEG): container finished" podID="e2ade7e6-cecd-4e98-8f85-ea8219303d75" containerID="77402342b68e7cb4ec7ebd972b9ac7442e45f3236ab9cfbb373363dfbf591b0c" exitCode=0 Mar 18 09:04:07.439933 master-0 kubenswrapper[28766]: I0318 09:04:07.439927 28766 generic.go:334] "Generic (PLEG): container finished" podID="e2ade7e6-cecd-4e98-8f85-ea8219303d75" containerID="2966e21e324cf74e9b19c0ead035010d27be318a44ea8cb0c4864e39d4076171" exitCode=0 Mar 18 09:04:07.440054 master-0 kubenswrapper[28766]: I0318 09:04:07.439939 28766 generic.go:334] "Generic (PLEG): container finished" podID="e2ade7e6-cecd-4e98-8f85-ea8219303d75" containerID="23c3d665afaf3cc37466eca134b1f313b3fb9bff8fd0cf090f0e4b47784dbfda" exitCode=0 Mar 18 09:04:07.442704 master-0 kubenswrapper[28766]: I0318 09:04:07.442662 28766 generic.go:334] "Generic (PLEG): container finished" podID="f65344cd-8571-4a78-927f-eec46ec1af51" 
containerID="763ae2339eb63a918ff19ddcb00ca5fa223a5d7c07aecf5c680ab374869c6485" exitCode=0 Mar 18 09:04:07.442704 master-0 kubenswrapper[28766]: I0318 09:04:07.442698 28766 generic.go:334] "Generic (PLEG): container finished" podID="f65344cd-8571-4a78-927f-eec46ec1af51" containerID="74d7e74934812b2b075e232eef44fa1c57bdc06f53f3181da801a35e02650482" exitCode=0 Mar 18 09:04:07.445034 master-0 kubenswrapper[28766]: I0318 09:04:07.444994 28766 generic.go:334] "Generic (PLEG): container finished" podID="fcf89a76-7a94-46d3-853e-68e986563764" containerID="cc2fad03c96d37b754988a128065f6939d46f7a48a89eb78a7b395dfd2147290" exitCode=0 Mar 18 09:04:07.447630 master-0 kubenswrapper[28766]: I0318 09:04:07.447596 28766 generic.go:334] "Generic (PLEG): container finished" podID="49fac1b46a11e49501805e891baae4a9" containerID="b0564925d47f5840821e3c795a9cfcae45b42d4975ada3f3aedc3639ab59cfb5" exitCode=0 Mar 18 09:04:07.447630 master-0 kubenswrapper[28766]: I0318 09:04:07.447621 28766 generic.go:334] "Generic (PLEG): container finished" podID="49fac1b46a11e49501805e891baae4a9" containerID="f2d4d2d49e0c856fff93c30b0d719c8529754ea148952a7ef6bb3db593f16a16" exitCode=0 Mar 18 09:04:07.449200 master-0 kubenswrapper[28766]: I0318 09:04:07.449172 28766 generic.go:334] "Generic (PLEG): container finished" podID="0fcf6eb0-f4dd-41dd-86ee-9bcb9996546d" containerID="4831da6b8225ffa3b61ecb0f1ce7047144ac489e1e26b31e6165fbfd478f3144" exitCode=0 Mar 18 09:04:07.451005 master-0 kubenswrapper[28766]: I0318 09:04:07.450947 28766 generic.go:334] "Generic (PLEG): container finished" podID="97215428-2d5d-460f-947c-f2a490bc428d" containerID="af45d378024ee7c220ba697e8109094cfb054515091d9efd5c22113a8f02ec12" exitCode=0 Mar 18 09:04:07.453763 master-0 kubenswrapper[28766]: I0318 09:04:07.453727 28766 generic.go:334] "Generic (PLEG): container finished" podID="c229b92d307e46237f6273edcc98d387" containerID="83c47aaabc2b561d44e630d0889d72720d976ad68c17142beae85f320c2926a1" exitCode=0 Mar 18 09:04:07.453763 master-0 
kubenswrapper[28766]: I0318 09:04:07.453750 28766 generic.go:334] "Generic (PLEG): container finished" podID="c229b92d307e46237f6273edcc98d387" containerID="20f67081f1a83df8fa8825fe68b2011f445e7f6dd6a012bd23cbd198b1272dee" exitCode=0 Mar 18 09:04:07.453763 master-0 kubenswrapper[28766]: I0318 09:04:07.453757 28766 generic.go:334] "Generic (PLEG): container finished" podID="c229b92d307e46237f6273edcc98d387" containerID="9e36a51bcf12ae7db2a94f2fd56063ee6085dd854239e6802000e5e8cda9a85b" exitCode=0 Mar 18 09:04:07.453763 master-0 kubenswrapper[28766]: I0318 09:04:07.453766 28766 generic.go:334] "Generic (PLEG): container finished" podID="c229b92d307e46237f6273edcc98d387" containerID="5c751dbb03b0e78f3ed7a9a2441228c32321443d29de48b1bf17ef0e83072bd3" exitCode=2 Mar 18 09:04:07.456071 master-0 kubenswrapper[28766]: I0318 09:04:07.456047 28766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-66b84d69b-7h94d_94e9e4fc-bfaa-4ba5-ad6d-fe76b91932e9/ingress-operator/4.log" Mar 18 09:04:07.456455 master-0 kubenswrapper[28766]: I0318 09:04:07.456424 28766 generic.go:334] "Generic (PLEG): container finished" podID="94e9e4fc-bfaa-4ba5-ad6d-fe76b91932e9" containerID="fe944915d18e348bfb79682afadf9c6819f22fab134c6c6c62f0a35f31f26a1f" exitCode=1 Mar 18 09:04:07.458824 master-0 kubenswrapper[28766]: I0318 09:04:07.458801 28766 generic.go:334] "Generic (PLEG): container finished" podID="52e32e2d-33ab-4351-ae8a-80acd6077d70" containerID="570abd7afd841c39fdf3ec02f6786671fcd82b78141a177d4622bd38088a5759" exitCode=0 Mar 18 09:04:07.458824 master-0 kubenswrapper[28766]: I0318 09:04:07.458823 28766 generic.go:334] "Generic (PLEG): container finished" podID="52e32e2d-33ab-4351-ae8a-80acd6077d70" containerID="f1681da17a74338c034d7dc91920cd7fa391334049c5dee2d2d6586f7e2d97b5" exitCode=0 Mar 18 09:04:07.470448 master-0 kubenswrapper[28766]: I0318 09:04:07.470403 28766 generic.go:334] "Generic (PLEG): container finished" 
podID="c83737980b9ee109184b1d78e942cf36" containerID="965c96bceffdf0d2dfe6811ad54d4d08d2afc86948c8800b709c2385cc93d84e" exitCode=0 Mar 18 09:04:07.472952 master-0 kubenswrapper[28766]: I0318 09:04:07.472919 28766 generic.go:334] "Generic (PLEG): container finished" podID="11a2f93448b9d54da9854663936e2b73" containerID="8518fd5fa5f57002df2dc9e0199a7271feebc95e929446acfa8563e63e176f72" exitCode=0 Mar 18 09:04:07.476299 master-0 kubenswrapper[28766]: I0318 09:04:07.476261 28766 generic.go:334] "Generic (PLEG): container finished" podID="4146a62d-e37b-4295-90ca-b23f5e3d1112" containerID="2fc99621e6e4ad392bd150a56b2542828a2fbbced942d108f4ee62997bcb92eb" exitCode=0 Mar 18 09:04:07.485908 master-0 kubenswrapper[28766]: E0318 09:04:07.485864 28766 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 18 09:04:07.489402 master-0 kubenswrapper[28766]: I0318 09:04:07.489367 28766 generic.go:334] "Generic (PLEG): container finished" podID="c110b293-2c6b-496b-b015-23aada98cb4b" containerID="851a9b4a39c1a238b36e5625cadf0309e8c60fabaa4ea940ca6a7ae0197a27fb" exitCode=0 Mar 18 09:04:07.501775 master-0 kubenswrapper[28766]: I0318 09:04:07.500807 28766 generic.go:334] "Generic (PLEG): container finished" podID="5982111d-f4c6-4335-9b40-3142758fc2bc" containerID="9375c67121087e2f83dd2c8b94c0ff17721fa9588235ead108bb8a1e451225b5" exitCode=0 Mar 18 09:04:07.504193 master-0 kubenswrapper[28766]: I0318 09:04:07.503899 28766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-3-master-0_c6fb9336-3f19-4220-93ee-a5a61e26340b/installer/0.log" Mar 18 09:04:07.504300 master-0 kubenswrapper[28766]: I0318 09:04:07.504219 28766 generic.go:334] "Generic (PLEG): container finished" podID="c6fb9336-3f19-4220-93ee-a5a61e26340b" containerID="a0811de98d66913ef78505cbfb268009b3b82b021cf08be06bcac5fba5f9e228" exitCode=1 Mar 18 09:04:07.514445 master-0 kubenswrapper[28766]: I0318 09:04:07.514390 28766 
generic.go:334] "Generic (PLEG): container finished" podID="2700f537-8f31-4380-a527-3e697a8122cc" containerID="fa4ea33fa46744eacabcd0bcd52fb003649aa1cd4700008b10cea57f832bf122" exitCode=0 Mar 18 09:04:07.516435 master-0 kubenswrapper[28766]: I0318 09:04:07.516394 28766 generic.go:334] "Generic (PLEG): container finished" podID="28d2bb97-ff93-4772-96fd-318fa62e3a87" containerID="cf9e9bddbf3499401835a2ff896142cd9409d0448e901ff2faa3c5fb21f85146" exitCode=0 Mar 18 09:04:07.533418 master-0 kubenswrapper[28766]: E0318 09:04:07.533384 28766 kubelet.go:2359] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Mar 18 09:04:07.586038 master-0 kubenswrapper[28766]: E0318 09:04:07.586002 28766 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 18 09:04:07.686644 master-0 kubenswrapper[28766]: E0318 09:04:07.686461 28766 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 18 09:04:07.786756 master-0 kubenswrapper[28766]: E0318 09:04:07.786681 28766 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 18 09:04:07.794129 master-0 kubenswrapper[28766]: E0318 09:04:07.794050 28766 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="800ms" Mar 18 09:04:07.887486 master-0 kubenswrapper[28766]: E0318 09:04:07.887390 28766 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 18 09:04:07.933685 master-0 kubenswrapper[28766]: E0318 09:04:07.933643 28766 kubelet.go:2359] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Mar 18 09:04:07.988251 master-0 
kubenswrapper[28766]: E0318 09:04:07.988208 28766 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 18 09:04:07.998928 master-0 kubenswrapper[28766]: W0318 09:04:07.998767 28766 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 18 09:04:07.999063 master-0 kubenswrapper[28766]: E0318 09:04:07.998975 28766 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Mar 18 09:04:08.016994 master-0 kubenswrapper[28766]: W0318 09:04:08.016838 28766 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.sno.openstack.lab:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 18 09:04:08.017154 master-0 kubenswrapper[28766]: E0318 09:04:08.017043 28766 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.sno.openstack.lab:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Mar 18 09:04:08.090403 master-0 kubenswrapper[28766]: E0318 09:04:08.090319 28766 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 18 09:04:08.169605 master-0 kubenswrapper[28766]: I0318 09:04:08.168779 28766 csi_plugin.go:884] 
Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 18 09:04:08.191876 master-0 kubenswrapper[28766]: E0318 09:04:08.191362 28766 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 18 09:04:08.291591 master-0 kubenswrapper[28766]: E0318 09:04:08.291474 28766 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 18 09:04:08.314136 master-0 kubenswrapper[28766]: W0318 09:04:08.314046 28766 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.sno.openstack.lab:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster-0&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 18 09:04:08.314401 master-0 kubenswrapper[28766]: E0318 09:04:08.314146 28766 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster-0&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Mar 18 09:04:08.391682 master-0 kubenswrapper[28766]: E0318 09:04:08.391642 28766 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 18 09:04:08.463571 master-0 kubenswrapper[28766]: I0318 09:04:08.463528 28766 manager.go:324] Recovery completed Mar 18 09:04:08.492167 master-0 kubenswrapper[28766]: E0318 09:04:08.492097 28766 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 18 09:04:08.528029 master-0 kubenswrapper[28766]: I0318 09:04:08.527963 28766 generic.go:334] "Generic (PLEG): 
container finished" podID="49fac1b46a11e49501805e891baae4a9" containerID="5ec3e7108eee8c08ca66f6f618d1955dea098f10f4832f7e925bd7f46bce001f" exitCode=0 Mar 18 09:04:08.569460 master-0 kubenswrapper[28766]: I0318 09:04:08.569344 28766 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 18 09:04:08.571970 master-0 kubenswrapper[28766]: I0318 09:04:08.571912 28766 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 18 09:04:08.572034 master-0 kubenswrapper[28766]: I0318 09:04:08.571980 28766 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 18 09:04:08.572034 master-0 kubenswrapper[28766]: I0318 09:04:08.571995 28766 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 18 09:04:08.579956 master-0 kubenswrapper[28766]: I0318 09:04:08.579841 28766 cpu_manager.go:225] "Starting CPU manager" policy="none" Mar 18 09:04:08.579956 master-0 kubenswrapper[28766]: I0318 09:04:08.579979 28766 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Mar 18 09:04:08.579956 master-0 kubenswrapper[28766]: I0318 09:04:08.580024 28766 state_mem.go:36] "Initialized new in-memory state store" Mar 18 09:04:08.580278 master-0 kubenswrapper[28766]: I0318 09:04:08.580257 28766 state_mem.go:88] "Updated default CPUSet" cpuSet="" Mar 18 09:04:08.580314 master-0 kubenswrapper[28766]: I0318 09:04:08.580281 28766 state_mem.go:96] "Updated CPUSet assignments" assignments={} Mar 18 09:04:08.580314 master-0 kubenswrapper[28766]: I0318 09:04:08.580310 28766 state_checkpoint.go:136] "State checkpoint: restored state from checkpoint" Mar 18 09:04:08.580375 master-0 kubenswrapper[28766]: I0318 09:04:08.580320 28766 state_checkpoint.go:137] "State checkpoint: defaultCPUSet" defaultCpuSet="" Mar 18 09:04:08.580375 master-0 kubenswrapper[28766]: I0318 09:04:08.580331 28766 
policy_none.go:49] "None policy: Start" Mar 18 09:04:08.586580 master-0 kubenswrapper[28766]: I0318 09:04:08.586537 28766 memory_manager.go:170] "Starting memorymanager" policy="None" Mar 18 09:04:08.586647 master-0 kubenswrapper[28766]: I0318 09:04:08.586606 28766 state_mem.go:35] "Initializing new in-memory state store" Mar 18 09:04:08.587081 master-0 kubenswrapper[28766]: I0318 09:04:08.587060 28766 state_mem.go:75] "Updated machine memory state" Mar 18 09:04:08.587123 master-0 kubenswrapper[28766]: I0318 09:04:08.587086 28766 state_checkpoint.go:82] "State checkpoint: restored state from checkpoint" Mar 18 09:04:08.592629 master-0 kubenswrapper[28766]: E0318 09:04:08.592587 28766 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 18 09:04:08.596310 master-0 kubenswrapper[28766]: E0318 09:04:08.596224 28766 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="1.6s" Mar 18 09:04:08.604049 master-0 kubenswrapper[28766]: I0318 09:04:08.604033 28766 manager.go:334] "Starting Device Plugin manager" Mar 18 09:04:08.604178 master-0 kubenswrapper[28766]: I0318 09:04:08.604166 28766 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Mar 18 09:04:08.604241 master-0 kubenswrapper[28766]: I0318 09:04:08.604231 28766 server.go:79] "Starting device plugin registration server" Mar 18 09:04:08.604738 master-0 kubenswrapper[28766]: I0318 09:04:08.604726 28766 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 18 09:04:08.604833 master-0 kubenswrapper[28766]: I0318 09:04:08.604798 28766 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 18 09:04:08.605047 
master-0 kubenswrapper[28766]: I0318 09:04:08.605004 28766 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry"
Mar 18 09:04:08.605230 master-0 kubenswrapper[28766]: I0318 09:04:08.605202 28766 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts"
Mar 18 09:04:08.605230 master-0 kubenswrapper[28766]: I0318 09:04:08.605220 28766 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Mar 18 09:04:08.620068 master-0 kubenswrapper[28766]: E0318 09:04:08.620007 28766 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"master-0\" not found"
Mar 18 09:04:08.680656 master-0 kubenswrapper[28766]: E0318 09:04:08.680459 28766 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/default/events\": dial tcp 192.168.32.10:6443: connect: connection refused" event="&Event{ObjectMeta:{master-0.189de41e5423e52b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 09:04:07.165633835 +0000 UTC m=+0.179892511,LastTimestamp:2026-03-18 09:04:07.165633835 +0000 UTC m=+0.179892511,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 18 09:04:08.706020 master-0 kubenswrapper[28766]: I0318 09:04:08.705960 28766 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 18 09:04:08.708482 master-0 kubenswrapper[28766]: I0318 09:04:08.708411 28766 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 18 09:04:08.708482 master-0 kubenswrapper[28766]: I0318 09:04:08.708443 28766 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 18 09:04:08.708482 master-0 kubenswrapper[28766]: I0318 09:04:08.708455 28766 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 18 09:04:08.708482 master-0 kubenswrapper[28766]: I0318 09:04:08.708477 28766 kubelet_node_status.go:76] "Attempting to register node" node="master-0"
Mar 18 09:04:08.709396 master-0 kubenswrapper[28766]: E0318 09:04:08.709338 28766 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/nodes\": dial tcp 192.168.32.10:6443: connect: connection refused" node="master-0"
Mar 18 09:04:08.710234 master-0 kubenswrapper[28766]: W0318 09:04:08.710133 28766 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.sno.openstack.lab:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Mar 18 09:04:08.710333 master-0 kubenswrapper[28766]: E0318 09:04:08.710250 28766 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.sno.openstack.lab:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Mar 18 09:04:08.734955 master-0 kubenswrapper[28766]: I0318 09:04:08.734764 28766 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0","openshift-kube-controller-manager/kube-controller-manager-master-0","openshift-kube-scheduler/openshift-kube-scheduler-master-0","openshift-machine-config-operator/kube-rbac-proxy-crio-master-0","openshift-etcd/etcd-master-0","openshift-kube-apiserver/kube-apiserver-master-0"]
Mar 18 09:04:08.735083 master-0 kubenswrapper[28766]: I0318 09:04:08.735049 28766 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 18 09:04:08.739002 master-0 kubenswrapper[28766]: I0318 09:04:08.738943 28766 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 18 09:04:08.739065 master-0 kubenswrapper[28766]: I0318 09:04:08.739028 28766 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 18 09:04:08.739065 master-0 kubenswrapper[28766]: I0318 09:04:08.739049 28766 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 18 09:04:08.739242 master-0 kubenswrapper[28766]: I0318 09:04:08.739211 28766 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 18 09:04:08.739470 master-0 kubenswrapper[28766]: I0318 09:04:08.739425 28766 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 18 09:04:08.743764 master-0 kubenswrapper[28766]: I0318 09:04:08.743724 28766 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 18 09:04:08.743831 master-0 kubenswrapper[28766]: I0318 09:04:08.743767 28766 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 18 09:04:08.743831 master-0 kubenswrapper[28766]: I0318 09:04:08.743786 28766 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 18 09:04:08.743928 master-0 kubenswrapper[28766]: I0318 09:04:08.743830 28766 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 18 09:04:08.743928 master-0 kubenswrapper[28766]: I0318 09:04:08.743883 28766 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 18 09:04:08.743928 master-0 kubenswrapper[28766]: I0318 09:04:08.743900 28766 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 18 09:04:08.744076 master-0 kubenswrapper[28766]: I0318 09:04:08.744028 28766 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 18 09:04:08.744233 master-0 kubenswrapper[28766]: I0318 09:04:08.744203 28766 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 18 09:04:08.746691 master-0 kubenswrapper[28766]: I0318 09:04:08.746662 28766 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 18 09:04:08.746691 master-0 kubenswrapper[28766]: I0318 09:04:08.746686 28766 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 18 09:04:08.746774 master-0 kubenswrapper[28766]: I0318 09:04:08.746696 28766 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 18 09:04:08.746774 master-0 kubenswrapper[28766]: I0318 09:04:08.746772 28766 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 18 09:04:08.746901 master-0 kubenswrapper[28766]: I0318 09:04:08.746770 28766 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 18 09:04:08.746956 master-0 kubenswrapper[28766]: I0318 09:04:08.746920 28766 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 18 09:04:08.746956 master-0 kubenswrapper[28766]: I0318 09:04:08.746935 28766 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 18 09:04:08.747020 master-0 kubenswrapper[28766]: I0318 09:04:08.746967 28766 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 18 09:04:08.749531 master-0 kubenswrapper[28766]: I0318 09:04:08.749496 28766 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 18 09:04:08.749591 master-0 kubenswrapper[28766]: I0318 09:04:08.749544 28766 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 18 09:04:08.749591 master-0 kubenswrapper[28766]: I0318 09:04:08.749560 28766 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 18 09:04:08.749732 master-0 kubenswrapper[28766]: I0318 09:04:08.749709 28766 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 18 09:04:08.749835 master-0 kubenswrapper[28766]: I0318 09:04:08.749816 28766 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 18 09:04:08.750378 master-0 kubenswrapper[28766]: I0318 09:04:08.750364 28766 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 18 09:04:08.750839 master-0 kubenswrapper[28766]: I0318 09:04:08.750647 28766 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 18 09:04:08.750975 master-0 kubenswrapper[28766]: I0318 09:04:08.750961 28766 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 18 09:04:08.758705 master-0 kubenswrapper[28766]: I0318 09:04:08.758659 28766 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 18 09:04:08.758889 master-0 kubenswrapper[28766]: I0318 09:04:08.758728 28766 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 18 09:04:08.758889 master-0 kubenswrapper[28766]: I0318 09:04:08.758665 28766 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 18 09:04:08.758889 master-0 kubenswrapper[28766]: I0318 09:04:08.758745 28766 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 18 09:04:08.759079 master-0 kubenswrapper[28766]: I0318 09:04:08.758773 28766 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 18 09:04:08.759152 master-0 kubenswrapper[28766]: I0318 09:04:08.759140 28766 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 18 09:04:08.759398 master-0 kubenswrapper[28766]: I0318 09:04:08.759383 28766 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 18 09:04:08.759678 master-0 kubenswrapper[28766]: I0318 09:04:08.759630 28766 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 18 09:04:08.762938 master-0 kubenswrapper[28766]: I0318 09:04:08.762874 28766 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 18 09:04:08.763019 master-0 kubenswrapper[28766]: I0318 09:04:08.762949 28766 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 18 09:04:08.763019 master-0 kubenswrapper[28766]: I0318 09:04:08.762963 28766 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 18 09:04:08.763194 master-0 kubenswrapper[28766]: I0318 09:04:08.763169 28766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="898f7ad0780d754bd2a9eb084988e2a8df18f477faf934c2f22dfd1716e45de9"
Mar 18 09:04:08.763358 master-0 kubenswrapper[28766]: I0318 09:04:08.763213 28766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" event={"ID":"8e7a82869988463543d3d8dd1f0b5fe3","Type":"ContainerStarted","Data":"3fca4409620121a7f43cfb37414e381868422175702286563fa7900f579aad87"}
Mar 18 09:04:08.763404 master-0 kubenswrapper[28766]: I0318 09:04:08.763358 28766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" event={"ID":"8e7a82869988463543d3d8dd1f0b5fe3","Type":"ContainerStarted","Data":"32c5cad9d5ce7a6a9868e1321b49281ebb4f7769c90afec706cbbbe9a7cdbdd6"}
Mar 18 09:04:08.763404 master-0 kubenswrapper[28766]: I0318 09:04:08.763377 28766 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 18 09:04:08.763466 master-0 kubenswrapper[28766]: I0318 09:04:08.763406 28766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"b45ea2ef1cf2bc9d1d994d6538ae0a64","Type":"ContainerDied","Data":"8f346ba585e275f6daeb7ee0b1f9dbc8a6626d795dda146132cd1c080ea2a285"}
Mar 18 09:04:08.763466 master-0 kubenswrapper[28766]: I0318 09:04:08.763425 28766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"b45ea2ef1cf2bc9d1d994d6538ae0a64","Type":"ContainerStarted","Data":"4dced598bcd2040f1c605c245256a2161b2f459ac4faa81c6af5275d4099b859"}
Mar 18 09:04:08.763466 master-0 kubenswrapper[28766]: I0318 09:04:08.763443 28766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cff5a62c6fe250b627c150b3ba60d6fe2a04d4b96c22543f1ae21c885d156295"
Mar 18 09:04:08.763557 master-0 kubenswrapper[28766]: I0318 09:04:08.763469 28766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"1249822f86f23526277d165c0d5d3c19","Type":"ContainerStarted","Data":"128a5d65976993628d981fee7385d5588c74fc7f9ab0a6e9bb3f72584d42ed3d"}
Mar 18 09:04:08.763557 master-0 kubenswrapper[28766]: I0318 09:04:08.763487 28766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"1249822f86f23526277d165c0d5d3c19","Type":"ContainerDied","Data":"65e224202ac926a558f67bd7907be94c9b8d61e87724e521620bd2b30bc9d0dc"}
Mar 18 09:04:08.763557 master-0 kubenswrapper[28766]: I0318 09:04:08.763501 28766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"1249822f86f23526277d165c0d5d3c19","Type":"ContainerDied","Data":"60b7a6828ff9115f3e360da4ea3b39ddb71f9d86fc37454c4e2b71253e2b011f"}
Mar 18 09:04:08.763557 master-0 kubenswrapper[28766]: I0318 09:04:08.763513 28766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"1249822f86f23526277d165c0d5d3c19","Type":"ContainerStarted","Data":"65a818ad31dbd4fa7bc3752867fcfb68d605bd15a5390e756d551630b2da7bfb"}
Mar 18 09:04:08.763666 master-0 kubenswrapper[28766]: I0318 09:04:08.763654 28766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="13017e08077deeefc07c7fe44f54a64a8b6b49173dc26b6f0df3026587c8b3ff"
Mar 18 09:04:08.763764 master-0 kubenswrapper[28766]: I0318 09:04:08.763718 28766 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 18 09:04:08.763810 master-0 kubenswrapper[28766]: I0318 09:04:08.763785 28766 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 18 09:04:08.763810 master-0 kubenswrapper[28766]: I0318 09:04:08.763800 28766 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 18 09:04:08.763947 master-0 kubenswrapper[28766]: I0318 09:04:08.763726 28766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6ceebd5fc2e20325f9aee4b93a902553c4a60d97de2a44d71188013bb71ab91c"
Mar 18 09:04:08.764017 master-0 kubenswrapper[28766]: I0318 09:04:08.763987 28766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="be0a7a0ac0aa5258d96034f680e2106c4672594f5322381bd2ce5d9a5f255068"
Mar 18 09:04:08.764108 master-0 kubenswrapper[28766]: I0318 09:04:08.764054 28766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"094204df314fe45bd5af12ca1b4622bb","Type":"ContainerStarted","Data":"ec078e5fb5c6af91fa9756d663010f378e1c2f5cbae267347ef882fcddb85660"}
Mar 18 09:04:08.764145 master-0 kubenswrapper[28766]: I0318 09:04:08.764122 28766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"094204df314fe45bd5af12ca1b4622bb","Type":"ContainerStarted","Data":"fea5c61028d6f5a8c5c0e3c0cf483e32008841fc099a5bd1b2de142c89560c9b"}
Mar 18 09:04:08.764189 master-0 kubenswrapper[28766]: I0318 09:04:08.764146 28766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"094204df314fe45bd5af12ca1b4622bb","Type":"ContainerStarted","Data":"2367b625367ed8557fb256a68af6cdc71a881e71bc9abf0a04640ca6a4bbcdc8"}
Mar 18 09:04:08.764189 master-0 kubenswrapper[28766]: I0318 09:04:08.764165 28766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"094204df314fe45bd5af12ca1b4622bb","Type":"ContainerStarted","Data":"bb218c54057c5adaf7c587bdc57fb89f6a61886040b1c8a6b6b58d51f19f2738"}
Mar 18 09:04:08.764296 master-0 kubenswrapper[28766]: I0318 09:04:08.764260 28766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"094204df314fe45bd5af12ca1b4622bb","Type":"ContainerStarted","Data":"112a95a0ecbb7e902166f830971fb87997d7e03daddc43d6c1037eba7ffe50d4"}
Mar 18 09:04:08.764345 master-0 kubenswrapper[28766]: I0318 09:04:08.764308 28766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"094204df314fe45bd5af12ca1b4622bb","Type":"ContainerDied","Data":"8d9361279d59d84f68c69450e42602da65b59d791ddc81fa0875ca16322aadf2"}
Mar 18 09:04:08.764379 master-0 kubenswrapper[28766]: I0318 09:04:08.764349 28766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"094204df314fe45bd5af12ca1b4622bb","Type":"ContainerDied","Data":"070d05778f03eb8121f42051c1852470fb61e1c95f54e85ee0be41826b2301b3"}
Mar 18 09:04:08.764379 master-0 kubenswrapper[28766]: I0318 09:04:08.764373 28766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"094204df314fe45bd5af12ca1b4622bb","Type":"ContainerDied","Data":"651e82575789e45afdb3cab141808fa3f37d722ac54ebc209361597ebc814204"}
Mar 18 09:04:08.764442 master-0 kubenswrapper[28766]: I0318 09:04:08.764392 28766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"094204df314fe45bd5af12ca1b4622bb","Type":"ContainerStarted","Data":"32faaf71e97855a1cb6aa3bd19d52c689531407fd638810606403df329a94675"}
Mar 18 09:04:08.764442 master-0 kubenswrapper[28766]: I0318 09:04:08.764433 28766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"221b44bcdfcd6cb77b8e2c3e2f0f2d4d","Type":"ContainerStarted","Data":"4af328de55b37c7b433e572f442808a77f2decb59e8d9510022e17913ed81d1a"}
Mar 18 09:04:08.764503 master-0 kubenswrapper[28766]: I0318 09:04:08.764453 28766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"221b44bcdfcd6cb77b8e2c3e2f0f2d4d","Type":"ContainerStarted","Data":"05af700b075e3edb59ffbfea5410d94b5accce0f9865867a900b8da32596bc13"}
Mar 18 09:04:08.764503 master-0 kubenswrapper[28766]: I0318 09:04:08.764472 28766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"221b44bcdfcd6cb77b8e2c3e2f0f2d4d","Type":"ContainerStarted","Data":"746628ca0f199826dfd3bc06cf87dbdd87e54d430ca10bd219daa6085cbf8a4d"}
Mar 18 09:04:08.764503 master-0 kubenswrapper[28766]: I0318 09:04:08.764491 28766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"221b44bcdfcd6cb77b8e2c3e2f0f2d4d","Type":"ContainerStarted","Data":"d97db5d026fe92a56cfbca9758234c41757acb8dc1abb2bb2c234db07d8dfdc2"}
Mar 18 09:04:08.764587 master-0 kubenswrapper[28766]: I0318 09:04:08.764510 28766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"221b44bcdfcd6cb77b8e2c3e2f0f2d4d","Type":"ContainerStarted","Data":"78bf827b88ee656669c068d855b66ac1c4ec3fa61f0cd2ad36e3510f8a53aa74"}
Mar 18 09:04:08.764587 master-0 kubenswrapper[28766]: I0318 09:04:08.764556 28766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="564cb8426369721ba7067b6ba1d2db58be0d2b7219cd8ee2b9c066b14b29b589"
Mar 18 09:04:08.764687 master-0 kubenswrapper[28766]: I0318 09:04:08.764665 28766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="441a9e7e4c0388348d1f8c78cfabb9e80774ef9142ffdc40381f1188cdfe4527"
Mar 18 09:04:08.764724 master-0 kubenswrapper[28766]: I0318 09:04:08.764695 28766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c86f0daa1af8b571957ffb1df5a750b21d97fe93761c60692060e0a17515fcbd"
Mar 18 09:04:08.764756 master-0 kubenswrapper[28766]: I0318 09:04:08.764725 28766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="86fa4125270c3c49a4a19e870a994342691ddd1c81df5fef0113e7b2940e9561"
Mar 18 09:04:08.764798 master-0 kubenswrapper[28766]: I0318 09:04:08.764779 28766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dda73eca8049d85d927941d52bde4240cdb56ba2b8f10407c2247ac72190f9f1"
Mar 18 09:04:08.764831 master-0 kubenswrapper[28766]: I0318 09:04:08.764801 28766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"11a2f93448b9d54da9854663936e2b73","Type":"ContainerStarted","Data":"ad7502485ed3a449c63b6f15d39ff562ff07af0cd6bd752a9da1258223a6c65e"}
Mar 18 09:04:08.764891 master-0 kubenswrapper[28766]: I0318 09:04:08.764826 28766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"11a2f93448b9d54da9854663936e2b73","Type":"ContainerStarted","Data":"a635c83202bec4f55d992caba66fbdd97cd46b5946ceda72de4cf60ec6fe987d"}
Mar 18 09:04:08.764929 master-0 kubenswrapper[28766]: I0318 09:04:08.764881 28766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"11a2f93448b9d54da9854663936e2b73","Type":"ContainerStarted","Data":"3089c4545eabd68f6e478d7cb774f2b5eb5ad211b79b829bdc1706a3ac242a99"}
Mar 18 09:04:08.764929 master-0 kubenswrapper[28766]: I0318 09:04:08.764912 28766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"11a2f93448b9d54da9854663936e2b73","Type":"ContainerDied","Data":"8518fd5fa5f57002df2dc9e0199a7271feebc95e929446acfa8563e63e176f72"}
Mar 18 09:04:08.764989 master-0 kubenswrapper[28766]: I0318 09:04:08.764942 28766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"11a2f93448b9d54da9854663936e2b73","Type":"ContainerStarted","Data":"f2c2ecd78b0b095cca6d610f53e1ff83eedc17b6a054e2d1a3484b11ec8181f6"}
Mar 18 09:04:08.765102 master-0 kubenswrapper[28766]: I0318 09:04:08.765073 28766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1b597f433a55dbc7ccb00fbe5afce037857951640d297dcf4696ad9ed735151f"
Mar 18 09:04:08.765167 master-0 kubenswrapper[28766]: I0318 09:04:08.765143 28766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a0506e567232af6a1d871e8bdc27ad4000f63b8618b9625c8e1c8682da50383b"
Mar 18 09:04:08.766725 master-0 kubenswrapper[28766]: I0318 09:04:08.766708 28766 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 18 09:04:08.766870 master-0 kubenswrapper[28766]: I0318 09:04:08.766838 28766 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 18 09:04:08.766958 master-0 kubenswrapper[28766]: I0318 09:04:08.766927 28766 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 18 09:04:08.855744 master-0 kubenswrapper[28766]: I0318 09:04:08.855571 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/11a2f93448b9d54da9854663936e2b73-resource-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"11a2f93448b9d54da9854663936e2b73\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0"
Mar 18 09:04:08.856184 master-0 kubenswrapper[28766]: I0318 09:04:08.856141 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/11a2f93448b9d54da9854663936e2b73-cert-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"11a2f93448b9d54da9854663936e2b73\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0"
Mar 18 09:04:08.856415 master-0 kubenswrapper[28766]: I0318 09:04:08.856381 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/1249822f86f23526277d165c0d5d3c19-etc-kube\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"1249822f86f23526277d165c0d5d3c19\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"
Mar 18 09:04:08.856610 master-0 kubenswrapper[28766]: I0318 09:04:08.856579 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-log-dir\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0"
Mar 18 09:04:08.856787 master-0 kubenswrapper[28766]: I0318 09:04:08.856756 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/8e7a82869988463543d3d8dd1f0b5fe3-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"8e7a82869988463543d3d8dd1f0b5fe3\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 18 09:04:08.856988 master-0 kubenswrapper[28766]: I0318 09:04:08.856960 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/8e7a82869988463543d3d8dd1f0b5fe3-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"8e7a82869988463543d3d8dd1f0b5fe3\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 18 09:04:08.857166 master-0 kubenswrapper[28766]: I0318 09:04:08.857139 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/221b44bcdfcd6cb77b8e2c3e2f0f2d4d-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"221b44bcdfcd6cb77b8e2c3e2f0f2d4d\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 18 09:04:08.857346 master-0 kubenswrapper[28766]: I0318 09:04:08.857308 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/221b44bcdfcd6cb77b8e2c3e2f0f2d4d-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"221b44bcdfcd6cb77b8e2c3e2f0f2d4d\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 18 09:04:08.857726 master-0 kubenswrapper[28766]: I0318 09:04:08.857691 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/1249822f86f23526277d165c0d5d3c19-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"1249822f86f23526277d165c0d5d3c19\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"
Mar 18 09:04:08.857976 master-0 kubenswrapper[28766]: I0318 09:04:08.857941 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-cert-dir\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0"
Mar 18 09:04:08.858246 master-0 kubenswrapper[28766]: I0318 09:04:08.858218 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/b45ea2ef1cf2bc9d1d994d6538ae0a64-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"b45ea2ef1cf2bc9d1d994d6538ae0a64\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 18 09:04:08.858427 master-0 kubenswrapper[28766]: I0318 09:04:08.858394 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b45ea2ef1cf2bc9d1d994d6538ae0a64-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"b45ea2ef1cf2bc9d1d994d6538ae0a64\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 18 09:04:08.858616 master-0 kubenswrapper[28766]: I0318 09:04:08.858583 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-resource-dir\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0"
Mar 18 09:04:08.859026 master-0 kubenswrapper[28766]: I0318 09:04:08.858955 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/b45ea2ef1cf2bc9d1d994d6538ae0a64-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"b45ea2ef1cf2bc9d1d994d6538ae0a64\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 18 09:04:08.859346 master-0 kubenswrapper[28766]: I0318 09:04:08.859308 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/8e7a82869988463543d3d8dd1f0b5fe3-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"8e7a82869988463543d3d8dd1f0b5fe3\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 18 09:04:08.859598 master-0 kubenswrapper[28766]: I0318 09:04:08.859568 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/8e7a82869988463543d3d8dd1f0b5fe3-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"8e7a82869988463543d3d8dd1f0b5fe3\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 18 09:04:08.859783 master-0 kubenswrapper[28766]: I0318 09:04:08.859756 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/8e7a82869988463543d3d8dd1f0b5fe3-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"8e7a82869988463543d3d8dd1f0b5fe3\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 18 09:04:08.860118 master-0 kubenswrapper[28766]: I0318 09:04:08.860055 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-static-pod-dir\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0"
Mar 18 09:04:08.860337 master-0 kubenswrapper[28766]: I0318 09:04:08.860309 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-data-dir\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0"
Mar 18 09:04:08.860512 master-0 kubenswrapper[28766]: I0318 09:04:08.860485 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-usr-local-bin\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0"
Mar 18 09:04:08.911160 master-0 kubenswrapper[28766]: I0318 09:04:08.911078 28766 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 18 09:04:08.916350 master-0 kubenswrapper[28766]: I0318 09:04:08.916285 28766 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 18 09:04:08.916350 master-0 kubenswrapper[28766]: I0318 09:04:08.916341 28766 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 18 09:04:08.916350 master-0 kubenswrapper[28766]: I0318 09:04:08.916355 28766 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 18 09:04:08.916671 master-0 kubenswrapper[28766]: I0318 09:04:08.916381 28766 kubelet_node_status.go:76] "Attempting to register node" node="master-0"
Mar 18 09:04:08.917701 master-0 kubenswrapper[28766]: E0318 09:04:08.917611 28766 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/nodes\": dial tcp 192.168.32.10:6443: connect: connection refused" node="master-0"
Mar 18 09:04:08.963015 master-0 kubenswrapper[28766]: I0318 09:04:08.962832 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-resource-dir\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0"
Mar 18 09:04:08.963015 master-0 kubenswrapper[28766]: I0318 09:04:08.962963 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/8e7a82869988463543d3d8dd1f0b5fe3-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"8e7a82869988463543d3d8dd1f0b5fe3\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 18 09:04:08.963015 master-0 kubenswrapper[28766]: I0318 09:04:08.963000 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/8e7a82869988463543d3d8dd1f0b5fe3-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"8e7a82869988463543d3d8dd1f0b5fe3\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 18 09:04:08.963015 master-0 kubenswrapper[28766]: I0318 09:04:08.963027 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/8e7a82869988463543d3d8dd1f0b5fe3-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"8e7a82869988463543d3d8dd1f0b5fe3\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 18 09:04:08.963502 master-0 kubenswrapper[28766]: I0318 09:04:08.963222 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/8e7a82869988463543d3d8dd1f0b5fe3-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"8e7a82869988463543d3d8dd1f0b5fe3\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 18 09:04:08.963502 master-0 kubenswrapper[28766]: I0318 09:04:08.963334 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-resource-dir\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0"
Mar 18 09:04:08.963502 master-0 kubenswrapper[28766]: I0318 09:04:08.963400 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/8e7a82869988463543d3d8dd1f0b5fe3-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"8e7a82869988463543d3d8dd1f0b5fe3\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 18 09:04:08.964008 master-0 kubenswrapper[28766]: I0318 09:04:08.963487 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/8e7a82869988463543d3d8dd1f0b5fe3-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"8e7a82869988463543d3d8dd1f0b5fe3\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 18 09:04:08.964008 master-0 kubenswrapper[28766]: I0318 09:04:08.963555 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-static-pod-dir\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0"
Mar 18 09:04:08.964008 master-0 kubenswrapper[28766]: I0318 09:04:08.963597 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-data-dir\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0"
Mar 18 09:04:08.964008 master-0 kubenswrapper[28766]: I0318 09:04:08.963659 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-static-pod-dir\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0"
Mar 18 09:04:08.964008 master-0 kubenswrapper[28766]: I0318 09:04:08.963708 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-usr-local-bin\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0"
Mar 18 09:04:08.964008 master-0 kubenswrapper[28766]: I0318 09:04:08.963887 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-data-dir\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0"
Mar 18 09:04:08.964008 master-0 kubenswrapper[28766]: I0318 09:04:08.963931 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-usr-local-bin\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0"
Mar 18 09:04:08.964691 master-0 kubenswrapper[28766]: I0318 09:04:08.964472 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/b45ea2ef1cf2bc9d1d994d6538ae0a64-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"b45ea2ef1cf2bc9d1d994d6538ae0a64\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 18 09:04:08.964691 master-0 kubenswrapper[28766]: I0318 09:04:08.964534 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/b45ea2ef1cf2bc9d1d994d6538ae0a64-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"b45ea2ef1cf2bc9d1d994d6538ae0a64\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 18 09:04:08.964691 master-0 kubenswrapper[28766]: I0318 09:04:08.964591 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/11a2f93448b9d54da9854663936e2b73-cert-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"11a2f93448b9d54da9854663936e2b73\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0"
Mar 18 09:04:08.964691
master-0 kubenswrapper[28766]: I0318 09:04:08.964648 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/1249822f86f23526277d165c0d5d3c19-etc-kube\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"1249822f86f23526277d165c0d5d3c19\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Mar 18 09:04:08.964691 master-0 kubenswrapper[28766]: I0318 09:04:08.964683 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-log-dir\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0" Mar 18 09:04:08.965291 master-0 kubenswrapper[28766]: I0318 09:04:08.964818 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/11a2f93448b9d54da9854663936e2b73-cert-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"11a2f93448b9d54da9854663936e2b73\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 18 09:04:08.965291 master-0 kubenswrapper[28766]: I0318 09:04:08.964910 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/1249822f86f23526277d165c0d5d3c19-etc-kube\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"1249822f86f23526277d165c0d5d3c19\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Mar 18 09:04:08.965291 master-0 kubenswrapper[28766]: I0318 09:04:08.964963 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/8e7a82869988463543d3d8dd1f0b5fe3-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"8e7a82869988463543d3d8dd1f0b5fe3\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 09:04:08.965291 
master-0 kubenswrapper[28766]: I0318 09:04:08.965101 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/8e7a82869988463543d3d8dd1f0b5fe3-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"8e7a82869988463543d3d8dd1f0b5fe3\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 09:04:08.965291 master-0 kubenswrapper[28766]: I0318 09:04:08.965219 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/8e7a82869988463543d3d8dd1f0b5fe3-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"8e7a82869988463543d3d8dd1f0b5fe3\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 09:04:08.965291 master-0 kubenswrapper[28766]: I0318 09:04:08.965260 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/11a2f93448b9d54da9854663936e2b73-resource-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"11a2f93448b9d54da9854663936e2b73\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 18 09:04:08.965727 master-0 kubenswrapper[28766]: I0318 09:04:08.965329 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/221b44bcdfcd6cb77b8e2c3e2f0f2d4d-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"221b44bcdfcd6cb77b8e2c3e2f0f2d4d\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 09:04:08.965727 master-0 kubenswrapper[28766]: I0318 09:04:08.965343 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-log-dir\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0" 
Mar 18 09:04:08.965727 master-0 kubenswrapper[28766]: I0318 09:04:08.965375 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/1249822f86f23526277d165c0d5d3c19-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"1249822f86f23526277d165c0d5d3c19\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Mar 18 09:04:08.965727 master-0 kubenswrapper[28766]: I0318 09:04:08.965469 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/8e7a82869988463543d3d8dd1f0b5fe3-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"8e7a82869988463543d3d8dd1f0b5fe3\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 09:04:08.965727 master-0 kubenswrapper[28766]: I0318 09:04:08.965473 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/1249822f86f23526277d165c0d5d3c19-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"1249822f86f23526277d165c0d5d3c19\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Mar 18 09:04:08.965727 master-0 kubenswrapper[28766]: I0318 09:04:08.965530 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-cert-dir\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0" Mar 18 09:04:08.965727 master-0 kubenswrapper[28766]: I0318 09:04:08.965546 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/221b44bcdfcd6cb77b8e2c3e2f0f2d4d-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"221b44bcdfcd6cb77b8e2c3e2f0f2d4d\") " 
pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 09:04:08.965727 master-0 kubenswrapper[28766]: I0318 09:04:08.965586 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/11a2f93448b9d54da9854663936e2b73-resource-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"11a2f93448b9d54da9854663936e2b73\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 18 09:04:08.965727 master-0 kubenswrapper[28766]: I0318 09:04:08.965621 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/b45ea2ef1cf2bc9d1d994d6538ae0a64-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"b45ea2ef1cf2bc9d1d994d6538ae0a64\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 18 09:04:08.965727 master-0 kubenswrapper[28766]: I0318 09:04:08.965636 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-cert-dir\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0" Mar 18 09:04:08.965727 master-0 kubenswrapper[28766]: I0318 09:04:08.965697 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b45ea2ef1cf2bc9d1d994d6538ae0a64-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"b45ea2ef1cf2bc9d1d994d6538ae0a64\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 18 09:04:08.965727 master-0 kubenswrapper[28766]: I0318 09:04:08.965736 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/221b44bcdfcd6cb77b8e2c3e2f0f2d4d-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"221b44bcdfcd6cb77b8e2c3e2f0f2d4d\") " 
pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 09:04:08.966926 master-0 kubenswrapper[28766]: I0318 09:04:08.965749 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b45ea2ef1cf2bc9d1d994d6538ae0a64-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"b45ea2ef1cf2bc9d1d994d6538ae0a64\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 18 09:04:08.966926 master-0 kubenswrapper[28766]: I0318 09:04:08.965689 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/b45ea2ef1cf2bc9d1d994d6538ae0a64-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"b45ea2ef1cf2bc9d1d994d6538ae0a64\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 18 09:04:08.966926 master-0 kubenswrapper[28766]: I0318 09:04:08.965921 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/221b44bcdfcd6cb77b8e2c3e2f0f2d4d-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"221b44bcdfcd6cb77b8e2c3e2f0f2d4d\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 09:04:09.070989 master-0 kubenswrapper[28766]: I0318 09:04:09.070225 28766 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 18 09:04:09.075115 master-0 kubenswrapper[28766]: I0318 09:04:09.075050 28766 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 18 09:04:09.075115 master-0 kubenswrapper[28766]: I0318 09:04:09.075106 28766 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 18 09:04:09.075115 master-0 kubenswrapper[28766]: I0318 09:04:09.075119 28766 kubelet_node_status.go:724] "Recording event message for node" node="master-0" 
event="NodeHasSufficientPID" Mar 18 09:04:09.169199 master-0 kubenswrapper[28766]: I0318 09:04:09.169129 28766 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 18 09:04:09.317984 master-0 kubenswrapper[28766]: I0318 09:04:09.317921 28766 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 18 09:04:09.321244 master-0 kubenswrapper[28766]: I0318 09:04:09.321207 28766 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 18 09:04:09.321244 master-0 kubenswrapper[28766]: I0318 09:04:09.321255 28766 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 18 09:04:09.321399 master-0 kubenswrapper[28766]: I0318 09:04:09.321266 28766 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 18 09:04:09.321399 master-0 kubenswrapper[28766]: I0318 09:04:09.321290 28766 kubelet_node_status.go:76] "Attempting to register node" node="master-0" Mar 18 09:04:09.322388 master-0 kubenswrapper[28766]: E0318 09:04:09.322332 28766 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/nodes\": dial tcp 192.168.32.10:6443: connect: connection refused" node="master-0" Mar 18 09:04:09.469515 master-0 kubenswrapper[28766]: I0318 09:04:09.469444 28766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 09:04:09.540755 master-0 kubenswrapper[28766]: I0318 09:04:09.540682 28766 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 18 09:04:09.541132 master-0 
kubenswrapper[28766]: I0318 09:04:09.541022 28766 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 18 09:04:09.541213 master-0 kubenswrapper[28766]: I0318 09:04:09.541136 28766 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 18 09:04:09.541534 master-0 kubenswrapper[28766]: I0318 09:04:09.541498 28766 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 18 09:04:09.543016 master-0 kubenswrapper[28766]: I0318 09:04:09.541500 28766 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 18 09:04:09.543016 master-0 kubenswrapper[28766]: I0318 09:04:09.541560 28766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"b45ea2ef1cf2bc9d1d994d6538ae0a64","Type":"ContainerStarted","Data":"d359529c6d104b531cb0409c7a4d2398d18ab9d523652299f34b9fc19dff3188"} Mar 18 09:04:09.557444 master-0 kubenswrapper[28766]: I0318 09:04:09.556244 28766 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 18 09:04:09.557444 master-0 kubenswrapper[28766]: I0318 09:04:09.556302 28766 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 18 09:04:09.557444 master-0 kubenswrapper[28766]: I0318 09:04:09.556333 28766 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 18 09:04:09.557444 master-0 kubenswrapper[28766]: I0318 09:04:09.556360 28766 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 18 09:04:09.557444 master-0 kubenswrapper[28766]: I0318 09:04:09.556383 28766 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 18 09:04:09.557444 
master-0 kubenswrapper[28766]: I0318 09:04:09.556375 28766 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 18 09:04:09.557444 master-0 kubenswrapper[28766]: I0318 09:04:09.556419 28766 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 18 09:04:09.557444 master-0 kubenswrapper[28766]: I0318 09:04:09.556431 28766 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 18 09:04:09.557444 master-0 kubenswrapper[28766]: I0318 09:04:09.556475 28766 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 18 09:04:09.557444 master-0 kubenswrapper[28766]: I0318 09:04:09.556490 28766 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 18 09:04:09.557444 master-0 kubenswrapper[28766]: I0318 09:04:09.556513 28766 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 18 09:04:09.557444 master-0 kubenswrapper[28766]: I0318 09:04:09.556606 28766 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 18 09:04:09.557444 master-0 kubenswrapper[28766]: I0318 09:04:09.556433 28766 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 18 09:04:09.557444 master-0 kubenswrapper[28766]: I0318 09:04:09.556661 28766 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 18 09:04:09.557444 master-0 kubenswrapper[28766]: I0318 09:04:09.556668 28766 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 18 09:04:10.123087 master-0 kubenswrapper[28766]: I0318 09:04:10.123020 28766 kubelet_node_status.go:401] 
"Setting node annotation to enable volume controller attach/detach" Mar 18 09:04:10.126143 master-0 kubenswrapper[28766]: I0318 09:04:10.126070 28766 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 18 09:04:10.126215 master-0 kubenswrapper[28766]: I0318 09:04:10.126153 28766 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 18 09:04:10.126215 master-0 kubenswrapper[28766]: I0318 09:04:10.126166 28766 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 18 09:04:10.126215 master-0 kubenswrapper[28766]: I0318 09:04:10.126197 28766 kubelet_node_status.go:76] "Attempting to register node" node="master-0" Mar 18 09:04:10.247788 master-0 kubenswrapper[28766]: I0318 09:04:10.247746 28766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 09:04:10.490474 master-0 kubenswrapper[28766]: I0318 09:04:10.490425 28766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-master-0" Mar 18 09:04:10.552273 master-0 kubenswrapper[28766]: I0318 09:04:10.552204 28766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"b45ea2ef1cf2bc9d1d994d6538ae0a64","Type":"ContainerStarted","Data":"9e39226f66d3647b6d3e60dfa41a65af602b2c0ac717809011f105e2b66ccbc2"} Mar 18 09:04:10.552273 master-0 kubenswrapper[28766]: I0318 09:04:10.552267 28766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"b45ea2ef1cf2bc9d1d994d6538ae0a64","Type":"ContainerStarted","Data":"e4396183e575749b6e65190aef719e2f4e761a5fd9efc71cdeac5b52873a9d9c"} Mar 18 09:04:10.552273 master-0 kubenswrapper[28766]: I0318 09:04:10.552283 28766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"b45ea2ef1cf2bc9d1d994d6538ae0a64","Type":"ContainerStarted","Data":"ba61c781d931a93859100045372a5a8e13a1a32f14d2e8186f666949b5bdcb89"} Mar 18 09:04:10.552647 master-0 kubenswrapper[28766]: I0318 09:04:10.552296 28766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"b45ea2ef1cf2bc9d1d994d6538ae0a64","Type":"ContainerStarted","Data":"df07e7ada686f3dcb49b6fa7f799e0d29c819ae08385e2d34ea6c92c3640e4b0"} Mar 18 09:04:10.552647 master-0 kubenswrapper[28766]: I0318 09:04:10.552346 28766 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 18 09:04:10.552647 master-0 kubenswrapper[28766]: I0318 09:04:10.552363 28766 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 18 09:04:10.552647 master-0 kubenswrapper[28766]: I0318 09:04:10.552410 28766 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 18 09:04:10.557458 master-0 kubenswrapper[28766]: I0318 09:04:10.557411 28766 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 18 09:04:10.557550 master-0 kubenswrapper[28766]: I0318 09:04:10.557467 28766 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 18 09:04:10.557550 master-0 kubenswrapper[28766]: I0318 09:04:10.557482 28766 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 18 09:04:10.557550 master-0 kubenswrapper[28766]: I0318 09:04:10.557496 28766 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 18 09:04:10.557550 master-0 kubenswrapper[28766]: I0318 09:04:10.557530 28766 kubelet_node_status.go:724] "Recording event message for node" node="master-0" 
event="NodeHasNoDiskPressure" Mar 18 09:04:10.557550 master-0 kubenswrapper[28766]: I0318 09:04:10.557544 28766 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 18 09:04:10.557882 master-0 kubenswrapper[28766]: I0318 09:04:10.557827 28766 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 18 09:04:10.557949 master-0 kubenswrapper[28766]: I0318 09:04:10.557890 28766 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 18 09:04:10.557949 master-0 kubenswrapper[28766]: I0318 09:04:10.557907 28766 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 18 09:04:10.799446 master-0 kubenswrapper[28766]: I0318 09:04:10.799375 28766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 18 09:04:11.041904 master-0 kubenswrapper[28766]: I0318 09:04:11.041839 28766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 18 09:04:11.048375 master-0 kubenswrapper[28766]: I0318 09:04:11.048300 28766 patch_prober.go:28] interesting pod/kube-apiserver-master-0 container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Mar 18 09:04:11.048375 master-0 kubenswrapper[28766]: [+]log ok Mar 18 09:04:11.048375 master-0 kubenswrapper[28766]: [+]etcd ok Mar 18 09:04:11.048375 master-0 kubenswrapper[28766]: [+]poststarthook/openshift.io-startkubeinformers ok Mar 18 09:04:11.048375 master-0 kubenswrapper[28766]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok Mar 18 09:04:11.048375 master-0 kubenswrapper[28766]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok Mar 18 09:04:11.048375 
master-0 kubenswrapper[28766]: [+]poststarthook/start-apiserver-admission-initializer ok Mar 18 09:04:11.048375 master-0 kubenswrapper[28766]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Mar 18 09:04:11.048375 master-0 kubenswrapper[28766]: [+]poststarthook/openshift.io-api-request-count-filter ok Mar 18 09:04:11.048375 master-0 kubenswrapper[28766]: [+]poststarthook/generic-apiserver-start-informers ok Mar 18 09:04:11.048375 master-0 kubenswrapper[28766]: [+]poststarthook/priority-and-fairness-config-consumer ok Mar 18 09:04:11.048375 master-0 kubenswrapper[28766]: [+]poststarthook/priority-and-fairness-filter ok Mar 18 09:04:11.048375 master-0 kubenswrapper[28766]: [+]poststarthook/storage-object-count-tracker-hook ok Mar 18 09:04:11.048375 master-0 kubenswrapper[28766]: [+]poststarthook/start-apiextensions-informers ok Mar 18 09:04:11.048375 master-0 kubenswrapper[28766]: [-]poststarthook/start-apiextensions-controllers failed: reason withheld Mar 18 09:04:11.048375 master-0 kubenswrapper[28766]: [-]poststarthook/crd-informer-synced failed: reason withheld Mar 18 09:04:11.048375 master-0 kubenswrapper[28766]: [+]poststarthook/start-system-namespaces-controller ok Mar 18 09:04:11.048375 master-0 kubenswrapper[28766]: [+]poststarthook/start-cluster-authentication-info-controller ok Mar 18 09:04:11.048375 master-0 kubenswrapper[28766]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok Mar 18 09:04:11.048375 master-0 kubenswrapper[28766]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok Mar 18 09:04:11.048375 master-0 kubenswrapper[28766]: [+]poststarthook/start-legacy-token-tracking-controller ok Mar 18 09:04:11.048375 master-0 kubenswrapper[28766]: [+]poststarthook/start-service-ip-repair-controllers ok Mar 18 09:04:11.048375 master-0 kubenswrapper[28766]: [-]poststarthook/rbac/bootstrap-roles failed: reason withheld Mar 18 09:04:11.048375 master-0 kubenswrapper[28766]: 
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld Mar 18 09:04:11.048375 master-0 kubenswrapper[28766]: [+]poststarthook/priority-and-fairness-config-producer ok Mar 18 09:04:11.048375 master-0 kubenswrapper[28766]: [-]poststarthook/bootstrap-controller failed: reason withheld Mar 18 09:04:11.048375 master-0 kubenswrapper[28766]: [+]poststarthook/aggregator-reload-proxy-client-cert ok Mar 18 09:04:11.048375 master-0 kubenswrapper[28766]: [+]poststarthook/start-kube-aggregator-informers ok Mar 18 09:04:11.048375 master-0 kubenswrapper[28766]: [+]poststarthook/apiservice-status-local-available-controller ok Mar 18 09:04:11.048375 master-0 kubenswrapper[28766]: [+]poststarthook/apiservice-status-remote-available-controller ok Mar 18 09:04:11.048375 master-0 kubenswrapper[28766]: [-]poststarthook/apiservice-registration-controller failed: reason withheld Mar 18 09:04:11.048375 master-0 kubenswrapper[28766]: [+]poststarthook/apiservice-wait-for-first-sync ok Mar 18 09:04:11.048375 master-0 kubenswrapper[28766]: [-]poststarthook/apiservice-discovery-controller failed: reason withheld Mar 18 09:04:11.048375 master-0 kubenswrapper[28766]: [+]poststarthook/kube-apiserver-autoregistration ok Mar 18 09:04:11.048375 master-0 kubenswrapper[28766]: [+]autoregister-completion ok Mar 18 09:04:11.048375 master-0 kubenswrapper[28766]: [+]poststarthook/apiservice-openapi-controller ok Mar 18 09:04:11.048375 master-0 kubenswrapper[28766]: [+]poststarthook/apiservice-openapiv3-controller ok Mar 18 09:04:11.048375 master-0 kubenswrapper[28766]: livez check failed Mar 18 09:04:11.049565 master-0 kubenswrapper[28766]: I0318 09:04:11.048400 28766 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="b45ea2ef1cf2bc9d1d994d6538ae0a64" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 09:04:11.561118 master-0 kubenswrapper[28766]: I0318 
09:04:11.559996 28766 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 18 09:04:11.567872 master-0 kubenswrapper[28766]: I0318 09:04:11.561836 28766 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 18 09:04:11.567872 master-0 kubenswrapper[28766]: I0318 09:04:11.563024 28766 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 18 09:04:11.567872 master-0 kubenswrapper[28766]: I0318 09:04:11.563385 28766 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 18 09:04:11.567872 master-0 kubenswrapper[28766]: I0318 09:04:11.563480 28766 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 18 09:04:11.567872 master-0 kubenswrapper[28766]: I0318 09:04:11.563495 28766 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 18 09:04:11.576883 master-0 kubenswrapper[28766]: I0318 09:04:11.574293 28766 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 18 09:04:11.576883 master-0 kubenswrapper[28766]: I0318 09:04:11.574382 28766 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 18 09:04:11.576883 master-0 kubenswrapper[28766]: I0318 09:04:11.574403 28766 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 18 09:04:12.513156 master-0 kubenswrapper[28766]: I0318 09:04:12.513113 28766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 18 09:04:12.513577 master-0 kubenswrapper[28766]: I0318 09:04:12.513563 28766 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 18 
09:04:12.516267 master-0 kubenswrapper[28766]: I0318 09:04:12.516205 28766 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 18 09:04:12.516341 master-0 kubenswrapper[28766]: I0318 09:04:12.516285 28766 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 18 09:04:12.516341 master-0 kubenswrapper[28766]: I0318 09:04:12.516302 28766 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 18 09:04:12.541135 master-0 kubenswrapper[28766]: I0318 09:04:12.541101 28766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 09:04:12.546022 master-0 kubenswrapper[28766]: I0318 09:04:12.545969 28766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 09:04:12.569247 master-0 kubenswrapper[28766]: I0318 09:04:12.569201 28766 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 18 09:04:12.569686 master-0 kubenswrapper[28766]: I0318 09:04:12.569221 28766 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 18 09:04:12.569686 master-0 kubenswrapper[28766]: I0318 09:04:12.569369 28766 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 18 09:04:12.572904 master-0 kubenswrapper[28766]: I0318 09:04:12.572870 28766 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 18 09:04:12.573033 master-0 kubenswrapper[28766]: I0318 09:04:12.572998 28766 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 18 09:04:12.573089 master-0 kubenswrapper[28766]: I0318 09:04:12.573044 28766 
kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 18 09:04:12.573089 master-0 kubenswrapper[28766]: I0318 09:04:12.573062 28766 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 18 09:04:12.573089 master-0 kubenswrapper[28766]: I0318 09:04:12.573010 28766 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 18 09:04:12.573190 master-0 kubenswrapper[28766]: I0318 09:04:12.573112 28766 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 18 09:04:13.575547 master-0 kubenswrapper[28766]: I0318 09:04:13.575500 28766 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 18 09:04:13.576260 master-0 kubenswrapper[28766]: I0318 09:04:13.575560 28766 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 18 09:04:13.578074 master-0 kubenswrapper[28766]: I0318 09:04:13.578042 28766 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 18 09:04:13.578152 master-0 kubenswrapper[28766]: I0318 09:04:13.578090 28766 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 18 09:04:13.578152 master-0 kubenswrapper[28766]: I0318 09:04:13.578101 28766 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 18 09:04:15.924394 master-0 kubenswrapper[28766]: I0318 09:04:15.924347 28766 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Mar 18 09:04:15.929334 master-0 kubenswrapper[28766]: I0318 09:04:15.929288 28766 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Mar 18 09:04:15.935224 master-0 
kubenswrapper[28766]: I0318 09:04:15.931624 28766 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Mar 18 09:04:15.937253 master-0 kubenswrapper[28766]: E0318 09:04:15.937216 28766 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"master-0\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="master-0" Mar 18 09:04:15.940449 master-0 kubenswrapper[28766]: I0318 09:04:15.939875 28766 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Mar 18 09:04:16.046310 master-0 kubenswrapper[28766]: I0318 09:04:16.046256 28766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 18 09:04:16.046526 master-0 kubenswrapper[28766]: I0318 09:04:16.046437 28766 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 18 09:04:16.046526 master-0 kubenswrapper[28766]: I0318 09:04:16.046490 28766 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 18 09:04:16.049523 master-0 kubenswrapper[28766]: I0318 09:04:16.049480 28766 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 18 09:04:16.049706 master-0 kubenswrapper[28766]: I0318 09:04:16.049533 28766 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 18 09:04:16.049706 master-0 kubenswrapper[28766]: I0318 09:04:16.049545 28766 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 18 09:04:16.076994 master-0 kubenswrapper[28766]: I0318 09:04:16.076956 28766 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Mar 18 09:04:16.077477 master-0 kubenswrapper[28766]: I0318 09:04:16.077443 28766 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 18 09:04:16.163957 master-0 kubenswrapper[28766]: I0318 09:04:16.163837 28766 apiserver.go:52] "Watching apiserver"
Mar 18 09:04:16.188099 master-0 kubenswrapper[28766]: I0318 09:04:16.187954 28766 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66
Mar 18 09:04:16.191598 master-0 kubenswrapper[28766]: I0318 09:04:16.191510 28766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-84d549f6d5-4hj54","openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-w4t7x","openshift-controller-manager/controller-manager-6448dc88d8-cnd9q","openshift-dns-operator/dns-operator-9c5679d8f-b9pn7","openshift-dns/dns-default-ck7b5","openshift-etcd/etcd-master-0","openshift-kube-controller-manager/installer-3-master-0","openshift-machine-api/cluster-baremetal-operator-6f69995874-cf6qn","openshift-marketplace/redhat-marketplace-jg58c","openshift-marketplace/redhat-operators-pk9z9","openshift-monitoring/openshift-state-metrics-5dc6c74576-dsq5f","openshift-multus/multus-admission-controller-58c9f8fc64-zgrts","openshift-multus/multus-bpf5c","openshift-network-diagnostics/network-check-source-b4bf74f6-7z5jl","openshift-cloud-credential-operator/cloud-credential-operator-744f9dbf77-v8ft8","openshift-ingress-operator/ingress-operator-66b84d69b-7h94d","openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-9p4bb","openshift-monitoring/kube-state-metrics-7bbc969446-dblgh","openshift-monitoring/telemeter-client-5d4d5995f-s5dw8","openshift-catalogd/catalogd-controller-manager-6864dc98f7-phjp8","openshift-image-registry/cluster-image-registry-operator-5549dc66cb-vxsth","openshift-kube-controller-manager/installer-2-master-0","openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-s7szg","openshift-multus/network-metrics-daemon-6x85n","openshift-network-node-identity/network-node-identity-n5vqx","openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-swdsh","assisted-installer/assisted-installer-controller-zq2ds","openshift-kube-apiserver/bootstrap-kube-apiserver-master-0","openshift-kube-storage-version-migrator/migrator-8487694857-ld5l8","openshift-monitoring/prometheus-operator-admission-webhook-69c6b55594-wkgdb","openshift-operator-controller/operator-controller-controller-manager-57777556ff-chjqr","openshift-ovn-kubernetes/ovnkube-node-cxws9","openshift-kube-scheduler/installer-3-master-0","openshift-monitoring/metrics-server-59f88c66c8-z4c2f","openshift-etcd/installer-2-master-0","openshift-machine-api/control-plane-machine-set-operator-6f97756bc8-z9n9c","openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-bwqt7","openshift-route-controller-manager/route-controller-manager-75749f878-qxnvp","openshift-authentication-operator/authentication-operator-5885bfd7f4-5g8tz","openshift-service-ca-operator/service-ca-operator-b865698dc-g2lc8","openshift-insights/insights-operator-68bf6ff9d6-kv7n5","openshift-kube-apiserver/installer-1-master-0","openshift-cluster-storage-operator/csi-snapshot-controller-operator-5f5d689c6b-j8kgj","openshift-etcd-operator/etcd-operator-8544cbcf9c-f4jvq","openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-fxn82","openshift-machine-config-operator/machine-config-daemon-qsj46","openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-zqxv2","openshift-cluster-samples-operator/cluster-samples-operator-85f7577d78-swcvh","openshift-cluster-storage-operator/csi-snapshot-controller-64854d9cff-khm5n","openshift-cluster-version/cluster-version-operator-7d58488df-8btcx","openshift-ingress/router-default-7dcf5569b5-8sbgd","openshift-kube-scheduler/openshift-kube-scheduler-master-0","openshift-machine-config-operator/machine-config-controller-b4f87c5b9-nm47n","openshift-marketplace/certified-operators-vng9w","openshift-service-ca/service-ca-79bc6b8d76-5jj7d","openshift-oauth-apiserver/apiserver-556c8fbcff-5shs8","openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-9xtls","openshift-cluster-machine-approver/machine-approver-5c6485487f-87vpl","openshift-config-operator/openshift-config-operator-95bf4f4d-7kfrh","openshift-ingress-canary/ingress-canary-mpw9b","openshift-kube-apiserver/installer-3-master-0","openshift-machine-config-operator/kube-rbac-proxy-crio-master-0","openshift-network-diagnostics/network-check-target-8b7l7","openshift-network-operator/network-operator-7bd846bfc4-5r5r4","openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-tj9b9","openshift-cluster-storage-operator/cluster-storage-operator-7d87854d6-srhr6","openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-r758j","openshift-etcd/installer-1-master-0","openshift-kube-scheduler/revision-pruner-6-master-0","openshift-marketplace/marketplace-operator-89ccd998f-bcwsv","openshift-monitoring/node-exporter-75szk","openshift-operator-lifecycle-manager/olm-operator-5c9796789-sl5kr","openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-q8ff6","openshift-multus/multus-additional-cni-plugins-xpzrz","openshift-apiserver/apiserver-7bb69b5c5c-djsr9","openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-jshg7","openshift-kube-controller-manager/installer-4-master-0","openshift-kube-controller-manager/kube-controller-manager-master-0","openshift-machine-api/cluster-autoscaler-operator-866dc4744-lxj7x","openshift-machine-api/machine-api-operator-6fbb6cf6f9-z6nw9","openshift-machine-config-operator/machine-config-server-2jsz9","openshift-network-operator/iptables-alerter-9mkgd","openshift-cluster-node-tuning-operator/tuned-zzqc6","openshift-dns/node-resolver-zwl77","openshift-marketplace/community-operators-78szh","openshift-monitoring/prometheus-operator-6c8df6d4b-8kgdq","openshift-kube-scheduler/installer-6-master-0","openshift-monitoring/cluster-monitoring-operator-58845fbb57-nc7hf","openshift-operator-lifecycle-manager/packageserver-5f48d895dc-ttr9f"]
Mar 18 09:04:16.206879 master-0 kubenswrapper[28766]: I0318 09:04:16.204099 28766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="assisted-installer/assisted-installer-controller-zq2ds"
Mar 18 09:04:16.206879 master-0 kubenswrapper[28766]: I0318 09:04:16.205314 28766 kubelet.go:2566] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" mirrorPodUID="a0d3f5cc-10b4-4bfe-8f71-c5053b35a5ba"
Mar 18 09:04:16.226889 master-0 kubenswrapper[28766]: I0318 09:04:16.219412 28766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert"
Mar 18 09:04:16.226889 master-0 kubenswrapper[28766]: I0318 09:04:16.219944 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt"
Mar 18 09:04:16.226889 master-0 kubenswrapper[28766]: I0318 09:04:16.221678 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config"
Mar 18 09:04:16.226889 master-0 kubenswrapper[28766]: I0318 09:04:16.222001 28766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"cluster-monitoring-operator-tls"
Mar 18 09:04:16.226889 master-0 kubenswrapper[28766]: I0318 09:04:16.222218 28766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls"
Mar 18 09:04:16.226889 master-0 kubenswrapper[28766]: I0318 09:04:16.225044 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt"
Mar 18 09:04:16.226889 master-0 kubenswrapper[28766]: I0318 09:04:16.225281 28766 reflector.go:368] Caches populated for *v1.ConfigMap from
object-"openshift-network-operator"/"kube-root-ca.crt" Mar 18 09:04:16.226889 master-0 kubenswrapper[28766]: I0318 09:04:16.225389 28766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Mar 18 09:04:16.238881 master-0 kubenswrapper[28766]: I0318 09:04:16.229842 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Mar 18 09:04:16.238881 master-0 kubenswrapper[28766]: I0318 09:04:16.232368 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"openshift-service-ca.crt" Mar 18 09:04:16.238881 master-0 kubenswrapper[28766]: I0318 09:04:16.232414 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Mar 18 09:04:16.238881 master-0 kubenswrapper[28766]: I0318 09:04:16.232628 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Mar 18 09:04:16.238881 master-0 kubenswrapper[28766]: I0318 09:04:16.232981 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-root-ca.crt" Mar 18 09:04:16.238881 master-0 kubenswrapper[28766]: I0318 09:04:16.233348 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Mar 18 09:04:16.238881 master-0 kubenswrapper[28766]: I0318 09:04:16.233564 28766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Mar 18 09:04:16.238881 master-0 kubenswrapper[28766]: I0318 09:04:16.233929 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Mar 18 09:04:16.238881 master-0 kubenswrapper[28766]: I0318 09:04:16.234180 28766 reflector.go:368] Caches populated 
for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Mar 18 09:04:16.238881 master-0 kubenswrapper[28766]: I0318 09:04:16.234331 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Mar 18 09:04:16.238881 master-0 kubenswrapper[28766]: I0318 09:04:16.234549 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Mar 18 09:04:16.238881 master-0 kubenswrapper[28766]: I0318 09:04:16.234885 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Mar 18 09:04:16.238881 master-0 kubenswrapper[28766]: I0318 09:04:16.235017 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Mar 18 09:04:16.238881 master-0 kubenswrapper[28766]: I0318 09:04:16.235109 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Mar 18 09:04:16.238881 master-0 kubenswrapper[28766]: I0318 09:04:16.235162 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Mar 18 09:04:16.238881 master-0 kubenswrapper[28766]: I0318 09:04:16.235271 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Mar 18 09:04:16.238881 master-0 kubenswrapper[28766]: I0318 09:04:16.235403 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Mar 18 09:04:16.238881 master-0 kubenswrapper[28766]: I0318 09:04:16.235549 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Mar 18 09:04:16.238881 master-0 kubenswrapper[28766]: I0318 
09:04:16.235637 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Mar 18 09:04:16.238881 master-0 kubenswrapper[28766]: I0318 09:04:16.235670 28766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Mar 18 09:04:16.238881 master-0 kubenswrapper[28766]: I0318 09:04:16.235779 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Mar 18 09:04:16.238881 master-0 kubenswrapper[28766]: I0318 09:04:16.237037 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Mar 18 09:04:16.238881 master-0 kubenswrapper[28766]: I0318 09:04:16.237176 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Mar 18 09:04:16.238881 master-0 kubenswrapper[28766]: I0318 09:04:16.237742 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemetry-config" Mar 18 09:04:16.238881 master-0 kubenswrapper[28766]: I0318 09:04:16.238094 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Mar 18 09:04:16.251877 master-0 kubenswrapper[28766]: I0318 09:04:16.242274 28766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Mar 18 09:04:16.251877 master-0 kubenswrapper[28766]: I0318 09:04:16.242692 28766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/installer-1-master-0" Mar 18 09:04:16.251877 master-0 kubenswrapper[28766]: I0318 09:04:16.242749 28766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Mar 18 09:04:16.251877 master-0 kubenswrapper[28766]: I0318 09:04:16.242775 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Mar 18 09:04:16.251877 master-0 kubenswrapper[28766]: I0318 09:04:16.243487 28766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Mar 18 09:04:16.251877 master-0 kubenswrapper[28766]: I0318 09:04:16.243991 28766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-1-master-0" Mar 18 09:04:16.251877 master-0 kubenswrapper[28766]: I0318 09:04:16.243533 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Mar 18 09:04:16.251877 master-0 kubenswrapper[28766]: I0318 09:04:16.243554 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Mar 18 09:04:16.251877 master-0 kubenswrapper[28766]: I0318 09:04:16.243674 28766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Mar 18 09:04:16.251877 master-0 kubenswrapper[28766]: I0318 09:04:16.243669 28766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Mar 18 09:04:16.251877 master-0 kubenswrapper[28766]: I0318 09:04:16.244819 28766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/installer-3-master-0" Mar 18 09:04:16.251877 master-0 kubenswrapper[28766]: I0318 09:04:16.243679 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Mar 18 09:04:16.251877 master-0 kubenswrapper[28766]: I0318 09:04:16.243726 28766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Mar 18 09:04:16.251877 master-0 kubenswrapper[28766]: I0318 09:04:16.243843 28766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Mar 18 09:04:16.251877 master-0 kubenswrapper[28766]: I0318 09:04:16.243934 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Mar 18 09:04:16.251877 master-0 kubenswrapper[28766]: I0318 09:04:16.243948 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Mar 18 09:04:16.251877 master-0 kubenswrapper[28766]: I0318 09:04:16.243968 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Mar 18 09:04:16.251877 master-0 kubenswrapper[28766]: I0318 09:04:16.246016 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Mar 18 09:04:16.251877 master-0 kubenswrapper[28766]: I0318 09:04:16.246146 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Mar 18 09:04:16.251877 master-0 kubenswrapper[28766]: I0318 09:04:16.246200 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"openshift-service-ca.crt" Mar 18 09:04:16.251877 master-0 kubenswrapper[28766]: I0318 09:04:16.246254 
28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Mar 18 09:04:16.251877 master-0 kubenswrapper[28766]: I0318 09:04:16.246406 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Mar 18 09:04:16.251877 master-0 kubenswrapper[28766]: I0318 09:04:16.247249 28766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-catalogd"/"catalogserver-cert" Mar 18 09:04:16.251877 master-0 kubenswrapper[28766]: I0318 09:04:16.247294 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Mar 18 09:04:16.251877 master-0 kubenswrapper[28766]: I0318 09:04:16.247423 28766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Mar 18 09:04:16.251877 master-0 kubenswrapper[28766]: I0318 09:04:16.247582 28766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Mar 18 09:04:16.251877 master-0 kubenswrapper[28766]: I0318 09:04:16.247699 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Mar 18 09:04:16.251877 master-0 kubenswrapper[28766]: I0318 09:04:16.247737 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Mar 18 09:04:16.251877 master-0 kubenswrapper[28766]: I0318 09:04:16.248449 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Mar 18 09:04:16.251877 master-0 kubenswrapper[28766]: I0318 09:04:16.248948 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Mar 18 
09:04:16.253165 master-0 kubenswrapper[28766]: I0318 09:04:16.251951 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Mar 18 09:04:16.253165 master-0 kubenswrapper[28766]: I0318 09:04:16.252077 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-olm-operator"/"openshift-service-ca.crt" Mar 18 09:04:16.253165 master-0 kubenswrapper[28766]: I0318 09:04:16.252130 28766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Mar 18 09:04:16.253165 master-0 kubenswrapper[28766]: I0318 09:04:16.252191 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Mar 18 09:04:16.253165 master-0 kubenswrapper[28766]: I0318 09:04:16.252243 28766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Mar 18 09:04:16.253165 master-0 kubenswrapper[28766]: I0318 09:04:16.252275 28766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Mar 18 09:04:16.253165 master-0 kubenswrapper[28766]: I0318 09:04:16.252328 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Mar 18 09:04:16.253165 master-0 kubenswrapper[28766]: I0318 09:04:16.252389 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-storage-operator"/"openshift-service-ca.crt" Mar 18 09:04:16.253165 master-0 kubenswrapper[28766]: I0318 09:04:16.252397 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"whereabouts-flatfile-config" Mar 18 09:04:16.253165 master-0 kubenswrapper[28766]: I0318 09:04:16.252411 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Mar 18 09:04:16.253165 master-0 
kubenswrapper[28766]: I0318 09:04:16.252496 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Mar 18 09:04:16.253165 master-0 kubenswrapper[28766]: I0318 09:04:16.252556 28766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-node-tuning-operator"/"performance-addon-operator-webhook-cert" Mar 18 09:04:16.253165 master-0 kubenswrapper[28766]: I0318 09:04:16.252501 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Mar 18 09:04:16.253165 master-0 kubenswrapper[28766]: I0318 09:04:16.252556 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Mar 18 09:04:16.253165 master-0 kubenswrapper[28766]: I0318 09:04:16.252636 28766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Mar 18 09:04:16.253165 master-0 kubenswrapper[28766]: I0318 09:04:16.252690 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Mar 18 09:04:16.253165 master-0 kubenswrapper[28766]: I0318 09:04:16.252703 28766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Mar 18 09:04:16.253165 master-0 kubenswrapper[28766]: I0318 09:04:16.252570 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Mar 18 09:04:16.253165 master-0 kubenswrapper[28766]: I0318 09:04:16.252776 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Mar 18 09:04:16.253165 master-0 kubenswrapper[28766]: I0318 09:04:16.252636 28766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-node-tuning-operator"/"node-tuning-operator-tls" 
Mar 18 09:04:16.253165 master-0 kubenswrapper[28766]: I0318 09:04:16.252800 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Mar 18 09:04:16.253165 master-0 kubenswrapper[28766]: I0318 09:04:16.252818 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Mar 18 09:04:16.253165 master-0 kubenswrapper[28766]: I0318 09:04:16.252841 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Mar 18 09:04:16.253165 master-0 kubenswrapper[28766]: I0318 09:04:16.252869 28766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Mar 18 09:04:16.253165 master-0 kubenswrapper[28766]: I0318 09:04:16.252700 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Mar 18 09:04:16.253165 master-0 kubenswrapper[28766]: I0318 09:04:16.252639 28766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Mar 18 09:04:16.253165 master-0 kubenswrapper[28766]: I0318 09:04:16.252783 28766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-olm-operator"/"cluster-olm-operator-serving-cert" Mar 18 09:04:16.253165 master-0 kubenswrapper[28766]: I0318 09:04:16.252987 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Mar 18 09:04:16.253165 master-0 kubenswrapper[28766]: I0318 09:04:16.253120 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Mar 18 09:04:16.253165 master-0 kubenswrapper[28766]: I0318 09:04:16.253134 28766 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Mar 18 09:04:16.254190 master-0 kubenswrapper[28766]: I0318 09:04:16.253209 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Mar 18 09:04:16.257918 master-0 kubenswrapper[28766]: I0318 09:04:16.255618 28766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-2-master-0" Mar 18 09:04:16.267603 master-0 kubenswrapper[28766]: I0318 09:04:16.267546 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Mar 18 09:04:16.268075 master-0 kubenswrapper[28766]: I0318 09:04:16.268025 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-storage-operator"/"kube-root-ca.crt" Mar 18 09:04:16.268462 master-0 kubenswrapper[28766]: I0318 09:04:16.268429 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Mar 18 09:04:16.269288 master-0 kubenswrapper[28766]: I0318 09:04:16.269256 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-olm-operator"/"kube-root-ca.crt" Mar 18 09:04:16.270170 master-0 kubenswrapper[28766]: I0318 09:04:16.270115 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Mar 18 09:04:16.277052 master-0 kubenswrapper[28766]: I0318 09:04:16.277011 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"kube-root-ca.crt" Mar 18 09:04:16.293089 master-0 kubenswrapper[28766]: I0318 09:04:16.293042 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Mar 18 09:04:16.296504 master-0 kubenswrapper[28766]: I0318 09:04:16.296474 28766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/installer-2-master-0" Mar 18 09:04:16.296664 master-0 kubenswrapper[28766]: I0318 09:04:16.296649 28766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-3-master-0" Mar 18 09:04:16.296725 master-0 kubenswrapper[28766]: I0318 09:04:16.296584 28766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-4-master-0" Mar 18 09:04:16.296783 master-0 kubenswrapper[28766]: I0318 09:04:16.296521 28766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/revision-pruner-6-master-0" Mar 18 09:04:16.296843 master-0 kubenswrapper[28766]: I0318 09:04:16.296795 28766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-6-master-0" Mar 18 09:04:16.297128 master-0 kubenswrapper[28766]: I0318 09:04:16.297072 28766 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 18 09:04:16.299909 master-0 kubenswrapper[28766]: I0318 09:04:16.298207 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"trusted-ca" Mar 18 09:04:16.299909 master-0 kubenswrapper[28766]: I0318 09:04:16.299197 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Mar 18 09:04:16.299909 master-0 kubenswrapper[28766]: I0318 09:04:16.299795 28766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Mar 18 09:04:16.302586 master-0 kubenswrapper[28766]: I0318 09:04:16.302317 28766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Mar 18 09:04:16.306902 master-0 kubenswrapper[28766]: I0318 09:04:16.304671 28766 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-ingress-operator"/"trusted-ca" Mar 18 09:04:16.306902 master-0 kubenswrapper[28766]: I0318 09:04:16.305022 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Mar 18 09:04:16.316314 master-0 kubenswrapper[28766]: I0318 09:04:16.316260 28766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Mar 18 09:04:16.337503 master-0 kubenswrapper[28766]: I0318 09:04:16.337439 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Mar 18 09:04:16.343869 master-0 kubenswrapper[28766]: I0318 09:04:16.343313 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/3e96b35f-c57a-4e01-82f7-894ea16ac5b8-certs\") pod \"machine-config-server-2jsz9\" (UID: \"3e96b35f-c57a-4e01-82f7-894ea16ac5b8\") " pod="openshift-machine-config-operator/machine-config-server-2jsz9" Mar 18 09:04:16.343869 master-0 kubenswrapper[28766]: I0318 09:04:16.343370 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/97730ec2-e6f1-4f8c-b85c-3c10623d06ce-images\") pod \"cluster-baremetal-operator-6f69995874-cf6qn\" (UID: \"97730ec2-e6f1-4f8c-b85c-3c10623d06ce\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-cf6qn" Mar 18 09:04:16.343869 master-0 kubenswrapper[28766]: I0318 09:04:16.343397 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mlp7w\" (UniqueName: \"kubernetes.io/projected/59d50dd5-6793-4f96-a769-31e086ecc7e4-kube-api-access-mlp7w\") pod \"package-server-manager-7b95f86987-q8ff6\" (UID: \"59d50dd5-6793-4f96-a769-31e086ecc7e4\") " pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-q8ff6" Mar 18 09:04:16.343869 master-0 kubenswrapper[28766]: I0318 09:04:16.343427 
28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/94e9e4fc-bfaa-4ba5-ad6d-fe76b91932e9-trusted-ca\") pod \"ingress-operator-66b84d69b-7h94d\" (UID: \"94e9e4fc-bfaa-4ba5-ad6d-fe76b91932e9\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-7h94d" Mar 18 09:04:16.343869 master-0 kubenswrapper[28766]: I0318 09:04:16.343453 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s8prf\" (UniqueName: \"kubernetes.io/projected/fcf89a76-7a94-46d3-853e-68e986563764-kube-api-access-s8prf\") pod \"openshift-apiserver-operator-d65958b8-w4t7x\" (UID: \"fcf89a76-7a94-46d3-853e-68e986563764\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-w4t7x" Mar 18 09:04:16.343869 master-0 kubenswrapper[28766]: I0318 09:04:16.343475 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/bd8ee9ae-4765-46bc-84d7-b5857fc3fb4a-apiservice-cert\") pod \"cluster-node-tuning-operator-598fbc5f8f-tj9b9\" (UID: \"bd8ee9ae-4765-46bc-84d7-b5857fc3fb4a\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-tj9b9" Mar 18 09:04:16.343869 master-0 kubenswrapper[28766]: I0318 09:04:16.343501 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/97730ec2-e6f1-4f8c-b85c-3c10623d06ce-cert\") pod \"cluster-baremetal-operator-6f69995874-cf6qn\" (UID: \"97730ec2-e6f1-4f8c-b85c-3c10623d06ce\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-cf6qn" Mar 18 09:04:16.343869 master-0 kubenswrapper[28766]: I0318 09:04:16.343523 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/97730ec2-e6f1-4f8c-b85c-3c10623d06ce-config\") pod 
\"cluster-baremetal-operator-6f69995874-cf6qn\" (UID: \"97730ec2-e6f1-4f8c-b85c-3c10623d06ce\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-cf6qn" Mar 18 09:04:16.343869 master-0 kubenswrapper[28766]: I0318 09:04:16.343550 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qwsfl\" (UniqueName: \"kubernetes.io/projected/04e23989-853e-4b49-ba0f-1961d64ae3a3-kube-api-access-qwsfl\") pod \"route-controller-manager-75749f878-qxnvp\" (UID: \"04e23989-853e-4b49-ba0f-1961d64ae3a3\") " pod="openshift-route-controller-manager/route-controller-manager-75749f878-qxnvp" Mar 18 09:04:16.343869 master-0 kubenswrapper[28766]: I0318 09:04:16.343570 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bfzdk\" (UniqueName: \"kubernetes.io/projected/e025d334-20e7-491f-8027-194251398747-kube-api-access-bfzdk\") pod \"dns-operator-9c5679d8f-b9pn7\" (UID: \"e025d334-20e7-491f-8027-194251398747\") " pod="openshift-dns-operator/dns-operator-9c5679d8f-b9pn7" Mar 18 09:04:16.343869 master-0 kubenswrapper[28766]: I0318 09:04:16.343591 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/04e23989-853e-4b49-ba0f-1961d64ae3a3-config\") pod \"route-controller-manager-75749f878-qxnvp\" (UID: \"04e23989-853e-4b49-ba0f-1961d64ae3a3\") " pod="openshift-route-controller-manager/route-controller-manager-75749f878-qxnvp" Mar 18 09:04:16.343869 master-0 kubenswrapper[28766]: I0318 09:04:16.343614 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/772bc250-2e57-4ce0-883c-d44281fcb0be-serving-cert\") pod \"openshift-controller-manager-operator-8c94f4649-r758j\" (UID: \"772bc250-2e57-4ce0-883c-d44281fcb0be\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-r758j" Mar 18 09:04:16.343869 
master-0 kubenswrapper[28766]: I0318 09:04:16.343632 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ad4cf9b2-4e66-4921-a30c-7b659bff06ab-service-ca-bundle\") pod \"router-default-7dcf5569b5-8sbgd\" (UID: \"ad4cf9b2-4e66-4921-a30c-7b659bff06ab\") " pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" Mar 18 09:04:16.343869 master-0 kubenswrapper[28766]: I0318 09:04:16.343651 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-njbjp\" (UniqueName: \"kubernetes.io/projected/fa8f1797-0219-49fe-82b5-7416cc481c3a-kube-api-access-njbjp\") pod \"service-ca-79bc6b8d76-5jj7d\" (UID: \"fa8f1797-0219-49fe-82b5-7416cc481c3a\") " pod="openshift-service-ca/service-ca-79bc6b8d76-5jj7d" Mar 18 09:04:16.343869 master-0 kubenswrapper[28766]: I0318 09:04:16.343676 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/2207df9e-f21e-4c30-98d5-248ae99c245e-systemd-units\") pod \"ovnkube-node-cxws9\" (UID: \"2207df9e-f21e-4c30-98d5-248ae99c245e\") " pod="openshift-ovn-kubernetes/ovnkube-node-cxws9" Mar 18 09:04:16.343869 master-0 kubenswrapper[28766]: I0318 09:04:16.343694 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/2207df9e-f21e-4c30-98d5-248ae99c245e-run-systemd\") pod \"ovnkube-node-cxws9\" (UID: \"2207df9e-f21e-4c30-98d5-248ae99c245e\") " pod="openshift-ovn-kubernetes/ovnkube-node-cxws9" Mar 18 09:04:16.343869 master-0 kubenswrapper[28766]: I0318 09:04:16.343714 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/2207df9e-f21e-4c30-98d5-248ae99c245e-ovn-node-metrics-cert\") pod \"ovnkube-node-cxws9\" (UID: 
\"2207df9e-f21e-4c30-98d5-248ae99c245e\") " pod="openshift-ovn-kubernetes/ovnkube-node-cxws9" Mar 18 09:04:16.343869 master-0 kubenswrapper[28766]: I0318 09:04:16.343737 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a268d595-18c2-43a2-8ed5-eb64c76c490f-utilities\") pod \"certified-operators-vng9w\" (UID: \"a268d595-18c2-43a2-8ed5-eb64c76c490f\") " pod="openshift-marketplace/certified-operators-vng9w" Mar 18 09:04:16.343869 master-0 kubenswrapper[28766]: I0318 09:04:16.343756 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d858bfd6-e69b-4c93-a6d7-95cc0fc3ca29-metrics-certs\") pod \"network-metrics-daemon-6x85n\" (UID: \"d858bfd6-e69b-4c93-a6d7-95cc0fc3ca29\") " pod="openshift-multus/network-metrics-daemon-6x85n" Mar 18 09:04:16.343869 master-0 kubenswrapper[28766]: I0318 09:04:16.343778 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rpxfc\" (UniqueName: \"kubernetes.io/projected/91a6fa86-8c58-43bc-a2d4-2b20901269f7-kube-api-access-rpxfc\") pod \"kube-state-metrics-7bbc969446-dblgh\" (UID: \"91a6fa86-8c58-43bc-a2d4-2b20901269f7\") " pod="openshift-monitoring/kube-state-metrics-7bbc969446-dblgh" Mar 18 09:04:16.343869 master-0 kubenswrapper[28766]: I0318 09:04:16.343796 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jlwg9\" (UniqueName: \"kubernetes.io/projected/f9fa104a-4979-4023-8d7e-a965f11bc7db-kube-api-access-jlwg9\") pod \"multus-additional-cni-plugins-xpzrz\" (UID: \"f9fa104a-4979-4023-8d7e-a965f11bc7db\") " pod="openshift-multus/multus-additional-cni-plugins-xpzrz" Mar 18 09:04:16.343869 master-0 kubenswrapper[28766]: I0318 09:04:16.343817 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: 
\"kubernetes.io/configmap/b9768e50-c883-47b0-b319-851fa53ac19a-images\") pod \"machine-api-operator-6fbb6cf6f9-z6nw9\" (UID: \"b9768e50-c883-47b0-b319-851fa53ac19a\") " pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-z6nw9" Mar 18 09:04:16.343869 master-0 kubenswrapper[28766]: I0318 09:04:16.343840 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p9qkd\" (UniqueName: \"kubernetes.io/projected/ccf74af5-d4fd-4ed3-9784-42397ea798c5-kube-api-access-p9qkd\") pod \"cluster-cloud-controller-manager-operator-7dff898856-9xtls\" (UID: \"ccf74af5-d4fd-4ed3-9784-42397ea798c5\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-9xtls" Mar 18 09:04:16.343869 master-0 kubenswrapper[28766]: I0318 09:04:16.343879 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4-host-run-k8s-cni-cncf-io\") pod \"multus-bpf5c\" (UID: \"fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4\") " pod="openshift-multus/multus-bpf5c" Mar 18 09:04:16.343869 master-0 kubenswrapper[28766]: I0318 09:04:16.343901 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4-host-var-lib-kubelet\") pod \"multus-bpf5c\" (UID: \"fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4\") " pod="openshift-multus/multus-bpf5c" Mar 18 09:04:16.343869 master-0 kubenswrapper[28766]: I0318 09:04:16.343922 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/a7dab805-612b-404c-ab97-8cee927169db-proxy-tls\") pod \"machine-config-daemon-qsj46\" (UID: \"a7dab805-612b-404c-ab97-8cee927169db\") " pod="openshift-machine-config-operator/machine-config-daemon-qsj46" Mar 18 
09:04:16.344845 master-0 kubenswrapper[28766]: I0318 09:04:16.343942 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/07a4fd92-0fd1-4688-b2db-de615d75971e-host-etc-kube\") pod \"network-operator-7bd846bfc4-5r5r4\" (UID: \"07a4fd92-0fd1-4688-b2db-de615d75971e\") " pod="openshift-network-operator/network-operator-7bd846bfc4-5r5r4" Mar 18 09:04:16.344845 master-0 kubenswrapper[28766]: I0318 09:04:16.343960 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/2207df9e-f21e-4c30-98d5-248ae99c245e-host-slash\") pod \"ovnkube-node-cxws9\" (UID: \"2207df9e-f21e-4c30-98d5-248ae99c245e\") " pod="openshift-ovn-kubernetes/ovnkube-node-cxws9" Mar 18 09:04:16.344845 master-0 kubenswrapper[28766]: I0318 09:04:16.343980 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2207df9e-f21e-4c30-98d5-248ae99c245e-host-run-ovn-kubernetes\") pod \"ovnkube-node-cxws9\" (UID: \"2207df9e-f21e-4c30-98d5-248ae99c245e\") " pod="openshift-ovn-kubernetes/ovnkube-node-cxws9" Mar 18 09:04:16.344845 master-0 kubenswrapper[28766]: I0318 09:04:16.344001 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/40f3b7a4-107c-4f1d-a3ab-b5d2309c373b-auth-proxy-config\") pod \"machine-config-operator-84d549f6d5-4hj54\" (UID: \"40f3b7a4-107c-4f1d-a3ab-b5d2309c373b\") " pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-4hj54" Mar 18 09:04:16.344845 master-0 kubenswrapper[28766]: I0318 09:04:16.344024 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: 
\"kubernetes.io/secret/edc7f629-4288-443b-aa8e-78bc6a09c848-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-57f769d897-bwqt7\" (UID: \"edc7f629-4288-443b-aa8e-78bc6a09c848\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-bwqt7" Mar 18 09:04:16.344845 master-0 kubenswrapper[28766]: I0318 09:04:16.344045 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f65344cd-8571-4a78-927f-eec46ec1af51-utilities\") pod \"redhat-marketplace-jg58c\" (UID: \"f65344cd-8571-4a78-927f-eec46ec1af51\") " pod="openshift-marketplace/redhat-marketplace-jg58c" Mar 18 09:04:16.344845 master-0 kubenswrapper[28766]: I0318 09:04:16.344064 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/f826efe0-60a1-4465-b8d0-d4069ed507a1-run\") pod \"tuned-zzqc6\" (UID: \"f826efe0-60a1-4465-b8d0-d4069ed507a1\") " pod="openshift-cluster-node-tuning-operator/tuned-zzqc6" Mar 18 09:04:16.344845 master-0 kubenswrapper[28766]: I0318 09:04:16.344088 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/939efa41-8f40-4f91-bee4-0425aead9760-etcd-service-ca\") pod \"etcd-operator-8544cbcf9c-f4jvq\" (UID: \"939efa41-8f40-4f91-bee4-0425aead9760\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-f4jvq" Mar 18 09:04:16.344845 master-0 kubenswrapper[28766]: I0318 09:04:16.344110 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/40f3b7a4-107c-4f1d-a3ab-b5d2309c373b-images\") pod \"machine-config-operator-84d549f6d5-4hj54\" (UID: \"40f3b7a4-107c-4f1d-a3ab-b5d2309c373b\") " pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-4hj54" Mar 18 09:04:16.344845 master-0 kubenswrapper[28766]: I0318 09:04:16.344139 28766 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/31a92270-efed-44fe-871e-90333235e85f-trusted-ca-bundle\") pod \"insights-operator-68bf6ff9d6-kv7n5\" (UID: \"31a92270-efed-44fe-871e-90333235e85f\") " pod="openshift-insights/insights-operator-68bf6ff9d6-kv7n5" Mar 18 09:04:16.344845 master-0 kubenswrapper[28766]: I0318 09:04:16.344163 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/91a6fa86-8c58-43bc-a2d4-2b20901269f7-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-7bbc969446-dblgh\" (UID: \"91a6fa86-8c58-43bc-a2d4-2b20901269f7\") " pod="openshift-monitoring/kube-state-metrics-7bbc969446-dblgh" Mar 18 09:04:16.344845 master-0 kubenswrapper[28766]: I0318 09:04:16.344193 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b0280499-8277-46f0-bd8c-058a47a99e19-config\") pod \"service-ca-operator-b865698dc-g2lc8\" (UID: \"b0280499-8277-46f0-bd8c-058a47a99e19\") " pod="openshift-service-ca-operator/service-ca-operator-b865698dc-g2lc8" Mar 18 09:04:16.344845 master-0 kubenswrapper[28766]: I0318 09:04:16.344219 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cco-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e64ea71a-1e89-409a-9607-4d3cea093643-cco-trusted-ca\") pod \"cloud-credential-operator-744f9dbf77-v8ft8\" (UID: \"e64ea71a-1e89-409a-9607-4d3cea093643\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-744f9dbf77-v8ft8" Mar 18 09:04:16.344845 master-0 kubenswrapper[28766]: I0318 09:04:16.344238 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-sysctl-conf\" (UniqueName: \"kubernetes.io/host-path/f826efe0-60a1-4465-b8d0-d4069ed507a1-etc-sysctl-conf\") 
pod \"tuned-zzqc6\" (UID: \"f826efe0-60a1-4465-b8d0-d4069ed507a1\") " pod="openshift-cluster-node-tuning-operator/tuned-zzqc6" Mar 18 09:04:16.344845 master-0 kubenswrapper[28766]: I0318 09:04:16.344259 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/f826efe0-60a1-4465-b8d0-d4069ed507a1-tmp\") pod \"tuned-zzqc6\" (UID: \"f826efe0-60a1-4465-b8d0-d4069ed507a1\") " pod="openshift-cluster-node-tuning-operator/tuned-zzqc6" Mar 18 09:04:16.344845 master-0 kubenswrapper[28766]: I0318 09:04:16.344282 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/ad4cf9b2-4e66-4921-a30c-7b659bff06ab-default-certificate\") pod \"router-default-7dcf5569b5-8sbgd\" (UID: \"ad4cf9b2-4e66-4921-a30c-7b659bff06ab\") " pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" Mar 18 09:04:16.344845 master-0 kubenswrapper[28766]: I0318 09:04:16.344302 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2700f537-8f31-4380-a527-3e697a8122cc-trusted-ca-bundle\") pod \"apiserver-556c8fbcff-5shs8\" (UID: \"2700f537-8f31-4380-a527-3e697a8122cc\") " pod="openshift-oauth-apiserver/apiserver-556c8fbcff-5shs8" Mar 18 09:04:16.344845 master-0 kubenswrapper[28766]: I0318 09:04:16.344331 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/939efa41-8f40-4f91-bee4-0425aead9760-config\") pod \"etcd-operator-8544cbcf9c-f4jvq\" (UID: \"939efa41-8f40-4f91-bee4-0425aead9760\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-f4jvq" Mar 18 09:04:16.344845 master-0 kubenswrapper[28766]: I0318 09:04:16.344364 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zj9rk\" (UniqueName: 
\"kubernetes.io/projected/97730ec2-e6f1-4f8c-b85c-3c10623d06ce-kube-api-access-zj9rk\") pod \"cluster-baremetal-operator-6f69995874-cf6qn\" (UID: \"97730ec2-e6f1-4f8c-b85c-3c10623d06ce\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-cf6qn" Mar 18 09:04:16.344845 master-0 kubenswrapper[28766]: I0318 09:04:16.344387 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalogserver-certs\" (UniqueName: \"kubernetes.io/secret/43fbd379-dd1e-4287-bd76-fd3ec51cde43-catalogserver-certs\") pod \"catalogd-controller-manager-6864dc98f7-phjp8\" (UID: \"43fbd379-dd1e-4287-bd76-fd3ec51cde43\") " pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-phjp8" Mar 18 09:04:16.344845 master-0 kubenswrapper[28766]: I0318 09:04:16.344409 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/91a6fa86-8c58-43bc-a2d4-2b20901269f7-metrics-client-ca\") pod \"kube-state-metrics-7bbc969446-dblgh\" (UID: \"91a6fa86-8c58-43bc-a2d4-2b20901269f7\") " pod="openshift-monitoring/kube-state-metrics-7bbc969446-dblgh" Mar 18 09:04:16.344845 master-0 kubenswrapper[28766]: I0318 09:04:16.344436 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7962fb40-1170-4c00-b1bf-92966aeae807-trusted-ca\") pod \"cluster-image-registry-operator-5549dc66cb-vxsth\" (UID: \"7962fb40-1170-4c00-b1bf-92966aeae807\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-vxsth" Mar 18 09:04:16.344845 master-0 kubenswrapper[28766]: I0318 09:04:16.344461 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/336e741d-ac9a-4b94-9fbb-c9010e37c2d0-mcc-auth-proxy-config\") pod \"machine-config-controller-b4f87c5b9-nm47n\" (UID: \"336e741d-ac9a-4b94-9fbb-c9010e37c2d0\") " 
pod="openshift-machine-config-operator/machine-config-controller-b4f87c5b9-nm47n" Mar 18 09:04:16.344845 master-0 kubenswrapper[28766]: I0318 09:04:16.344482 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-systemd\" (UniqueName: \"kubernetes.io/host-path/f826efe0-60a1-4465-b8d0-d4069ed507a1-etc-systemd\") pod \"tuned-zzqc6\" (UID: \"f826efe0-60a1-4465-b8d0-d4069ed507a1\") " pod="openshift-cluster-node-tuning-operator/tuned-zzqc6" Mar 18 09:04:16.344845 master-0 kubenswrapper[28766]: I0318 09:04:16.344508 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fbsgx\" (UniqueName: \"kubernetes.io/projected/33a5c021-23c3-4a97-b5f3-77fd6dcba1ab-kube-api-access-fbsgx\") pod \"operator-controller-controller-manager-57777556ff-chjqr\" (UID: \"33a5c021-23c3-4a97-b5f3-77fd6dcba1ab\") " pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-chjqr" Mar 18 09:04:16.344845 master-0 kubenswrapper[28766]: I0318 09:04:16.344532 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/92542f7c-182b-45a8-bbf3-00e99ba7acee-catalog-content\") pod \"community-operators-78szh\" (UID: \"92542f7c-182b-45a8-bbf3-00e99ba7acee\") " pod="openshift-marketplace/community-operators-78szh" Mar 18 09:04:16.344845 master-0 kubenswrapper[28766]: I0318 09:04:16.344552 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c110b293-2c6b-496b-b015-23aada98cb4b-service-ca-bundle\") pod \"authentication-operator-5885bfd7f4-5g8tz\" (UID: \"c110b293-2c6b-496b-b015-23aada98cb4b\") " pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-5g8tz" Mar 18 09:04:16.344845 master-0 kubenswrapper[28766]: I0318 09:04:16.344572 28766 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-dfjmx\" (UniqueName: \"kubernetes.io/projected/772bc250-2e57-4ce0-883c-d44281fcb0be-kube-api-access-dfjmx\") pod \"openshift-controller-manager-operator-8c94f4649-r758j\" (UID: \"772bc250-2e57-4ce0-883c-d44281fcb0be\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-r758j" Mar 18 09:04:16.344845 master-0 kubenswrapper[28766]: I0318 09:04:16.344596 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tsc6v\" (UniqueName: \"kubernetes.io/projected/f650e6f0-fb74-4083-a7a9-fa4df513108f-kube-api-access-tsc6v\") pod \"network-check-source-b4bf74f6-7z5jl\" (UID: \"f650e6f0-fb74-4083-a7a9-fa4df513108f\") " pod="openshift-network-diagnostics/network-check-source-b4bf74f6-7z5jl" Mar 18 09:04:16.344845 master-0 kubenswrapper[28766]: I0318 09:04:16.344616 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8a6ab2be-d018-4fd5-bfbb-6b88aec28663-serving-cert\") pod \"openshift-kube-scheduler-operator-dddff6458-9p4bb\" (UID: \"8a6ab2be-d018-4fd5-bfbb-6b88aec28663\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-9p4bb" Mar 18 09:04:16.344845 master-0 kubenswrapper[28766]: I0318 09:04:16.344633 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/b5f9f50b-e7b4-4b81-864b-349303f21447-audit\") pod \"apiserver-7bb69b5c5c-djsr9\" (UID: \"b5f9f50b-e7b4-4b81-864b-349303f21447\") " pod="openshift-apiserver/apiserver-7bb69b5c5c-djsr9" Mar 18 09:04:16.344845 master-0 kubenswrapper[28766]: I0318 09:04:16.344656 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/ad4cf9b2-4e66-4921-a30c-7b659bff06ab-stats-auth\") pod \"router-default-7dcf5569b5-8sbgd\" (UID: \"ad4cf9b2-4e66-4921-a30c-7b659bff06ab\") 
" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" Mar 18 09:04:16.344845 master-0 kubenswrapper[28766]: I0318 09:04:16.344675 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/f9fa104a-4979-4023-8d7e-a965f11bc7db-system-cni-dir\") pod \"multus-additional-cni-plugins-xpzrz\" (UID: \"f9fa104a-4979-4023-8d7e-a965f11bc7db\") " pod="openshift-multus/multus-additional-cni-plugins-xpzrz" Mar 18 09:04:16.344845 master-0 kubenswrapper[28766]: I0318 09:04:16.344694 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/939efa41-8f40-4f91-bee4-0425aead9760-serving-cert\") pod \"etcd-operator-8544cbcf9c-f4jvq\" (UID: \"939efa41-8f40-4f91-bee4-0425aead9760\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-f4jvq" Mar 18 09:04:16.344845 master-0 kubenswrapper[28766]: I0318 09:04:16.344712 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4cc14de2-59e8-49e1-9eeb-f87c5e9d8a75-proxy-ca-bundles\") pod \"controller-manager-6448dc88d8-cnd9q\" (UID: \"4cc14de2-59e8-49e1-9eeb-f87c5e9d8a75\") " pod="openshift-controller-manager/controller-manager-6448dc88d8-cnd9q" Mar 18 09:04:16.344845 master-0 kubenswrapper[28766]: I0318 09:04:16.344729 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4-multus-conf-dir\") pod \"multus-bpf5c\" (UID: \"fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4\") " pod="openshift-multus/multus-bpf5c" Mar 18 09:04:16.344845 master-0 kubenswrapper[28766]: I0318 09:04:16.344754 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: 
\"kubernetes.io/secret/495e0cff-fca8-4dad-9247-2fc0e7ce86fc-machine-approver-tls\") pod \"machine-approver-5c6485487f-87vpl\" (UID: \"495e0cff-fca8-4dad-9247-2fc0e7ce86fc\") " pod="openshift-cluster-machine-approver/machine-approver-5c6485487f-87vpl" Mar 18 09:04:16.344845 master-0 kubenswrapper[28766]: I0318 09:04:16.344780 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2207df9e-f21e-4c30-98d5-248ae99c245e-run-openvswitch\") pod \"ovnkube-node-cxws9\" (UID: \"2207df9e-f21e-4c30-98d5-248ae99c245e\") " pod="openshift-ovn-kubernetes/ovnkube-node-cxws9" Mar 18 09:04:16.344845 master-0 kubenswrapper[28766]: I0318 09:04:16.344804 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/2207df9e-f21e-4c30-98d5-248ae99c245e-host-cni-bin\") pod \"ovnkube-node-cxws9\" (UID: \"2207df9e-f21e-4c30-98d5-248ae99c245e\") " pod="openshift-ovn-kubernetes/ovnkube-node-cxws9" Mar 18 09:04:16.344845 master-0 kubenswrapper[28766]: I0318 09:04:16.344827 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/f826efe0-60a1-4465-b8d0-d4069ed507a1-etc-kubernetes\") pod \"tuned-zzqc6\" (UID: \"f826efe0-60a1-4465-b8d0-d4069ed507a1\") " pod="openshift-cluster-node-tuning-operator/tuned-zzqc6" Mar 18 09:04:16.346108 master-0 kubenswrapper[28766]: I0318 09:04:16.345966 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a268d595-18c2-43a2-8ed5-eb64c76c490f-catalog-content\") pod \"certified-operators-vng9w\" (UID: \"a268d595-18c2-43a2-8ed5-eb64c76c490f\") " pod="openshift-marketplace/certified-operators-vng9w" Mar 18 09:04:16.346140 master-0 kubenswrapper[28766]: I0318 09:04:16.346111 28766 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a268d595-18c2-43a2-8ed5-eb64c76c490f-utilities\") pod \"certified-operators-vng9w\" (UID: \"a268d595-18c2-43a2-8ed5-eb64c76c490f\") " pod="openshift-marketplace/certified-operators-vng9w" Mar 18 09:04:16.346545 master-0 kubenswrapper[28766]: I0318 09:04:16.346503 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/2207df9e-f21e-4c30-98d5-248ae99c245e-ovn-node-metrics-cert\") pod \"ovnkube-node-cxws9\" (UID: \"2207df9e-f21e-4c30-98d5-248ae99c245e\") " pod="openshift-ovn-kubernetes/ovnkube-node-cxws9" Mar 18 09:04:16.346611 master-0 kubenswrapper[28766]: I0318 09:04:16.346576 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d858bfd6-e69b-4c93-a6d7-95cc0fc3ca29-metrics-certs\") pod \"network-metrics-daemon-6x85n\" (UID: \"d858bfd6-e69b-4c93-a6d7-95cc0fc3ca29\") " pod="openshift-multus/network-metrics-daemon-6x85n" Mar 18 09:04:16.346841 master-0 kubenswrapper[28766]: I0318 09:04:16.346788 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/939efa41-8f40-4f91-bee4-0425aead9760-etcd-service-ca\") pod \"etcd-operator-8544cbcf9c-f4jvq\" (UID: \"939efa41-8f40-4f91-bee4-0425aead9760\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-f4jvq" Mar 18 09:04:16.346841 master-0 kubenswrapper[28766]: I0318 09:04:16.346810 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/edc7f629-4288-443b-aa8e-78bc6a09c848-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-57f769d897-bwqt7\" (UID: \"edc7f629-4288-443b-aa8e-78bc6a09c848\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-bwqt7" Mar 18 
09:04:16.347062 master-0 kubenswrapper[28766]: I0318 09:04:16.347034 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f65344cd-8571-4a78-927f-eec46ec1af51-utilities\") pod \"redhat-marketplace-jg58c\" (UID: \"f65344cd-8571-4a78-927f-eec46ec1af51\") " pod="openshift-marketplace/redhat-marketplace-jg58c" Mar 18 09:04:16.347122 master-0 kubenswrapper[28766]: I0318 09:04:16.347093 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a268d595-18c2-43a2-8ed5-eb64c76c490f-catalog-content\") pod \"certified-operators-vng9w\" (UID: \"a268d595-18c2-43a2-8ed5-eb64c76c490f\") " pod="openshift-marketplace/certified-operators-vng9w" Mar 18 09:04:16.347460 master-0 kubenswrapper[28766]: I0318 09:04:16.347420 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/94e9e4fc-bfaa-4ba5-ad6d-fe76b91932e9-trusted-ca\") pod \"ingress-operator-66b84d69b-7h94d\" (UID: \"94e9e4fc-bfaa-4ba5-ad6d-fe76b91932e9\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-7h94d" Mar 18 09:04:16.347524 master-0 kubenswrapper[28766]: I0318 09:04:16.347454 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/939efa41-8f40-4f91-bee4-0425aead9760-config\") pod \"etcd-operator-8544cbcf9c-f4jvq\" (UID: \"939efa41-8f40-4f91-bee4-0425aead9760\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-f4jvq" Mar 18 09:04:16.347737 master-0 kubenswrapper[28766]: I0318 09:04:16.347703 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/1794b726-5c0d-4a72-8ddd-418a2cbd8ded-tmpfs\") pod \"packageserver-5f48d895dc-ttr9f\" (UID: \"1794b726-5c0d-4a72-8ddd-418a2cbd8ded\") " 
pod="openshift-operator-lifecycle-manager/packageserver-5f48d895dc-ttr9f" Mar 18 09:04:16.347812 master-0 kubenswrapper[28766]: I0318 09:04:16.347743 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2207df9e-f21e-4c30-98d5-248ae99c245e-etc-openvswitch\") pod \"ovnkube-node-cxws9\" (UID: \"2207df9e-f21e-4c30-98d5-248ae99c245e\") " pod="openshift-ovn-kubernetes/ovnkube-node-cxws9" Mar 18 09:04:16.347812 master-0 kubenswrapper[28766]: I0318 09:04:16.347752 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/bd8ee9ae-4765-46bc-84d7-b5857fc3fb4a-apiservice-cert\") pod \"cluster-node-tuning-operator-598fbc5f8f-tj9b9\" (UID: \"bd8ee9ae-4765-46bc-84d7-b5857fc3fb4a\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-tj9b9" Mar 18 09:04:16.347812 master-0 kubenswrapper[28766]: I0318 09:04:16.347770 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ffc5379c-651f-490c-90f4-1285b9093596-cert\") pod \"cluster-autoscaler-operator-866dc4744-lxj7x\" (UID: \"ffc5379c-651f-490c-90f4-1285b9093596\") " pod="openshift-machine-api/cluster-autoscaler-operator-866dc4744-lxj7x" Mar 18 09:04:16.347812 master-0 kubenswrapper[28766]: I0318 09:04:16.347797 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/34f2ec5e-cb68-415b-a9f2-5b7f10fa9bbe-marketplace-operator-metrics\") pod \"marketplace-operator-89ccd998f-bcwsv\" (UID: \"34f2ec5e-cb68-415b-a9f2-5b7f10fa9bbe\") " pod="openshift-marketplace/marketplace-operator-89ccd998f-bcwsv" Mar 18 09:04:16.350926 master-0 kubenswrapper[28766]: I0318 09:04:16.347825 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-zkfql\" (UniqueName: \"kubernetes.io/projected/ad4cf9b2-4e66-4921-a30c-7b659bff06ab-kube-api-access-zkfql\") pod \"router-default-7dcf5569b5-8sbgd\" (UID: \"ad4cf9b2-4e66-4921-a30c-7b659bff06ab\") " pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" Mar 18 09:04:16.350926 master-0 kubenswrapper[28766]: I0318 09:04:16.347845 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/2207df9e-f21e-4c30-98d5-248ae99c245e-ovnkube-config\") pod \"ovnkube-node-cxws9\" (UID: \"2207df9e-f21e-4c30-98d5-248ae99c245e\") " pod="openshift-ovn-kubernetes/ovnkube-node-cxws9" Mar 18 09:04:16.350926 master-0 kubenswrapper[28766]: I0318 09:04:16.347896 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/bd8ee9ae-4765-46bc-84d7-b5857fc3fb4a-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-598fbc5f8f-tj9b9\" (UID: \"bd8ee9ae-4765-46bc-84d7-b5857fc3fb4a\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-tj9b9" Mar 18 09:04:16.350926 master-0 kubenswrapper[28766]: I0318 09:04:16.347921 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dm6nf\" (UniqueName: \"kubernetes.io/projected/52e32e2d-33ab-4351-ae8a-80acd6077d70-kube-api-access-dm6nf\") pod \"redhat-operators-pk9z9\" (UID: \"52e32e2d-33ab-4351-ae8a-80acd6077d70\") " pod="openshift-marketplace/redhat-operators-pk9z9" Mar 18 09:04:16.350926 master-0 kubenswrapper[28766]: I0318 09:04:16.347925 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b0280499-8277-46f0-bd8c-058a47a99e19-config\") pod \"service-ca-operator-b865698dc-g2lc8\" (UID: \"b0280499-8277-46f0-bd8c-058a47a99e19\") " pod="openshift-service-ca-operator/service-ca-operator-b865698dc-g2lc8" Mar 18 
09:04:16.350926 master-0 kubenswrapper[28766]: I0318 09:04:16.347943 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/07a4fd92-0fd1-4688-b2db-de615d75971e-metrics-tls\") pod \"network-operator-7bd846bfc4-5r5r4\" (UID: \"07a4fd92-0fd1-4688-b2db-de615d75971e\") " pod="openshift-network-operator/network-operator-7bd846bfc4-5r5r4" Mar 18 09:04:16.350926 master-0 kubenswrapper[28766]: I0318 09:04:16.347964 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/2700f537-8f31-4380-a527-3e697a8122cc-etcd-client\") pod \"apiserver-556c8fbcff-5shs8\" (UID: \"2700f537-8f31-4380-a527-3e697a8122cc\") " pod="openshift-oauth-apiserver/apiserver-556c8fbcff-5shs8" Mar 18 09:04:16.350926 master-0 kubenswrapper[28766]: I0318 09:04:16.347984 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/939efa41-8f40-4f91-bee4-0425aead9760-etcd-client\") pod \"etcd-operator-8544cbcf9c-f4jvq\" (UID: \"939efa41-8f40-4f91-bee4-0425aead9760\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-f4jvq" Mar 18 09:04:16.350926 master-0 kubenswrapper[28766]: I0318 09:04:16.348006 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/ccf74af5-d4fd-4ed3-9784-42397ea798c5-host-etc-kube\") pod \"cluster-cloud-controller-manager-operator-7dff898856-9xtls\" (UID: \"ccf74af5-d4fd-4ed3-9784-42397ea798c5\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-9xtls" Mar 18 09:04:16.350926 master-0 kubenswrapper[28766]: I0318 09:04:16.348033 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/34f2ec5e-cb68-415b-a9f2-5b7f10fa9bbe-marketplace-trusted-ca\") pod \"marketplace-operator-89ccd998f-bcwsv\" (UID: \"34f2ec5e-cb68-415b-a9f2-5b7f10fa9bbe\") " pod="openshift-marketplace/marketplace-operator-89ccd998f-bcwsv" Mar 18 09:04:16.350926 master-0 kubenswrapper[28766]: I0318 09:04:16.348055 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4cc14de2-59e8-49e1-9eeb-f87c5e9d8a75-config\") pod \"controller-manager-6448dc88d8-cnd9q\" (UID: \"4cc14de2-59e8-49e1-9eeb-f87c5e9d8a75\") " pod="openshift-controller-manager/controller-manager-6448dc88d8-cnd9q" Mar 18 09:04:16.350926 master-0 kubenswrapper[28766]: I0318 09:04:16.348078 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/edc7f629-4288-443b-aa8e-78bc6a09c848-ovnkube-config\") pod \"ovnkube-control-plane-57f769d897-bwqt7\" (UID: \"edc7f629-4288-443b-aa8e-78bc6a09c848\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-bwqt7" Mar 18 09:04:16.350926 master-0 kubenswrapper[28766]: I0318 09:04:16.348100 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/8d89af2f-47f5-4ee5-a790-e162c2dee3ce-etc-cvo-updatepayloads\") pod \"cluster-version-operator-7d58488df-8btcx\" (UID: \"8d89af2f-47f5-4ee5-a790-e162c2dee3ce\") " pod="openshift-cluster-version/cluster-version-operator-7d58488df-8btcx" Mar 18 09:04:16.350926 master-0 kubenswrapper[28766]: I0318 09:04:16.348128 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vfjgn\" (UniqueName: \"kubernetes.io/projected/e2ade7e6-cecd-4e98-8f85-ea8219303d75-kube-api-access-vfjgn\") pod \"cluster-olm-operator-67dcd4998-zqxv2\" (UID: \"e2ade7e6-cecd-4e98-8f85-ea8219303d75\") " 
pod="openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-zqxv2" Mar 18 09:04:16.350926 master-0 kubenswrapper[28766]: I0318 09:04:16.348149 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/7962fb40-1170-4c00-b1bf-92966aeae807-image-registry-operator-tls\") pod \"cluster-image-registry-operator-5549dc66cb-vxsth\" (UID: \"7962fb40-1170-4c00-b1bf-92966aeae807\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-vxsth" Mar 18 09:04:16.350926 master-0 kubenswrapper[28766]: I0318 09:04:16.348168 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/f826efe0-60a1-4465-b8d0-d4069ed507a1-sys\") pod \"tuned-zzqc6\" (UID: \"f826efe0-60a1-4465-b8d0-d4069ed507a1\") " pod="openshift-cluster-node-tuning-operator/tuned-zzqc6" Mar 18 09:04:16.350926 master-0 kubenswrapper[28766]: I0318 09:04:16.348174 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalogserver-certs\" (UniqueName: \"kubernetes.io/secret/43fbd379-dd1e-4287-bd76-fd3ec51cde43-catalogserver-certs\") pod \"catalogd-controller-manager-6864dc98f7-phjp8\" (UID: \"43fbd379-dd1e-4287-bd76-fd3ec51cde43\") " pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-phjp8" Mar 18 09:04:16.350926 master-0 kubenswrapper[28766]: I0318 09:04:16.348193 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gjq4w\" (UniqueName: \"kubernetes.io/projected/1794b726-5c0d-4a72-8ddd-418a2cbd8ded-kube-api-access-gjq4w\") pod \"packageserver-5f48d895dc-ttr9f\" (UID: \"1794b726-5c0d-4a72-8ddd-418a2cbd8ded\") " pod="openshift-operator-lifecycle-manager/packageserver-5f48d895dc-ttr9f" Mar 18 09:04:16.350926 master-0 kubenswrapper[28766]: I0318 09:04:16.348421 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"etc-modprobe-d\" (UniqueName: \"kubernetes.io/host-path/f826efe0-60a1-4465-b8d0-d4069ed507a1-etc-modprobe-d\") pod \"tuned-zzqc6\" (UID: \"f826efe0-60a1-4465-b8d0-d4069ed507a1\") " pod="openshift-cluster-node-tuning-operator/tuned-zzqc6" Mar 18 09:04:16.350926 master-0 kubenswrapper[28766]: I0318 09:04:16.348453 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8lsw9\" (UniqueName: \"kubernetes.io/projected/bd8ee9ae-4765-46bc-84d7-b5857fc3fb4a-kube-api-access-8lsw9\") pod \"cluster-node-tuning-operator-598fbc5f8f-tj9b9\" (UID: \"bd8ee9ae-4765-46bc-84d7-b5857fc3fb4a\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-tj9b9" Mar 18 09:04:16.350926 master-0 kubenswrapper[28766]: I0318 09:04:16.348476 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f65344cd-8571-4a78-927f-eec46ec1af51-catalog-content\") pod \"redhat-marketplace-jg58c\" (UID: \"f65344cd-8571-4a78-927f-eec46ec1af51\") " pod="openshift-marketplace/redhat-marketplace-jg58c" Mar 18 09:04:16.350926 master-0 kubenswrapper[28766]: I0318 09:04:16.348495 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pjrfz\" (UniqueName: \"kubernetes.io/projected/a7dab805-612b-404c-ab97-8cee927169db-kube-api-access-pjrfz\") pod \"machine-config-daemon-qsj46\" (UID: \"a7dab805-612b-404c-ab97-8cee927169db\") " pod="openshift-machine-config-operator/machine-config-daemon-qsj46" Mar 18 09:04:16.350926 master-0 kubenswrapper[28766]: I0318 09:04:16.348517 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/2700f537-8f31-4380-a527-3e697a8122cc-encryption-config\") pod \"apiserver-556c8fbcff-5shs8\" (UID: \"2700f537-8f31-4380-a527-3e697a8122cc\") " 
pod="openshift-oauth-apiserver/apiserver-556c8fbcff-5shs8" Mar 18 09:04:16.350926 master-0 kubenswrapper[28766]: I0318 09:04:16.348534 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operand-assets\" (UniqueName: \"kubernetes.io/empty-dir/e2ade7e6-cecd-4e98-8f85-ea8219303d75-operand-assets\") pod \"cluster-olm-operator-67dcd4998-zqxv2\" (UID: \"e2ade7e6-cecd-4e98-8f85-ea8219303d75\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-zqxv2" Mar 18 09:04:16.350926 master-0 kubenswrapper[28766]: I0318 09:04:16.348556 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b5f9f50b-e7b4-4b81-864b-349303f21447-trusted-ca-bundle\") pod \"apiserver-7bb69b5c5c-djsr9\" (UID: \"b5f9f50b-e7b4-4b81-864b-349303f21447\") " pod="openshift-apiserver/apiserver-7bb69b5c5c-djsr9" Mar 18 09:04:16.350926 master-0 kubenswrapper[28766]: I0318 09:04:16.348579 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/94e9e4fc-bfaa-4ba5-ad6d-fe76b91932e9-metrics-tls\") pod \"ingress-operator-66b84d69b-7h94d\" (UID: \"94e9e4fc-bfaa-4ba5-ad6d-fe76b91932e9\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-7h94d" Mar 18 09:04:16.350926 master-0 kubenswrapper[28766]: I0318 09:04:16.348602 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x9w7l\" (UniqueName: \"kubernetes.io/projected/16d633c5-e0aa-4fb6-83e0-a2e976334406-kube-api-access-x9w7l\") pod \"network-node-identity-n5vqx\" (UID: \"16d633c5-e0aa-4fb6-83e0-a2e976334406\") " pod="openshift-network-node-identity/network-node-identity-n5vqx" Mar 18 09:04:16.350926 master-0 kubenswrapper[28766]: I0318 09:04:16.348624 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/495e0cff-fca8-4dad-9247-2fc0e7ce86fc-auth-proxy-config\") pod \"machine-approver-5c6485487f-87vpl\" (UID: \"495e0cff-fca8-4dad-9247-2fc0e7ce86fc\") " pod="openshift-cluster-machine-approver/machine-approver-5c6485487f-87vpl" Mar 18 09:04:16.350926 master-0 kubenswrapper[28766]: I0318 09:04:16.348645 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/998cabe9-d479-439f-b1c0-1d8c49aefeb9-tls-certificates\") pod \"prometheus-operator-admission-webhook-69c6b55594-wkgdb\" (UID: \"998cabe9-d479-439f-b1c0-1d8c49aefeb9\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-69c6b55594-wkgdb" Mar 18 09:04:16.350926 master-0 kubenswrapper[28766]: I0318 09:04:16.348669 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/59d50dd5-6793-4f96-a769-31e086ecc7e4-package-server-manager-serving-cert\") pod \"package-server-manager-7b95f86987-q8ff6\" (UID: \"59d50dd5-6793-4f96-a769-31e086ecc7e4\") " pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-q8ff6" Mar 18 09:04:16.350926 master-0 kubenswrapper[28766]: I0318 09:04:16.348692 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/b5f9f50b-e7b4-4b81-864b-349303f21447-encryption-config\") pod \"apiserver-7bb69b5c5c-djsr9\" (UID: \"b5f9f50b-e7b4-4b81-864b-349303f21447\") " pod="openshift-apiserver/apiserver-7bb69b5c5c-djsr9" Mar 18 09:04:16.350926 master-0 kubenswrapper[28766]: I0318 09:04:16.348715 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ad4cf9b2-4e66-4921-a30c-7b659bff06ab-metrics-certs\") pod \"router-default-7dcf5569b5-8sbgd\" (UID: \"ad4cf9b2-4e66-4921-a30c-7b659bff06ab\") " 
pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" Mar 18 09:04:16.350926 master-0 kubenswrapper[28766]: I0318 09:04:16.348736 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5ngk7\" (UniqueName: \"kubernetes.io/projected/07a4fd92-0fd1-4688-b2db-de615d75971e-kube-api-access-5ngk7\") pod \"network-operator-7bd846bfc4-5r5r4\" (UID: \"07a4fd92-0fd1-4688-b2db-de615d75971e\") " pod="openshift-network-operator/network-operator-7bd846bfc4-5r5r4" Mar 18 09:04:16.350926 master-0 kubenswrapper[28766]: I0318 09:04:16.348762 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/f9fa104a-4979-4023-8d7e-a965f11bc7db-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-xpzrz\" (UID: \"f9fa104a-4979-4023-8d7e-a965f11bc7db\") " pod="openshift-multus/multus-additional-cni-plugins-xpzrz" Mar 18 09:04:16.350926 master-0 kubenswrapper[28766]: I0318 09:04:16.348770 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/f826efe0-60a1-4465-b8d0-d4069ed507a1-tmp\") pod \"tuned-zzqc6\" (UID: \"f826efe0-60a1-4465-b8d0-d4069ed507a1\") " pod="openshift-cluster-node-tuning-operator/tuned-zzqc6" Mar 18 09:04:16.350926 master-0 kubenswrapper[28766]: I0318 09:04:16.348786 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b689k\" (UniqueName: \"kubernetes.io/projected/e64ea71a-1e89-409a-9607-4d3cea093643-kube-api-access-b689k\") pod \"cloud-credential-operator-744f9dbf77-v8ft8\" (UID: \"e64ea71a-1e89-409a-9607-4d3cea093643\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-744f9dbf77-v8ft8" Mar 18 09:04:16.350926 master-0 kubenswrapper[28766]: I0318 09:04:16.348811 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" 
(UniqueName: \"kubernetes.io/secret/e7b72267-fc08-41ed-a92b-9fca7372aba6-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-58845fbb57-nc7hf\" (UID: \"e7b72267-fc08-41ed-a92b-9fca7372aba6\") " pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-nc7hf" Mar 18 09:04:16.350926 master-0 kubenswrapper[28766]: I0318 09:04:16.348835 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8w58l\" (UniqueName: \"kubernetes.io/projected/939efa41-8f40-4f91-bee4-0425aead9760-kube-api-access-8w58l\") pod \"etcd-operator-8544cbcf9c-f4jvq\" (UID: \"939efa41-8f40-4f91-bee4-0425aead9760\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-f4jvq" Mar 18 09:04:16.350926 master-0 kubenswrapper[28766]: I0318 09:04:16.348871 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/d71aa1b9-6eb5-4331-b959-8930e10817b4-prometheus-operator-tls\") pod \"prometheus-operator-6c8df6d4b-8kgdq\" (UID: \"d71aa1b9-6eb5-4331-b959-8930e10817b4\") " pod="openshift-monitoring/prometheus-operator-6c8df6d4b-8kgdq" Mar 18 09:04:16.350926 master-0 kubenswrapper[28766]: I0318 09:04:16.348888 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/92542f7c-182b-45a8-bbf3-00e99ba7acee-catalog-content\") pod \"community-operators-78szh\" (UID: \"92542f7c-182b-45a8-bbf3-00e99ba7acee\") " pod="openshift-marketplace/community-operators-78szh" Mar 18 09:04:16.350926 master-0 kubenswrapper[28766]: I0318 09:04:16.348901 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b5f9f50b-e7b4-4b81-864b-349303f21447-serving-cert\") pod \"apiserver-7bb69b5c5c-djsr9\" (UID: \"b5f9f50b-e7b4-4b81-864b-349303f21447\") " pod="openshift-apiserver/apiserver-7bb69b5c5c-djsr9" Mar 18 09:04:16.350926 master-0 
kubenswrapper[28766]: I0318 09:04:16.348979 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a7dab805-612b-404c-ab97-8cee927169db-mcd-auth-proxy-config\") pod \"machine-config-daemon-qsj46\" (UID: \"a7dab805-612b-404c-ab97-8cee927169db\") " pod="openshift-machine-config-operator/machine-config-daemon-qsj46" Mar 18 09:04:16.350926 master-0 kubenswrapper[28766]: I0318 09:04:16.349004 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/2700f537-8f31-4380-a527-3e697a8122cc-audit-policies\") pod \"apiserver-556c8fbcff-5shs8\" (UID: \"2700f537-8f31-4380-a527-3e697a8122cc\") " pod="openshift-oauth-apiserver/apiserver-556c8fbcff-5shs8" Mar 18 09:04:16.350926 master-0 kubenswrapper[28766]: I0318 09:04:16.349026 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/2700f537-8f31-4380-a527-3e697a8122cc-audit-dir\") pod \"apiserver-556c8fbcff-5shs8\" (UID: \"2700f537-8f31-4380-a527-3e697a8122cc\") " pod="openshift-oauth-apiserver/apiserver-556c8fbcff-5shs8" Mar 18 09:04:16.350926 master-0 kubenswrapper[28766]: I0318 09:04:16.349046 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/2207df9e-f21e-4c30-98d5-248ae99c245e-run-ovn\") pod \"ovnkube-node-cxws9\" (UID: \"2207df9e-f21e-4c30-98d5-248ae99c245e\") " pod="openshift-ovn-kubernetes/ovnkube-node-cxws9" Mar 18 09:04:16.350926 master-0 kubenswrapper[28766]: I0318 09:04:16.349066 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n959l\" (UniqueName: \"kubernetes.io/projected/573d3a02-e395-4816-963a-cd614ef53f75-kube-api-access-n959l\") pod \"openshift-config-operator-95bf4f4d-7kfrh\" (UID: 
\"573d3a02-e395-4816-963a-cd614ef53f75\") " pod="openshift-config-operator/openshift-config-operator-95bf4f4d-7kfrh" Mar 18 09:04:16.350926 master-0 kubenswrapper[28766]: I0318 09:04:16.349088 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/573d3a02-e395-4816-963a-cd614ef53f75-available-featuregates\") pod \"openshift-config-operator-95bf4f4d-7kfrh\" (UID: \"573d3a02-e395-4816-963a-cd614ef53f75\") " pod="openshift-config-operator/openshift-config-operator-95bf4f4d-7kfrh" Mar 18 09:04:16.350926 master-0 kubenswrapper[28766]: I0318 09:04:16.349193 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8a6ab2be-d018-4fd5-bfbb-6b88aec28663-serving-cert\") pod \"openshift-kube-scheduler-operator-dddff6458-9p4bb\" (UID: \"8a6ab2be-d018-4fd5-bfbb-6b88aec28663\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-9p4bb" Mar 18 09:04:16.350926 master-0 kubenswrapper[28766]: I0318 09:04:16.348578 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c110b293-2c6b-496b-b015-23aada98cb4b-service-ca-bundle\") pod \"authentication-operator-5885bfd7f4-5g8tz\" (UID: \"c110b293-2c6b-496b-b015-23aada98cb4b\") " pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-5g8tz" Mar 18 09:04:16.350926 master-0 kubenswrapper[28766]: I0318 09:04:16.349382 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4-cnibin\") pod \"multus-bpf5c\" (UID: \"fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4\") " pod="openshift-multus/multus-bpf5c" Mar 18 09:04:16.350926 master-0 kubenswrapper[28766]: I0318 09:04:16.349547 28766 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/b5f9f50b-e7b4-4b81-864b-349303f21447-encryption-config\") pod \"apiserver-7bb69b5c5c-djsr9\" (UID: \"b5f9f50b-e7b4-4b81-864b-349303f21447\") " pod="openshift-apiserver/apiserver-7bb69b5c5c-djsr9" Mar 18 09:04:16.350926 master-0 kubenswrapper[28766]: I0318 09:04:16.349564 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f65344cd-8571-4a78-927f-eec46ec1af51-catalog-content\") pod \"redhat-marketplace-jg58c\" (UID: \"f65344cd-8571-4a78-927f-eec46ec1af51\") " pod="openshift-marketplace/redhat-marketplace-jg58c" Mar 18 09:04:16.350926 master-0 kubenswrapper[28766]: I0318 09:04:16.349748 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/939efa41-8f40-4f91-bee4-0425aead9760-serving-cert\") pod \"etcd-operator-8544cbcf9c-f4jvq\" (UID: \"939efa41-8f40-4f91-bee4-0425aead9760\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-f4jvq" Mar 18 09:04:16.350926 master-0 kubenswrapper[28766]: I0318 09:04:16.349749 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operand-assets\" (UniqueName: \"kubernetes.io/empty-dir/e2ade7e6-cecd-4e98-8f85-ea8219303d75-operand-assets\") pod \"cluster-olm-operator-67dcd4998-zqxv2\" (UID: \"e2ade7e6-cecd-4e98-8f85-ea8219303d75\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-zqxv2" Mar 18 09:04:16.350926 master-0 kubenswrapper[28766]: I0318 09:04:16.350069 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/1794b726-5c0d-4a72-8ddd-418a2cbd8ded-tmpfs\") pod \"packageserver-5f48d895dc-ttr9f\" (UID: \"1794b726-5c0d-4a72-8ddd-418a2cbd8ded\") " pod="openshift-operator-lifecycle-manager/packageserver-5f48d895dc-ttr9f" Mar 18 09:04:16.350926 master-0 kubenswrapper[28766]: I0318 09:04:16.350087 28766 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/94e9e4fc-bfaa-4ba5-ad6d-fe76b91932e9-metrics-tls\") pod \"ingress-operator-66b84d69b-7h94d\" (UID: \"94e9e4fc-bfaa-4ba5-ad6d-fe76b91932e9\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-7h94d" Mar 18 09:04:16.350926 master-0 kubenswrapper[28766]: I0318 09:04:16.350226 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/f9fa104a-4979-4023-8d7e-a965f11bc7db-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-xpzrz\" (UID: \"f9fa104a-4979-4023-8d7e-a965f11bc7db\") " pod="openshift-multus/multus-additional-cni-plugins-xpzrz" Mar 18 09:04:16.350926 master-0 kubenswrapper[28766]: I0318 09:04:16.350453 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/34f2ec5e-cb68-415b-a9f2-5b7f10fa9bbe-marketplace-operator-metrics\") pod \"marketplace-operator-89ccd998f-bcwsv\" (UID: \"34f2ec5e-cb68-415b-a9f2-5b7f10fa9bbe\") " pod="openshift-marketplace/marketplace-operator-89ccd998f-bcwsv" Mar 18 09:04:16.350926 master-0 kubenswrapper[28766]: I0318 09:04:16.350455 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/e7b72267-fc08-41ed-a92b-9fca7372aba6-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-58845fbb57-nc7hf\" (UID: \"e7b72267-fc08-41ed-a92b-9fca7372aba6\") " pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-nc7hf" Mar 18 09:04:16.350926 master-0 kubenswrapper[28766]: I0318 09:04:16.350601 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/573d3a02-e395-4816-963a-cd614ef53f75-available-featuregates\") pod 
\"openshift-config-operator-95bf4f4d-7kfrh\" (UID: \"573d3a02-e395-4816-963a-cd614ef53f75\") " pod="openshift-config-operator/openshift-config-operator-95bf4f4d-7kfrh" Mar 18 09:04:16.350926 master-0 kubenswrapper[28766]: I0318 09:04:16.350631 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7962fb40-1170-4c00-b1bf-92966aeae807-trusted-ca\") pod \"cluster-image-registry-operator-5549dc66cb-vxsth\" (UID: \"7962fb40-1170-4c00-b1bf-92966aeae807\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-vxsth" Mar 18 09:04:16.350926 master-0 kubenswrapper[28766]: I0318 09:04:16.350676 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/2207df9e-f21e-4c30-98d5-248ae99c245e-ovnkube-config\") pod \"ovnkube-node-cxws9\" (UID: \"2207df9e-f21e-4c30-98d5-248ae99c245e\") " pod="openshift-ovn-kubernetes/ovnkube-node-cxws9" Mar 18 09:04:16.350926 master-0 kubenswrapper[28766]: I0318 09:04:16.350644 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4-os-release\") pod \"multus-bpf5c\" (UID: \"fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4\") " pod="openshift-multus/multus-bpf5c" Mar 18 09:04:16.350926 master-0 kubenswrapper[28766]: I0318 09:04:16.350756 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4-multus-daemon-config\") pod \"multus-bpf5c\" (UID: \"fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4\") " pod="openshift-multus/multus-bpf5c" Mar 18 09:04:16.350926 master-0 kubenswrapper[28766]: I0318 09:04:16.350801 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"volume-directive-shadow\" (UniqueName: 
\"kubernetes.io/empty-dir/91a6fa86-8c58-43bc-a2d4-2b20901269f7-volume-directive-shadow\") pod \"kube-state-metrics-7bbc969446-dblgh\" (UID: \"91a6fa86-8c58-43bc-a2d4-2b20901269f7\") " pod="openshift-monitoring/kube-state-metrics-7bbc969446-dblgh" Mar 18 09:04:16.350926 master-0 kubenswrapper[28766]: I0318 09:04:16.350840 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2207df9e-f21e-4c30-98d5-248ae99c245e-host-cni-netd\") pod \"ovnkube-node-cxws9\" (UID: \"2207df9e-f21e-4c30-98d5-248ae99c245e\") " pod="openshift-ovn-kubernetes/ovnkube-node-cxws9" Mar 18 09:04:16.350926 master-0 kubenswrapper[28766]: I0318 09:04:16.350878 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/bd8ee9ae-4765-46bc-84d7-b5857fc3fb4a-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-598fbc5f8f-tj9b9\" (UID: \"bd8ee9ae-4765-46bc-84d7-b5857fc3fb4a\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-tj9b9" Mar 18 09:04:16.350926 master-0 kubenswrapper[28766]: I0318 09:04:16.350897 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/260c8aa5-a288-4ee8-b671-f97e90a2f39c-config\") pod \"kube-controller-manager-operator-ff989d6cc-fxn82\" (UID: \"260c8aa5-a288-4ee8-b671-f97e90a2f39c\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-fxn82" Mar 18 09:04:16.350926 master-0 kubenswrapper[28766]: I0318 09:04:16.350945 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dxvk7\" (UniqueName: \"kubernetes.io/projected/b0280499-8277-46f0-bd8c-058a47a99e19-kube-api-access-dxvk7\") pod \"service-ca-operator-b865698dc-g2lc8\" (UID: \"b0280499-8277-46f0-bd8c-058a47a99e19\") " 
pod="openshift-service-ca-operator/service-ca-operator-b865698dc-g2lc8" Mar 18 09:04:16.350926 master-0 kubenswrapper[28766]: I0318 09:04:16.350974 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/8d89af2f-47f5-4ee5-a790-e162c2dee3ce-etc-ssl-certs\") pod \"cluster-version-operator-7d58488df-8btcx\" (UID: \"8d89af2f-47f5-4ee5-a790-e162c2dee3ce\") " pod="openshift-cluster-version/cluster-version-operator-7d58488df-8btcx" Mar 18 09:04:16.350926 master-0 kubenswrapper[28766]: I0318 09:04:16.351013 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rgs9m\" (UniqueName: \"kubernetes.io/projected/3e96b35f-c57a-4e01-82f7-894ea16ac5b8-kube-api-access-rgs9m\") pod \"machine-config-server-2jsz9\" (UID: \"3e96b35f-c57a-4e01-82f7-894ea16ac5b8\") " pod="openshift-machine-config-operator/machine-config-server-2jsz9" Mar 18 09:04:16.350926 master-0 kubenswrapper[28766]: I0318 09:04:16.351040 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x5q4t\" (UniqueName: \"kubernetes.io/projected/d71aa1b9-6eb5-4331-b959-8930e10817b4-kube-api-access-x5q4t\") pod \"prometheus-operator-6c8df6d4b-8kgdq\" (UID: \"d71aa1b9-6eb5-4331-b959-8930e10817b4\") " pod="openshift-monitoring/prometheus-operator-6c8df6d4b-8kgdq" Mar 18 09:04:16.356136 master-0 kubenswrapper[28766]: I0318 09:04:16.351074 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xchll\" (UniqueName: \"kubernetes.io/projected/29ba6765-61c9-4f78-8f44-570418000c5c-kube-api-access-xchll\") pod \"csi-snapshot-controller-64854d9cff-khm5n\" (UID: \"29ba6765-61c9-4f78-8f44-570418000c5c\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-64854d9cff-khm5n" Mar 18 09:04:16.356136 master-0 kubenswrapper[28766]: I0318 09:04:16.351106 28766 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"cluster-storage-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/fc5a9875-d97e-4371-a15d-a1f43b85abce-cluster-storage-operator-serving-cert\") pod \"cluster-storage-operator-7d87854d6-srhr6\" (UID: \"fc5a9875-d97e-4371-a15d-a1f43b85abce\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-7d87854d6-srhr6" Mar 18 09:04:16.356136 master-0 kubenswrapper[28766]: I0318 09:04:16.351143 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/d6fe8ee6-737e-438a-8d9d-1ec712f6bacf-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-6f97756bc8-z9n9c\" (UID: \"d6fe8ee6-737e-438a-8d9d-1ec712f6bacf\") " pod="openshift-machine-api/control-plane-machine-set-operator-6f97756bc8-z9n9c" Mar 18 09:04:16.356136 master-0 kubenswrapper[28766]: I0318 09:04:16.351162 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/91a6fa86-8c58-43bc-a2d4-2b20901269f7-volume-directive-shadow\") pod \"kube-state-metrics-7bbc969446-dblgh\" (UID: \"91a6fa86-8c58-43bc-a2d4-2b20901269f7\") " pod="openshift-monitoring/kube-state-metrics-7bbc969446-dblgh" Mar 18 09:04:16.356136 master-0 kubenswrapper[28766]: I0318 09:04:16.351256 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/2207df9e-f21e-4c30-98d5-248ae99c245e-log-socket\") pod \"ovnkube-node-cxws9\" (UID: \"2207df9e-f21e-4c30-98d5-248ae99c245e\") " pod="openshift-ovn-kubernetes/ovnkube-node-cxws9" Mar 18 09:04:16.356136 master-0 kubenswrapper[28766]: I0318 09:04:16.351284 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-containers\" (UniqueName: 
\"kubernetes.io/host-path/33a5c021-23c3-4a97-b5f3-77fd6dcba1ab-etc-containers\") pod \"operator-controller-controller-manager-57777556ff-chjqr\" (UID: \"33a5c021-23c3-4a97-b5f3-77fd6dcba1ab\") " pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-chjqr" Mar 18 09:04:16.356136 master-0 kubenswrapper[28766]: I0318 09:04:16.351312 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4cc14de2-59e8-49e1-9eeb-f87c5e9d8a75-client-ca\") pod \"controller-manager-6448dc88d8-cnd9q\" (UID: \"4cc14de2-59e8-49e1-9eeb-f87c5e9d8a75\") " pod="openshift-controller-manager/controller-manager-6448dc88d8-cnd9q" Mar 18 09:04:16.356136 master-0 kubenswrapper[28766]: I0318 09:04:16.351335 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/2207df9e-f21e-4c30-98d5-248ae99c245e-env-overrides\") pod \"ovnkube-node-cxws9\" (UID: \"2207df9e-f21e-4c30-98d5-248ae99c245e\") " pod="openshift-ovn-kubernetes/ovnkube-node-cxws9" Mar 18 09:04:16.356136 master-0 kubenswrapper[28766]: I0318 09:04:16.351351 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b5f9f50b-e7b4-4b81-864b-349303f21447-serving-cert\") pod \"apiserver-7bb69b5c5c-djsr9\" (UID: \"b5f9f50b-e7b4-4b81-864b-349303f21447\") " pod="openshift-apiserver/apiserver-7bb69b5c5c-djsr9" Mar 18 09:04:16.356136 master-0 kubenswrapper[28766]: I0318 09:04:16.351421 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/edc7f629-4288-443b-aa8e-78bc6a09c848-ovnkube-config\") pod \"ovnkube-control-plane-57f769d897-bwqt7\" (UID: \"edc7f629-4288-443b-aa8e-78bc6a09c848\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-bwqt7" Mar 18 09:04:16.356136 master-0 kubenswrapper[28766]: I0318 
09:04:16.351465 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/2207df9e-f21e-4c30-98d5-248ae99c245e-env-overrides\") pod \"ovnkube-node-cxws9\" (UID: \"2207df9e-f21e-4c30-98d5-248ae99c245e\") " pod="openshift-ovn-kubernetes/ovnkube-node-cxws9" Mar 18 09:04:16.356136 master-0 kubenswrapper[28766]: I0318 09:04:16.351307 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/7962fb40-1170-4c00-b1bf-92966aeae807-image-registry-operator-tls\") pod \"cluster-image-registry-operator-5549dc66cb-vxsth\" (UID: \"7962fb40-1170-4c00-b1bf-92966aeae807\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-vxsth" Mar 18 09:04:16.356136 master-0 kubenswrapper[28766]: I0318 09:04:16.351566 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/59d50dd5-6793-4f96-a769-31e086ecc7e4-package-server-manager-serving-cert\") pod \"package-server-manager-7b95f86987-q8ff6\" (UID: \"59d50dd5-6793-4f96-a769-31e086ecc7e4\") " pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-q8ff6" Mar 18 09:04:16.356136 master-0 kubenswrapper[28766]: I0318 09:04:16.351612 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4-multus-daemon-config\") pod \"multus-bpf5c\" (UID: \"fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4\") " pod="openshift-multus/multus-bpf5c" Mar 18 09:04:16.356136 master-0 kubenswrapper[28766]: I0318 09:04:16.351645 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/260c8aa5-a288-4ee8-b671-f97e90a2f39c-config\") pod \"kube-controller-manager-operator-ff989d6cc-fxn82\" (UID: 
\"260c8aa5-a288-4ee8-b671-f97e90a2f39c\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-fxn82" Mar 18 09:04:16.356136 master-0 kubenswrapper[28766]: I0318 09:04:16.351656 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f826efe0-60a1-4465-b8d0-d4069ed507a1-host\") pod \"tuned-zzqc6\" (UID: \"f826efe0-60a1-4465-b8d0-d4069ed507a1\") " pod="openshift-cluster-node-tuning-operator/tuned-zzqc6" Mar 18 09:04:16.356136 master-0 kubenswrapper[28766]: I0318 09:04:16.351709 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/939efa41-8f40-4f91-bee4-0425aead9760-etcd-client\") pod \"etcd-operator-8544cbcf9c-f4jvq\" (UID: \"939efa41-8f40-4f91-bee4-0425aead9760\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-f4jvq" Mar 18 09:04:16.356136 master-0 kubenswrapper[28766]: I0318 09:04:16.351735 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec11012b-536a-422f-afc4-d2d0fd4b67fb-config\") pod \"kube-storage-version-migrator-operator-6bb5bfb6fd-s7szg\" (UID: \"ec11012b-536a-422f-afc4-d2d0fd4b67fb\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-s7szg" Mar 18 09:04:16.356136 master-0 kubenswrapper[28766]: I0318 09:04:16.351754 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/07a4fd92-0fd1-4688-b2db-de615d75971e-metrics-tls\") pod \"network-operator-7bd846bfc4-5r5r4\" (UID: \"07a4fd92-0fd1-4688-b2db-de615d75971e\") " pod="openshift-network-operator/network-operator-7bd846bfc4-5r5r4" Mar 18 09:04:16.356136 master-0 kubenswrapper[28766]: I0318 09:04:16.351803 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"cluster-olm-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/e2ade7e6-cecd-4e98-8f85-ea8219303d75-cluster-olm-operator-serving-cert\") pod \"cluster-olm-operator-67dcd4998-zqxv2\" (UID: \"e2ade7e6-cecd-4e98-8f85-ea8219303d75\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-zqxv2" Mar 18 09:04:16.356136 master-0 kubenswrapper[28766]: I0318 09:04:16.352010 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/edc7f629-4288-443b-aa8e-78bc6a09c848-env-overrides\") pod \"ovnkube-control-plane-57f769d897-bwqt7\" (UID: \"edc7f629-4288-443b-aa8e-78bc6a09c848\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-bwqt7" Mar 18 09:04:16.356136 master-0 kubenswrapper[28766]: I0318 09:04:16.352038 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-olm-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/e2ade7e6-cecd-4e98-8f85-ea8219303d75-cluster-olm-operator-serving-cert\") pod \"cluster-olm-operator-67dcd4998-zqxv2\" (UID: \"e2ade7e6-cecd-4e98-8f85-ea8219303d75\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-zqxv2" Mar 18 09:04:16.356136 master-0 kubenswrapper[28766]: I0318 09:04:16.352072 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b35ab145-16a7-4ef1-86e8-0afb6ff469fd-config-volume\") pod \"dns-default-ck7b5\" (UID: \"b35ab145-16a7-4ef1-86e8-0afb6ff469fd\") " pod="openshift-dns/dns-default-ck7b5" Mar 18 09:04:16.356136 master-0 kubenswrapper[28766]: I0318 09:04:16.352162 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/2207df9e-f21e-4c30-98d5-248ae99c245e-node-log\") pod \"ovnkube-node-cxws9\" (UID: \"2207df9e-f21e-4c30-98d5-248ae99c245e\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-cxws9" Mar 18 09:04:16.356136 master-0 kubenswrapper[28766]: I0318 09:04:16.352192 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/edc7f629-4288-443b-aa8e-78bc6a09c848-env-overrides\") pod \"ovnkube-control-plane-57f769d897-bwqt7\" (UID: \"edc7f629-4288-443b-aa8e-78bc6a09c848\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-bwqt7" Mar 18 09:04:16.356136 master-0 kubenswrapper[28766]: I0318 09:04:16.352251 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6bzxp\" (UniqueName: \"kubernetes.io/projected/f826efe0-60a1-4465-b8d0-d4069ed507a1-kube-api-access-6bzxp\") pod \"tuned-zzqc6\" (UID: \"f826efe0-60a1-4465-b8d0-d4069ed507a1\") " pod="openshift-cluster-node-tuning-operator/tuned-zzqc6" Mar 18 09:04:16.356136 master-0 kubenswrapper[28766]: I0318 09:04:16.352272 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec11012b-536a-422f-afc4-d2d0fd4b67fb-config\") pod \"kube-storage-version-migrator-operator-6bb5bfb6fd-s7szg\" (UID: \"ec11012b-536a-422f-afc4-d2d0fd4b67fb\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-s7szg" Mar 18 09:04:16.356136 master-0 kubenswrapper[28766]: I0318 09:04:16.352256 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/34f2ec5e-cb68-415b-a9f2-5b7f10fa9bbe-marketplace-trusted-ca\") pod \"marketplace-operator-89ccd998f-bcwsv\" (UID: \"34f2ec5e-cb68-415b-a9f2-5b7f10fa9bbe\") " pod="openshift-marketplace/marketplace-operator-89ccd998f-bcwsv" Mar 18 09:04:16.356136 master-0 kubenswrapper[28766]: I0318 09:04:16.352348 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/772bc250-2e57-4ce0-883c-d44281fcb0be-serving-cert\") pod \"openshift-controller-manager-operator-8c94f4649-r758j\" (UID: \"772bc250-2e57-4ce0-883c-d44281fcb0be\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-r758j" Mar 18 09:04:16.356136 master-0 kubenswrapper[28766]: I0318 09:04:16.352401 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8a6ab2be-d018-4fd5-bfbb-6b88aec28663-kube-api-access\") pod \"openshift-kube-scheduler-operator-dddff6458-9p4bb\" (UID: \"8a6ab2be-d018-4fd5-bfbb-6b88aec28663\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-9p4bb" Mar 18 09:04:16.356136 master-0 kubenswrapper[28766]: I0318 09:04:16.352458 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-47p9x\" (UniqueName: \"kubernetes.io/projected/7962fb40-1170-4c00-b1bf-92966aeae807-kube-api-access-47p9x\") pod \"cluster-image-registry-operator-5549dc66cb-vxsth\" (UID: \"7962fb40-1170-4c00-b1bf-92966aeae807\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-vxsth" Mar 18 09:04:16.356136 master-0 kubenswrapper[28766]: I0318 09:04:16.352514 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/2207df9e-f21e-4c30-98d5-248ae99c245e-host-run-netns\") pod \"ovnkube-node-cxws9\" (UID: \"2207df9e-f21e-4c30-98d5-248ae99c245e\") " pod="openshift-ovn-kubernetes/ovnkube-node-cxws9" Mar 18 09:04:16.356136 master-0 kubenswrapper[28766]: I0318 09:04:16.352544 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/939efa41-8f40-4f91-bee4-0425aead9760-etcd-ca\") pod \"etcd-operator-8544cbcf9c-f4jvq\" (UID: \"939efa41-8f40-4f91-bee4-0425aead9760\") " 
pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-f4jvq" Mar 18 09:04:16.356136 master-0 kubenswrapper[28766]: I0318 09:04:16.352580 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/92542f7c-182b-45a8-bbf3-00e99ba7acee-utilities\") pod \"community-operators-78szh\" (UID: \"92542f7c-182b-45a8-bbf3-00e99ba7acee\") " pod="openshift-marketplace/community-operators-78szh" Mar 18 09:04:16.356136 master-0 kubenswrapper[28766]: I0318 09:04:16.352682 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/92542f7c-182b-45a8-bbf3-00e99ba7acee-utilities\") pod \"community-operators-78szh\" (UID: \"92542f7c-182b-45a8-bbf3-00e99ba7acee\") " pod="openshift-marketplace/community-operators-78szh" Mar 18 09:04:16.356136 master-0 kubenswrapper[28766]: I0318 09:04:16.352705 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/ccf74af5-d4fd-4ed3-9784-42397ea798c5-cloud-controller-manager-operator-tls\") pod \"cluster-cloud-controller-manager-operator-7dff898856-9xtls\" (UID: \"ccf74af5-d4fd-4ed3-9784-42397ea798c5\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-9xtls" Mar 18 09:04:16.356136 master-0 kubenswrapper[28766]: I0318 09:04:16.352758 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4-host-var-lib-cni-multus\") pod \"multus-bpf5c\" (UID: \"fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4\") " pod="openshift-multus/multus-bpf5c" Mar 18 09:04:16.356136 master-0 kubenswrapper[28766]: I0318 09:04:16.352808 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" 
(UniqueName: \"kubernetes.io/host-path/fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4-hostroot\") pod \"multus-bpf5c\" (UID: \"fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4\") " pod="openshift-multus/multus-bpf5c" Mar 18 09:04:16.356136 master-0 kubenswrapper[28766]: I0318 09:04:16.352832 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/f9fa104a-4979-4023-8d7e-a965f11bc7db-cnibin\") pod \"multus-additional-cni-plugins-xpzrz\" (UID: \"f9fa104a-4979-4023-8d7e-a965f11bc7db\") " pod="openshift-multus/multus-additional-cni-plugins-xpzrz" Mar 18 09:04:16.356136 master-0 kubenswrapper[28766]: I0318 09:04:16.352882 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ftdvp\" (UniqueName: \"kubernetes.io/projected/866c259c-7661-4a80-873b-6fd625218665-kube-api-access-ftdvp\") pod \"iptables-alerter-9mkgd\" (UID: \"866c259c-7661-4a80-873b-6fd625218665\") " pod="openshift-network-operator/iptables-alerter-9mkgd" Mar 18 09:04:16.356136 master-0 kubenswrapper[28766]: I0318 09:04:16.352894 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/939efa41-8f40-4f91-bee4-0425aead9760-etcd-ca\") pod \"etcd-operator-8544cbcf9c-f4jvq\" (UID: \"939efa41-8f40-4f91-bee4-0425aead9760\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-f4jvq" Mar 18 09:04:16.356136 master-0 kubenswrapper[28766]: I0318 09:04:16.352999 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c110b293-2c6b-496b-b015-23aada98cb4b-config\") pod \"authentication-operator-5885bfd7f4-5g8tz\" (UID: \"c110b293-2c6b-496b-b015-23aada98cb4b\") " pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-5g8tz" Mar 18 09:04:16.356136 master-0 kubenswrapper[28766]: I0318 09:04:16.353181 28766 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c110b293-2c6b-496b-b015-23aada98cb4b-config\") pod \"authentication-operator-5885bfd7f4-5g8tz\" (UID: \"c110b293-2c6b-496b-b015-23aada98cb4b\") " pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-5g8tz" Mar 18 09:04:16.356136 master-0 kubenswrapper[28766]: I0318 09:04:16.353260 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4-system-cni-dir\") pod \"multus-bpf5c\" (UID: \"fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4\") " pod="openshift-multus/multus-bpf5c" Mar 18 09:04:16.356136 master-0 kubenswrapper[28766]: I0318 09:04:16.353293 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/04e23989-853e-4b49-ba0f-1961d64ae3a3-client-ca\") pod \"route-controller-manager-75749f878-qxnvp\" (UID: \"04e23989-853e-4b49-ba0f-1961d64ae3a3\") " pod="openshift-route-controller-manager/route-controller-manager-75749f878-qxnvp" Mar 18 09:04:16.356136 master-0 kubenswrapper[28766]: I0318 09:04:16.353315 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5982111d-f4c6-4335-9b40-3142758fc2bc-config\") pod \"kube-apiserver-operator-8b68b9d9b-jshg7\" (UID: \"5982111d-f4c6-4335-9b40-3142758fc2bc\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-jshg7" Mar 18 09:04:16.356136 master-0 kubenswrapper[28766]: I0318 09:04:16.353371 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/97730ec2-e6f1-4f8c-b85c-3c10623d06ce-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-6f69995874-cf6qn\" (UID: \"97730ec2-e6f1-4f8c-b85c-3c10623d06ce\") " 
pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-cf6qn" Mar 18 09:04:16.356136 master-0 kubenswrapper[28766]: I0318 09:04:16.353392 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4-multus-socket-dir-parent\") pod \"multus-bpf5c\" (UID: \"fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4\") " pod="openshift-multus/multus-bpf5c" Mar 18 09:04:16.356136 master-0 kubenswrapper[28766]: I0318 09:04:16.353416 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/f9fa104a-4979-4023-8d7e-a965f11bc7db-os-release\") pod \"multus-additional-cni-plugins-xpzrz\" (UID: \"f9fa104a-4979-4023-8d7e-a965f11bc7db\") " pod="openshift-multus/multus-additional-cni-plugins-xpzrz" Mar 18 09:04:16.356136 master-0 kubenswrapper[28766]: I0318 09:04:16.353441 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8d89af2f-47f5-4ee5-a790-e162c2dee3ce-kube-api-access\") pod \"cluster-version-operator-7d58488df-8btcx\" (UID: \"8d89af2f-47f5-4ee5-a790-e162c2dee3ce\") " pod="openshift-cluster-version/cluster-version-operator-7d58488df-8btcx" Mar 18 09:04:16.356136 master-0 kubenswrapper[28766]: I0318 09:04:16.353504 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b5f9f50b-e7b4-4b81-864b-349303f21447-config\") pod \"apiserver-7bb69b5c5c-djsr9\" (UID: \"b5f9f50b-e7b4-4b81-864b-349303f21447\") " pod="openshift-apiserver/apiserver-7bb69b5c5c-djsr9" Mar 18 09:04:16.356136 master-0 kubenswrapper[28766]: I0318 09:04:16.353534 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-glt6c\" (UniqueName: 
\"kubernetes.io/projected/edc7f629-4288-443b-aa8e-78bc6a09c848-kube-api-access-glt6c\") pod \"ovnkube-control-plane-57f769d897-bwqt7\" (UID: \"edc7f629-4288-443b-aa8e-78bc6a09c848\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-bwqt7" Mar 18 09:04:16.356136 master-0 kubenswrapper[28766]: I0318 09:04:16.353561 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hpl2c\" (UniqueName: \"kubernetes.io/projected/fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4-kube-api-access-hpl2c\") pod \"multus-bpf5c\" (UID: \"fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4\") " pod="openshift-multus/multus-bpf5c" Mar 18 09:04:16.356136 master-0 kubenswrapper[28766]: I0318 09:04:16.353586 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5qrqx\" (UniqueName: \"kubernetes.io/projected/495e0cff-fca8-4dad-9247-2fc0e7ce86fc-kube-api-access-5qrqx\") pod \"machine-approver-5c6485487f-87vpl\" (UID: \"495e0cff-fca8-4dad-9247-2fc0e7ce86fc\") " pod="openshift-cluster-machine-approver/machine-approver-5c6485487f-87vpl" Mar 18 09:04:16.356136 master-0 kubenswrapper[28766]: I0318 09:04:16.353606 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"snapshots\" (UniqueName: \"kubernetes.io/empty-dir/31a92270-efed-44fe-871e-90333235e85f-snapshots\") pod \"insights-operator-68bf6ff9d6-kv7n5\" (UID: \"31a92270-efed-44fe-871e-90333235e85f\") " pod="openshift-insights/insights-operator-68bf6ff9d6-kv7n5" Mar 18 09:04:16.356136 master-0 kubenswrapper[28766]: I0318 09:04:16.353661 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/f9fa104a-4979-4023-8d7e-a965f11bc7db-tuning-conf-dir\") pod \"multus-additional-cni-plugins-xpzrz\" (UID: \"f9fa104a-4979-4023-8d7e-a965f11bc7db\") " pod="openshift-multus/multus-additional-cni-plugins-xpzrz" Mar 18 09:04:16.356136 master-0 
kubenswrapper[28766]: I0318 09:04:16.353713 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vtz82\" (UniqueName: \"kubernetes.io/projected/18921497-d8ed-42d8-bf3c-a027566ebe85-kube-api-access-vtz82\") pod \"cluster-samples-operator-85f7577d78-swcvh\" (UID: \"18921497-d8ed-42d8-bf3c-a027566ebe85\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-85f7577d78-swcvh" Mar 18 09:04:16.356136 master-0 kubenswrapper[28766]: I0318 09:04:16.353803 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ccf74af5-d4fd-4ed3-9784-42397ea798c5-auth-proxy-config\") pod \"cluster-cloud-controller-manager-operator-7dff898856-9xtls\" (UID: \"ccf74af5-d4fd-4ed3-9784-42397ea798c5\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-9xtls" Mar 18 09:04:16.356136 master-0 kubenswrapper[28766]: I0318 09:04:16.353827 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2msp8\" (UniqueName: \"kubernetes.io/projected/34f2ec5e-cb68-415b-a9f2-5b7f10fa9bbe-kube-api-access-2msp8\") pod \"marketplace-operator-89ccd998f-bcwsv\" (UID: \"34f2ec5e-cb68-415b-a9f2-5b7f10fa9bbe\") " pod="openshift-marketplace/marketplace-operator-89ccd998f-bcwsv" Mar 18 09:04:16.356136 master-0 kubenswrapper[28766]: I0318 09:04:16.353871 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jnspk\" (UniqueName: \"kubernetes.io/projected/40f3b7a4-107c-4f1d-a3ab-b5d2309c373b-kube-api-access-jnspk\") pod \"machine-config-operator-84d549f6d5-4hj54\" (UID: \"40f3b7a4-107c-4f1d-a3ab-b5d2309c373b\") " pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-4hj54" Mar 18 09:04:16.356136 master-0 kubenswrapper[28766]: I0318 09:04:16.354177 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"snapshots\" (UniqueName: \"kubernetes.io/empty-dir/31a92270-efed-44fe-871e-90333235e85f-snapshots\") pod \"insights-operator-68bf6ff9d6-kv7n5\" (UID: \"31a92270-efed-44fe-871e-90333235e85f\") " pod="openshift-insights/insights-operator-68bf6ff9d6-kv7n5" Mar 18 09:04:16.356136 master-0 kubenswrapper[28766]: I0318 09:04:16.354239 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/260c8aa5-a288-4ee8-b671-f97e90a2f39c-kube-api-access\") pod \"kube-controller-manager-operator-ff989d6cc-fxn82\" (UID: \"260c8aa5-a288-4ee8-b671-f97e90a2f39c\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-fxn82" Mar 18 09:04:16.356136 master-0 kubenswrapper[28766]: I0318 09:04:16.354254 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5982111d-f4c6-4335-9b40-3142758fc2bc-config\") pod \"kube-apiserver-operator-8b68b9d9b-jshg7\" (UID: \"5982111d-f4c6-4335-9b40-3142758fc2bc\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-jshg7" Mar 18 09:04:16.356136 master-0 kubenswrapper[28766]: I0318 09:04:16.354373 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-djq7n\" (UniqueName: \"kubernetes.io/projected/f65344cd-8571-4a78-927f-eec46ec1af51-kube-api-access-djq7n\") pod \"redhat-marketplace-jg58c\" (UID: \"f65344cd-8571-4a78-927f-eec46ec1af51\") " pod="openshift-marketplace/redhat-marketplace-jg58c" Mar 18 09:04:16.356136 master-0 kubenswrapper[28766]: I0318 09:04:16.354470 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/91a6fa86-8c58-43bc-a2d4-2b20901269f7-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-7bbc969446-dblgh\" (UID: 
\"91a6fa86-8c58-43bc-a2d4-2b20901269f7\") " pod="openshift-monitoring/kube-state-metrics-7bbc969446-dblgh" Mar 18 09:04:16.356136 master-0 kubenswrapper[28766]: I0318 09:04:16.354556 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tp77s\" (UniqueName: \"kubernetes.io/projected/b35ab145-16a7-4ef1-86e8-0afb6ff469fd-kube-api-access-tp77s\") pod \"dns-default-ck7b5\" (UID: \"b35ab145-16a7-4ef1-86e8-0afb6ff469fd\") " pod="openshift-dns/dns-default-ck7b5" Mar 18 09:04:16.356136 master-0 kubenswrapper[28766]: I0318 09:04:16.354603 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Mar 18 09:04:16.356136 master-0 kubenswrapper[28766]: I0318 09:04:16.354607 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2700f537-8f31-4380-a527-3e697a8122cc-serving-cert\") pod \"apiserver-556c8fbcff-5shs8\" (UID: \"2700f537-8f31-4380-a527-3e697a8122cc\") " pod="openshift-oauth-apiserver/apiserver-556c8fbcff-5shs8" Mar 18 09:04:16.356136 master-0 kubenswrapper[28766]: I0318 09:04:16.354674 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2207df9e-f21e-4c30-98d5-248ae99c245e-var-lib-openvswitch\") pod \"ovnkube-node-cxws9\" (UID: \"2207df9e-f21e-4c30-98d5-248ae99c245e\") " pod="openshift-ovn-kubernetes/ovnkube-node-cxws9" Mar 18 09:04:16.356136 master-0 kubenswrapper[28766]: I0318 09:04:16.354761 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4lv7n\" (UniqueName: \"kubernetes.io/projected/92542f7c-182b-45a8-bbf3-00e99ba7acee-kube-api-access-4lv7n\") pod \"community-operators-78szh\" (UID: \"92542f7c-182b-45a8-bbf3-00e99ba7acee\") " pod="openshift-marketplace/community-operators-78szh" Mar 18 09:04:16.356136 master-0 kubenswrapper[28766]: 
I0318 09:04:16.354841 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4-cni-binary-copy\") pod \"multus-bpf5c\" (UID: \"fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4\") " pod="openshift-multus/multus-bpf5c"
Mar 18 09:04:16.356136 master-0 kubenswrapper[28766]: I0318 09:04:16.354956 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gqlhh\" (UniqueName: \"kubernetes.io/projected/68465463-5f2a-4e74-9c34-2706a185f7ea-kube-api-access-gqlhh\") pod \"node-resolver-zwl77\" (UID: \"68465463-5f2a-4e74-9c34-2706a185f7ea\") " pod="openshift-dns/node-resolver-zwl77"
Mar 18 09:04:16.356136 master-0 kubenswrapper[28766]: I0318 09:04:16.354981 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dwrdc\" (UniqueName: \"kubernetes.io/projected/e7b72267-fc08-41ed-a92b-9fca7372aba6-kube-api-access-dwrdc\") pod \"cluster-monitoring-operator-58845fbb57-nc7hf\" (UID: \"e7b72267-fc08-41ed-a92b-9fca7372aba6\") " pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-nc7hf"
Mar 18 09:04:16.356136 master-0 kubenswrapper[28766]: I0318 09:04:16.355033 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/40f3b7a4-107c-4f1d-a3ab-b5d2309c373b-proxy-tls\") pod \"machine-config-operator-84d549f6d5-4hj54\" (UID: \"40f3b7a4-107c-4f1d-a3ab-b5d2309c373b\") " pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-4hj54"
Mar 18 09:04:16.356136 master-0 kubenswrapper[28766]: I0318 09:04:16.355060 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4-host-var-lib-cni-bin\") pod \"multus-bpf5c\" (UID: \"fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4\") " pod="openshift-multus/multus-bpf5c"
Mar 18 09:04:16.356136 master-0 kubenswrapper[28766]: I0318 09:04:16.355107 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/d71aa1b9-6eb5-4331-b959-8930e10817b4-metrics-client-ca\") pod \"prometheus-operator-6c8df6d4b-8kgdq\" (UID: \"d71aa1b9-6eb5-4331-b959-8930e10817b4\") " pod="openshift-monitoring/prometheus-operator-6c8df6d4b-8kgdq"
Mar 18 09:04:16.356136 master-0 kubenswrapper[28766]: I0318 09:04:16.355132 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bpj79\" (UniqueName: \"kubernetes.io/projected/b5f9f50b-e7b4-4b81-864b-349303f21447-kube-api-access-bpj79\") pod \"apiserver-7bb69b5c5c-djsr9\" (UID: \"b5f9f50b-e7b4-4b81-864b-349303f21447\") " pod="openshift-apiserver/apiserver-7bb69b5c5c-djsr9"
Mar 18 09:04:16.356136 master-0 kubenswrapper[28766]: I0318 09:04:16.355206 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mvlvd\" (UniqueName: \"kubernetes.io/projected/fc5a9875-d97e-4371-a15d-a1f43b85abce-kube-api-access-mvlvd\") pod \"cluster-storage-operator-7d87854d6-srhr6\" (UID: \"fc5a9875-d97e-4371-a15d-a1f43b85abce\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-7d87854d6-srhr6"
Mar 18 09:04:16.356136 master-0 kubenswrapper[28766]: I0318 09:04:16.355232 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/772bc250-2e57-4ce0-883c-d44281fcb0be-config\") pod \"openshift-controller-manager-operator-8c94f4649-r758j\" (UID: \"772bc250-2e57-4ce0-883c-d44281fcb0be\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-r758j"
Mar 18 09:04:16.356136 master-0 kubenswrapper[28766]: I0318 09:04:16.355251 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/866c259c-7661-4a80-873b-6fd625218665-host-slash\") pod \"iptables-alerter-9mkgd\" (UID: \"866c259c-7661-4a80-873b-6fd625218665\") " pod="openshift-network-operator/iptables-alerter-9mkgd"
Mar 18 09:04:16.356136 master-0 kubenswrapper[28766]: I0318 09:04:16.355270 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8d89af2f-47f5-4ee5-a790-e162c2dee3ce-serving-cert\") pod \"cluster-version-operator-7d58488df-8btcx\" (UID: \"8d89af2f-47f5-4ee5-a790-e162c2dee3ce\") " pod="openshift-cluster-version/cluster-version-operator-7d58488df-8btcx"
Mar 18 09:04:16.356136 master-0 kubenswrapper[28766]: I0318 09:04:16.355292 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/16d633c5-e0aa-4fb6-83e0-a2e976334406-env-overrides\") pod \"network-node-identity-n5vqx\" (UID: \"16d633c5-e0aa-4fb6-83e0-a2e976334406\") " pod="openshift-network-node-identity/network-node-identity-n5vqx"
Mar 18 09:04:16.356136 master-0 kubenswrapper[28766]: I0318 09:04:16.355317 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dqldd\" (UniqueName: \"kubernetes.io/projected/2700f537-8f31-4380-a527-3e697a8122cc-kube-api-access-dqldd\") pod \"apiserver-556c8fbcff-5shs8\" (UID: \"2700f537-8f31-4380-a527-3e697a8122cc\") " pod="openshift-oauth-apiserver/apiserver-556c8fbcff-5shs8"
Mar 18 09:04:16.356136 master-0 kubenswrapper[28766]: I0318 09:04:16.355338 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/1794b726-5c0d-4a72-8ddd-418a2cbd8ded-webhook-cert\") pod \"packageserver-5f48d895dc-ttr9f\" (UID: \"1794b726-5c0d-4a72-8ddd-418a2cbd8ded\") " pod="openshift-operator-lifecycle-manager/packageserver-5f48d895dc-ttr9f"
Mar 18 09:04:16.356136 master-0 kubenswrapper[28766]: I0318 09:04:16.355617 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5982111d-f4c6-4335-9b40-3142758fc2bc-serving-cert\") pod \"kube-apiserver-operator-8b68b9d9b-jshg7\" (UID: \"5982111d-f4c6-4335-9b40-3142758fc2bc\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-jshg7"
Mar 18 09:04:16.356136 master-0 kubenswrapper[28766]: I0318 09:04:16.355639 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fcf89a76-7a94-46d3-853e-68e986563764-config\") pod \"openshift-apiserver-operator-d65958b8-w4t7x\" (UID: \"fcf89a76-7a94-46d3-853e-68e986563764\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-w4t7x"
Mar 18 09:04:16.356136 master-0 kubenswrapper[28766]: I0318 09:04:16.355639 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/16d633c5-e0aa-4fb6-83e0-a2e976334406-env-overrides\") pod \"network-node-identity-n5vqx\" (UID: \"16d633c5-e0aa-4fb6-83e0-a2e976334406\") " pod="openshift-network-node-identity/network-node-identity-n5vqx"
Mar 18 09:04:16.356136 master-0 kubenswrapper[28766]: I0318 09:04:16.355662 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/43fbd379-dd1e-4287-bd76-fd3ec51cde43-cache\") pod \"catalogd-controller-manager-6864dc98f7-phjp8\" (UID: \"43fbd379-dd1e-4287-bd76-fd3ec51cde43\") " pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-phjp8"
Mar 18 09:04:16.356136 master-0 kubenswrapper[28766]: I0318 09:04:16.355797 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4-cni-binary-copy\") pod \"multus-bpf5c\" (UID: \"fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4\") " pod="openshift-multus/multus-bpf5c"
Mar 18 09:04:16.356136 master-0 kubenswrapper[28766]: I0318 09:04:16.355876 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/43fbd379-dd1e-4287-bd76-fd3ec51cde43-cache\") pod \"catalogd-controller-manager-6864dc98f7-phjp8\" (UID: \"43fbd379-dd1e-4287-bd76-fd3ec51cde43\") " pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-phjp8"
Mar 18 09:04:16.356136 master-0 kubenswrapper[28766]: I0318 09:04:16.355914 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/772bc250-2e57-4ce0-883c-d44281fcb0be-config\") pod \"openshift-controller-manager-operator-8c94f4649-r758j\" (UID: \"772bc250-2e57-4ce0-883c-d44281fcb0be\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-r758j"
Mar 18 09:04:16.356136 master-0 kubenswrapper[28766]: I0318 09:04:16.355993 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2207df9e-f21e-4c30-98d5-248ae99c245e-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-cxws9\" (UID: \"2207df9e-f21e-4c30-98d5-248ae99c245e\") " pod="openshift-ovn-kubernetes/ovnkube-node-cxws9"
Mar 18 09:04:16.356136 master-0 kubenswrapper[28766]: I0318 09:04:16.356112 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5982111d-f4c6-4335-9b40-3142758fc2bc-serving-cert\") pod \"kube-apiserver-operator-8b68b9d9b-jshg7\" (UID: \"5982111d-f4c6-4335-9b40-3142758fc2bc\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-jshg7"
Mar 18 09:04:16.356136 master-0 kubenswrapper[28766]: I0318 09:04:16.356115 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fcf89a76-7a94-46d3-853e-68e986563764-config\") pod \"openshift-apiserver-operator-d65958b8-w4t7x\" (UID: \"fcf89a76-7a94-46d3-853e-68e986563764\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-w4t7x"
Mar 18 09:04:16.356136 master-0 kubenswrapper[28766]: I0318 09:04:16.356168 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/2207df9e-f21e-4c30-98d5-248ae99c245e-ovnkube-script-lib\") pod \"ovnkube-node-cxws9\" (UID: \"2207df9e-f21e-4c30-98d5-248ae99c245e\") " pod="openshift-ovn-kubernetes/ovnkube-node-cxws9"
Mar 18 09:04:16.356136 master-0 kubenswrapper[28766]: I0318 09:04:16.356204 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-tuned\" (UniqueName: \"kubernetes.io/empty-dir/f826efe0-60a1-4465-b8d0-d4069ed507a1-etc-tuned\") pod \"tuned-zzqc6\" (UID: \"f826efe0-60a1-4465-b8d0-d4069ed507a1\") " pod="openshift-cluster-node-tuning-operator/tuned-zzqc6"
Mar 18 09:04:16.356136 master-0 kubenswrapper[28766]: I0318 09:04:16.356265 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5982111d-f4c6-4335-9b40-3142758fc2bc-kube-api-access\") pod \"kube-apiserver-operator-8b68b9d9b-jshg7\" (UID: \"5982111d-f4c6-4335-9b40-3142758fc2bc\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-jshg7"
Mar 18 09:04:16.356136 master-0 kubenswrapper[28766]: I0318 09:04:16.356282 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-tuned\" (UniqueName: \"kubernetes.io/empty-dir/f826efe0-60a1-4465-b8d0-d4069ed507a1-etc-tuned\") pod \"tuned-zzqc6\" (UID: \"f826efe0-60a1-4465-b8d0-d4069ed507a1\") " pod="openshift-cluster-node-tuning-operator/tuned-zzqc6"
Mar 18 09:04:16.360713 master-0 kubenswrapper[28766]: I0318 09:04:16.356320 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4cc14de2-59e8-49e1-9eeb-f87c5e9d8a75-serving-cert\") pod \"controller-manager-6448dc88d8-cnd9q\" (UID: \"4cc14de2-59e8-49e1-9eeb-f87c5e9d8a75\") " pod="openshift-controller-manager/controller-manager-6448dc88d8-cnd9q"
Mar 18 09:04:16.360713 master-0 kubenswrapper[28766]: I0318 09:04:16.356360 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/04e23989-853e-4b49-ba0f-1961d64ae3a3-serving-cert\") pod \"route-controller-manager-75749f878-qxnvp\" (UID: \"04e23989-853e-4b49-ba0f-1961d64ae3a3\") " pod="openshift-route-controller-manager/route-controller-manager-75749f878-qxnvp"
Mar 18 09:04:16.360713 master-0 kubenswrapper[28766]: I0318 09:04:16.356407 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/866c259c-7661-4a80-873b-6fd625218665-iptables-alerter-script\") pod \"iptables-alerter-9mkgd\" (UID: \"866c259c-7661-4a80-873b-6fd625218665\") " pod="openshift-network-operator/iptables-alerter-9mkgd"
Mar 18 09:04:16.360713 master-0 kubenswrapper[28766]: I0318 09:04:16.356450 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/52e32e2d-33ab-4351-ae8a-80acd6077d70-utilities\") pod \"redhat-operators-pk9z9\" (UID: \"52e32e2d-33ab-4351-ae8a-80acd6077d70\") " pod="openshift-marketplace/redhat-operators-pk9z9"
Mar 18 09:04:16.360713 master-0 kubenswrapper[28766]: I0318 09:04:16.356488 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cj9fr\" (UniqueName: \"kubernetes.io/projected/2207df9e-f21e-4c30-98d5-248ae99c245e-kube-api-access-cj9fr\") pod \"ovnkube-node-cxws9\" (UID: \"2207df9e-f21e-4c30-98d5-248ae99c245e\") " pod="openshift-ovn-kubernetes/ovnkube-node-cxws9"
Mar 18 09:04:16.360713 master-0 kubenswrapper[28766]: I0318 09:04:16.356497 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/2207df9e-f21e-4c30-98d5-248ae99c245e-ovnkube-script-lib\") pod \"ovnkube-node-cxws9\" (UID: \"2207df9e-f21e-4c30-98d5-248ae99c245e\") " pod="openshift-ovn-kubernetes/ovnkube-node-cxws9"
Mar 18 09:04:16.360713 master-0 kubenswrapper[28766]: I0318 09:04:16.356509 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-sysconfig\" (UniqueName: \"kubernetes.io/host-path/f826efe0-60a1-4465-b8d0-d4069ed507a1-etc-sysconfig\") pod \"tuned-zzqc6\" (UID: \"f826efe0-60a1-4465-b8d0-d4069ed507a1\") " pod="openshift-cluster-node-tuning-operator/tuned-zzqc6"
Mar 18 09:04:16.360713 master-0 kubenswrapper[28766]: I0318 09:04:16.356550 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x6zq8\" (UniqueName: \"kubernetes.io/projected/d858bfd6-e69b-4c93-a6d7-95cc0fc3ca29-kube-api-access-x6zq8\") pod \"network-metrics-daemon-6x85n\" (UID: \"d858bfd6-e69b-4c93-a6d7-95cc0fc3ca29\") " pod="openshift-multus/network-metrics-daemon-6x85n"
Mar 18 09:04:16.360713 master-0 kubenswrapper[28766]: I0318 09:04:16.356576 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/866c259c-7661-4a80-873b-6fd625218665-iptables-alerter-script\") pod \"iptables-alerter-9mkgd\" (UID: \"866c259c-7661-4a80-873b-6fd625218665\") " pod="openshift-network-operator/iptables-alerter-9mkgd"
Mar 18 09:04:16.360713 master-0 kubenswrapper[28766]: I0318 09:04:16.356584 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wxxcn\" (UniqueName: \"kubernetes.io/projected/6fb1f871-9c24-48a1-a15a-a636b5bb687d-kube-api-access-wxxcn\") pod \"csi-snapshot-controller-operator-5f5d689c6b-j8kgj\" (UID: \"6fb1f871-9c24-48a1-a15a-a636b5bb687d\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-5f5d689c6b-j8kgj"
Mar 18 09:04:16.360713 master-0 kubenswrapper[28766]: I0318 09:04:16.356611 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hfzdp\" (UniqueName: \"kubernetes.io/projected/a268d595-18c2-43a2-8ed5-eb64c76c490f-kube-api-access-hfzdp\") pod \"certified-operators-vng9w\" (UID: \"a268d595-18c2-43a2-8ed5-eb64c76c490f\") " pod="openshift-marketplace/certified-operators-vng9w"
Mar 18 09:04:16.360713 master-0 kubenswrapper[28766]: I0318 09:04:16.356659 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/52e32e2d-33ab-4351-ae8a-80acd6077d70-utilities\") pod \"redhat-operators-pk9z9\" (UID: \"52e32e2d-33ab-4351-ae8a-80acd6077d70\") " pod="openshift-marketplace/redhat-operators-pk9z9"
Mar 18 09:04:16.360713 master-0 kubenswrapper[28766]: I0318 09:04:16.356735 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4-host-run-netns\") pod \"multus-bpf5c\" (UID: \"fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4\") " pod="openshift-multus/multus-bpf5c"
Mar 18 09:04:16.360713 master-0 kubenswrapper[28766]: I0318 09:04:16.356767 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/91a6fa86-8c58-43bc-a2d4-2b20901269f7-kube-state-metrics-tls\") pod \"kube-state-metrics-7bbc969446-dblgh\" (UID: \"91a6fa86-8c58-43bc-a2d4-2b20901269f7\") " pod="openshift-monitoring/kube-state-metrics-7bbc969446-dblgh"
Mar 18 09:04:16.360713 master-0 kubenswrapper[28766]: I0318 09:04:16.356793 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-credential-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/e64ea71a-1e89-409a-9607-4d3cea093643-cloud-credential-operator-serving-cert\") pod \"cloud-credential-operator-744f9dbf77-v8ft8\" (UID: \"e64ea71a-1e89-409a-9607-4d3cea093643\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-744f9dbf77-v8ft8"
Mar 18 09:04:16.360713 master-0 kubenswrapper[28766]: I0318 09:04:16.356885 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f826efe0-60a1-4465-b8d0-d4069ed507a1-lib-modules\") pod \"tuned-zzqc6\" (UID: \"f826efe0-60a1-4465-b8d0-d4069ed507a1\") " pod="openshift-cluster-node-tuning-operator/tuned-zzqc6"
Mar 18 09:04:16.360713 master-0 kubenswrapper[28766]: I0318 09:04:16.356913 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/f826efe0-60a1-4465-b8d0-d4069ed507a1-var-lib-kubelet\") pod \"tuned-zzqc6\" (UID: \"f826efe0-60a1-4465-b8d0-d4069ed507a1\") " pod="openshift-cluster-node-tuning-operator/tuned-zzqc6"
Mar 18 09:04:16.360713 master-0 kubenswrapper[28766]: I0318 09:04:16.357532 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/b5f9f50b-e7b4-4b81-864b-349303f21447-etcd-client\") pod \"apiserver-7bb69b5c5c-djsr9\" (UID: \"b5f9f50b-e7b4-4b81-864b-349303f21447\") " pod="openshift-apiserver/apiserver-7bb69b5c5c-djsr9"
Mar 18 09:04:16.360713 master-0 kubenswrapper[28766]: I0318 09:04:16.357565 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/18921497-d8ed-42d8-bf3c-a027566ebe85-samples-operator-tls\") pod \"cluster-samples-operator-85f7577d78-swcvh\" (UID: \"18921497-d8ed-42d8-bf3c-a027566ebe85\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-85f7577d78-swcvh"
Mar 18 09:04:16.360713 master-0 kubenswrapper[28766]: I0318 09:04:16.357592 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/16d633c5-e0aa-4fb6-83e0-a2e976334406-ovnkube-identity-cm\") pod \"network-node-identity-n5vqx\" (UID: \"16d633c5-e0aa-4fb6-83e0-a2e976334406\") " pod="openshift-network-node-identity/network-node-identity-n5vqx"
Mar 18 09:04:16.360713 master-0 kubenswrapper[28766]: I0318 09:04:16.357615 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fcf89a76-7a94-46d3-853e-68e986563764-serving-cert\") pod \"openshift-apiserver-operator-d65958b8-w4t7x\" (UID: \"fcf89a76-7a94-46d3-853e-68e986563764\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-w4t7x"
Mar 18 09:04:16.360713 master-0 kubenswrapper[28766]: I0318 09:04:16.357639 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b0280499-8277-46f0-bd8c-058a47a99e19-serving-cert\") pod \"service-ca-operator-b865698dc-g2lc8\" (UID: \"b0280499-8277-46f0-bd8c-058a47a99e19\") " pod="openshift-service-ca-operator/service-ca-operator-b865698dc-g2lc8"
Mar 18 09:04:16.360713 master-0 kubenswrapper[28766]: I0318 09:04:16.357664 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/b35ab145-16a7-4ef1-86e8-0afb6ff469fd-metrics-tls\") pod \"dns-default-ck7b5\" (UID: \"b35ab145-16a7-4ef1-86e8-0afb6ff469fd\") " pod="openshift-dns/dns-default-ck7b5"
Mar 18 09:04:16.360713 master-0 kubenswrapper[28766]: I0318 09:04:16.357693 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/33a5c021-23c3-4a97-b5f3-77fd6dcba1ab-cache\") pod \"operator-controller-controller-manager-57777556ff-chjqr\" (UID: \"33a5c021-23c3-4a97-b5f3-77fd6dcba1ab\") " pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-chjqr"
Mar 18 09:04:16.360713 master-0 kubenswrapper[28766]: I0318 09:04:16.357720 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4vfrs\" (UniqueName: \"kubernetes.io/projected/ffc5379c-651f-490c-90f4-1285b9093596-kube-api-access-4vfrs\") pod \"cluster-autoscaler-operator-866dc4744-lxj7x\" (UID: \"ffc5379c-651f-490c-90f4-1285b9093596\") " pod="openshift-machine-api/cluster-autoscaler-operator-866dc4744-lxj7x"
Mar 18 09:04:16.360713 master-0 kubenswrapper[28766]: I0318 09:04:16.357744 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/68465463-5f2a-4e74-9c34-2706a185f7ea-hosts-file\") pod \"node-resolver-zwl77\" (UID: \"68465463-5f2a-4e74-9c34-2706a185f7ea\") " pod="openshift-dns/node-resolver-zwl77"
Mar 18 09:04:16.360713 master-0 kubenswrapper[28766]: I0318 09:04:16.357769 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b065df33-7911-456e-b3a2-1f8c8d53e053-srv-cert\") pod \"catalog-operator-68f85b4d6c-swdsh\" (UID: \"b065df33-7911-456e-b3a2-1f8c8d53e053\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-swdsh"
Mar 18 09:04:16.360713 master-0 kubenswrapper[28766]: I0318 09:04:16.359270 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hbsfs\" (UniqueName: \"kubernetes.io/projected/336e741d-ac9a-4b94-9fbb-c9010e37c2d0-kube-api-access-hbsfs\") pod \"machine-config-controller-b4f87c5b9-nm47n\" (UID: \"336e741d-ac9a-4b94-9fbb-c9010e37c2d0\") " pod="openshift-machine-config-operator/machine-config-controller-b4f87c5b9-nm47n"
Mar 18 09:04:16.360713 master-0 kubenswrapper[28766]: I0318 09:04:16.359308 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-svdhs\" (UniqueName: \"kubernetes.io/projected/ec11012b-536a-422f-afc4-d2d0fd4b67fb-kube-api-access-svdhs\") pod \"kube-storage-version-migrator-operator-6bb5bfb6fd-s7szg\" (UID: \"ec11012b-536a-422f-afc4-d2d0fd4b67fb\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-s7szg"
Mar 18 09:04:16.360713 master-0 kubenswrapper[28766]: I0318 09:04:16.359339 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/43fbd379-dd1e-4287-bd76-fd3ec51cde43-etc-containers\") pod \"catalogd-controller-manager-6864dc98f7-phjp8\" (UID: \"43fbd379-dd1e-4287-bd76-fd3ec51cde43\") " pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-phjp8"
Mar 18 09:04:16.360713 master-0 kubenswrapper[28766]: I0318 09:04:16.359369 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/33a5c021-23c3-4a97-b5f3-77fd6dcba1ab-etc-docker\") pod \"operator-controller-controller-manager-57777556ff-chjqr\" (UID: \"33a5c021-23c3-4a97-b5f3-77fd6dcba1ab\") " pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-chjqr"
Mar 18 09:04:16.360713 master-0 kubenswrapper[28766]: I0318 09:04:16.359401 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c110b293-2c6b-496b-b015-23aada98cb4b-serving-cert\") pod \"authentication-operator-5885bfd7f4-5g8tz\" (UID: \"c110b293-2c6b-496b-b015-23aada98cb4b\") " pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-5g8tz"
Mar 18 09:04:16.360713 master-0 kubenswrapper[28766]: I0318 09:04:16.359431 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/31a92270-efed-44fe-871e-90333235e85f-serving-cert\") pod \"insights-operator-68bf6ff9d6-kv7n5\" (UID: \"31a92270-efed-44fe-871e-90333235e85f\") " pod="openshift-insights/insights-operator-68bf6ff9d6-kv7n5"
Mar 18 09:04:16.360713 master-0 kubenswrapper[28766]: I0318 09:04:16.359460 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/43fbd379-dd1e-4287-bd76-fd3ec51cde43-ca-certs\") pod \"catalogd-controller-manager-6864dc98f7-phjp8\" (UID: \"43fbd379-dd1e-4287-bd76-fd3ec51cde43\") " pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-phjp8"
Mar 18 09:04:16.360713 master-0 kubenswrapper[28766]: I0318 09:04:16.359488 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pz26d\" (UniqueName: \"kubernetes.io/projected/b065df33-7911-456e-b3a2-1f8c8d53e053-kube-api-access-pz26d\") pod \"catalog-operator-68f85b4d6c-swdsh\" (UID: \"b065df33-7911-456e-b3a2-1f8c8d53e053\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-swdsh"
Mar 18 09:04:16.360713 master-0 kubenswrapper[28766]: I0318 09:04:16.359519 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/573d3a02-e395-4816-963a-cd614ef53f75-serving-cert\") pod \"openshift-config-operator-95bf4f4d-7kfrh\" (UID: \"573d3a02-e395-4816-963a-cd614ef53f75\") " pod="openshift-config-operator/openshift-config-operator-95bf4f4d-7kfrh"
Mar 18 09:04:16.360713 master-0 kubenswrapper[28766]: I0318 09:04:16.359550 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/b5f9f50b-e7b4-4b81-864b-349303f21447-node-pullsecrets\") pod \"apiserver-7bb69b5c5c-djsr9\" (UID: \"b5f9f50b-e7b4-4b81-864b-349303f21447\") " pod="openshift-apiserver/apiserver-7bb69b5c5c-djsr9"
Mar 18 09:04:16.360713 master-0 kubenswrapper[28766]: I0318 09:04:16.359577 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/3e96b35f-c57a-4e01-82f7-894ea16ac5b8-node-bootstrap-token\") pod \"machine-config-server-2jsz9\" (UID: \"3e96b35f-c57a-4e01-82f7-894ea16ac5b8\") " pod="openshift-machine-config-operator/machine-config-server-2jsz9"
Mar 18 09:04:16.360713 master-0 kubenswrapper[28766]: I0318 09:04:16.359607 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c110b293-2c6b-496b-b015-23aada98cb4b-trusted-ca-bundle\") pod \"authentication-operator-5885bfd7f4-5g8tz\" (UID: \"c110b293-2c6b-496b-b015-23aada98cb4b\") " pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-5g8tz"
Mar 18 09:04:16.360713 master-0 kubenswrapper[28766]: I0318 09:04:16.359633 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4-multus-cni-dir\") pod \"multus-bpf5c\" (UID: \"fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4\") " pod="openshift-multus/multus-bpf5c"
Mar 18 09:04:16.360713 master-0 kubenswrapper[28766]: I0318 09:04:16.359664 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d2bwv\" (UniqueName: \"kubernetes.io/projected/8ebaeb8d-8fbd-4638-9516-fc4e90ba2fa8-kube-api-access-d2bwv\") pod \"migrator-8487694857-ld5l8\" (UID: \"8ebaeb8d-8fbd-4638-9516-fc4e90ba2fa8\") " pod="openshift-kube-storage-version-migrator/migrator-8487694857-ld5l8"
Mar 18 09:04:16.360713 master-0 kubenswrapper[28766]: I0318 09:04:16.359692 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bw5tw\" (UniqueName: \"kubernetes.io/projected/b9768e50-c883-47b0-b319-851fa53ac19a-kube-api-access-bw5tw\") pod \"machine-api-operator-6fbb6cf6f9-z6nw9\" (UID: \"b9768e50-c883-47b0-b319-851fa53ac19a\") " pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-z6nw9"
Mar 18 09:04:16.360713 master-0 kubenswrapper[28766]: I0318 09:04:16.359720 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ffc5379c-651f-490c-90f4-1285b9093596-auth-proxy-config\") pod \"cluster-autoscaler-operator-866dc4744-lxj7x\" (UID: \"ffc5379c-651f-490c-90f4-1285b9093596\") " pod="openshift-machine-api/cluster-autoscaler-operator-866dc4744-lxj7x"
Mar 18 09:04:16.360713 master-0 kubenswrapper[28766]: I0318 09:04:16.359744 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b5f9f50b-e7b4-4b81-864b-349303f21447-audit-dir\") pod \"apiserver-7bb69b5c5c-djsr9\" (UID: \"b5f9f50b-e7b4-4b81-864b-349303f21447\") " pod="openshift-apiserver/apiserver-7bb69b5c5c-djsr9"
Mar 18 09:04:16.360713 master-0 kubenswrapper[28766]: I0318 09:04:16.359771 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c52pj\" (UniqueName: \"kubernetes.io/projected/43fbd379-dd1e-4287-bd76-fd3ec51cde43-kube-api-access-c52pj\") pod \"catalogd-controller-manager-6864dc98f7-phjp8\" (UID: \"43fbd379-dd1e-4287-bd76-fd3ec51cde43\") " pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-phjp8"
Mar 18 09:04:16.360713 master-0 kubenswrapper[28766]: I0318 09:04:16.359796 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7962fb40-1170-4c00-b1bf-92966aeae807-bound-sa-token\") pod \"cluster-image-registry-operator-5549dc66cb-vxsth\" (UID: \"7962fb40-1170-4c00-b1bf-92966aeae807\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-vxsth"
Mar 18 09:04:16.360713 master-0 kubenswrapper[28766]: I0318 09:04:16.360349 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4-host-run-multus-certs\") pod \"multus-bpf5c\" (UID: \"fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4\") " pod="openshift-multus/multus-bpf5c"
Mar 18 09:04:16.360713 master-0 kubenswrapper[28766]: I0318 09:04:16.360377 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/2207df9e-f21e-4c30-98d5-248ae99c245e-host-kubelet\") pod \"ovnkube-node-cxws9\" (UID: \"2207df9e-f21e-4c30-98d5-248ae99c245e\") " pod="openshift-ovn-kubernetes/ovnkube-node-cxws9"
Mar 18 09:04:16.360713 master-0 kubenswrapper[28766]: I0318 09:04:16.360413 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bd8ee9ae-4765-46bc-84d7-b5857fc3fb4a-trusted-ca\") pod \"cluster-node-tuning-operator-598fbc5f8f-tj9b9\" (UID: \"bd8ee9ae-4765-46bc-84d7-b5857fc3fb4a\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-tj9b9"
Mar 18 09:04:16.360713 master-0 kubenswrapper[28766]: I0318 09:04:16.360458 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/3d9fe248-ba87-47e3-911a-1b2b112b5683-srv-cert\") pod \"olm-operator-5c9796789-sl5kr\" (UID: \"3d9fe248-ba87-47e3-911a-1b2b112b5683\") " pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-sl5kr"
Mar 18 09:04:16.360713 master-0 kubenswrapper[28766]: I0318 09:04:16.360508 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/ccf74af5-d4fd-4ed3-9784-42397ea798c5-images\") pod \"cluster-cloud-controller-manager-operator-7dff898856-9xtls\" (UID: \"ccf74af5-d4fd-4ed3-9784-42397ea798c5\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-9xtls"
Mar 18 09:04:16.360713 master-0 kubenswrapper[28766]: I0318 09:04:16.360551 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tk9jq\" (UniqueName: \"kubernetes.io/projected/94e9e4fc-bfaa-4ba5-ad6d-fe76b91932e9-kube-api-access-tk9jq\") pod \"ingress-operator-66b84d69b-7h94d\" (UID: \"94e9e4fc-bfaa-4ba5-ad6d-fe76b91932e9\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-7h94d"
Mar 18 09:04:16.360713 master-0 kubenswrapper[28766]: I0318 09:04:16.360583 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/f9fa104a-4979-4023-8d7e-a965f11bc7db-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-xpzrz\" (UID: \"f9fa104a-4979-4023-8d7e-a965f11bc7db\") " pod="openshift-multus/multus-additional-cni-plugins-xpzrz"
Mar 18 09:04:16.360713 master-0 kubenswrapper[28766]: I0318 09:04:16.360615 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2m5wf\" (UniqueName: \"kubernetes.io/projected/4cc14de2-59e8-49e1-9eeb-f87c5e9d8a75-kube-api-access-2m5wf\") pod \"controller-manager-6448dc88d8-cnd9q\" (UID: \"4cc14de2-59e8-49e1-9eeb-f87c5e9d8a75\") " pod="openshift-controller-manager/controller-manager-6448dc88d8-cnd9q"
Mar 18 09:04:16.360713 master-0 kubenswrapper[28766]: I0318 09:04:16.358084 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b0280499-8277-46f0-bd8c-058a47a99e19-serving-cert\") pod \"service-ca-operator-b865698dc-g2lc8\" (UID: \"b0280499-8277-46f0-bd8c-058a47a99e19\") " pod="openshift-service-ca-operator/service-ca-operator-b865698dc-g2lc8"
Mar 18 09:04:16.360713 master-0 kubenswrapper[28766]: I0318 09:04:16.360646 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/495e0cff-fca8-4dad-9247-2fc0e7ce86fc-config\") pod \"machine-approver-5c6485487f-87vpl\" (UID: \"495e0cff-fca8-4dad-9247-2fc0e7ce86fc\") " pod="openshift-cluster-machine-approver/machine-approver-5c6485487f-87vpl"
Mar 18 09:04:16.360713 master-0 kubenswrapper[28766]: I0318 09:04:16.360707 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/a7dab805-612b-404c-ab97-8cee927169db-rootfs\") pod \"machine-config-daemon-qsj46\" (UID: \"a7dab805-612b-404c-ab97-8cee927169db\") " pod="openshift-machine-config-operator/machine-config-daemon-qsj46"
Mar 18 09:04:16.360713 master-0 kubenswrapper[28766]: I0318 09:04:16.360773 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/fa8f1797-0219-49fe-82b5-7416cc481c3a-signing-cabundle\") pod \"service-ca-79bc6b8d76-5jj7d\" (UID: \"fa8f1797-0219-49fe-82b5-7416cc481c3a\") " pod="openshift-service-ca/service-ca-79bc6b8d76-5jj7d"
Mar 18 09:04:16.365677 master-0 kubenswrapper[28766]: I0318 09:04:16.360806 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-czm78\" (UniqueName: \"kubernetes.io/projected/d6fe8ee6-737e-438a-8d9d-1ec712f6bacf-kube-api-access-czm78\") pod \"control-plane-machine-set-operator-6f97756bc8-z9n9c\" (UID: \"d6fe8ee6-737e-438a-8d9d-1ec712f6bacf\") " pod="openshift-machine-api/control-plane-machine-set-operator-6f97756bc8-z9n9c"
Mar 18 09:04:16.365677 master-0 kubenswrapper[28766]: I0318 09:04:16.360860 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/fa8f1797-0219-49fe-82b5-7416cc481c3a-signing-key\") pod \"service-ca-79bc6b8d76-5jj7d\" (UID: \"fa8f1797-0219-49fe-82b5-7416cc481c3a\") " pod="openshift-service-ca/service-ca-79bc6b8d76-5jj7d"
Mar 18 09:04:16.365677 master-0 kubenswrapper[28766]: I0318 09:04:16.360890 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/d71aa1b9-6eb5-4331-b959-8930e10817b4-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-6c8df6d4b-8kgdq\" (UID: \"d71aa1b9-6eb5-4331-b959-8930e10817b4\") " pod="openshift-monitoring/prometheus-operator-6c8df6d4b-8kgdq"
Mar 18 09:04:16.365677 master-0 kubenswrapper[28766]: I0318 09:04:16.360956 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4-etc-kubernetes\") pod \"multus-bpf5c\" (UID: \"fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4\") " pod="openshift-multus/multus-bpf5c"
Mar 18 09:04:16.365677 master-0 kubenswrapper[28766]: I0318 09:04:16.360961 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/573d3a02-e395-4816-963a-cd614ef53f75-serving-cert\") pod \"openshift-config-operator-95bf4f4d-7kfrh\" (UID: \"573d3a02-e395-4816-963a-cd614ef53f75\") " pod="openshift-config-operator/openshift-config-operator-95bf4f4d-7kfrh"
Mar 18 09:04:16.365677 master-0 kubenswrapper[28766]: I0318 09:04:16.361000 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/31a92270-efed-44fe-871e-90333235e85f-service-ca-bundle\") pod \"insights-operator-68bf6ff9d6-kv7n5\" (UID: \"31a92270-efed-44fe-871e-90333235e85f\") " pod="openshift-insights/insights-operator-68bf6ff9d6-kv7n5"
Mar 18 09:04:16.365677 master-0 kubenswrapper[28766]: I0318 09:04:16.361031 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/336e741d-ac9a-4b94-9fbb-c9010e37c2d0-proxy-tls\") pod \"machine-config-controller-b4f87c5b9-nm47n\" (UID: \"336e741d-ac9a-4b94-9fbb-c9010e37c2d0\") " pod="openshift-machine-config-operator/machine-config-controller-b4f87c5b9-nm47n"
Mar 18 09:04:16.365677 master-0 kubenswrapper[28766]: I0318 09:04:16.361052 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/52e32e2d-33ab-4351-ae8a-80acd6077d70-catalog-content\") pod \"redhat-operators-pk9z9\" (UID: \"52e32e2d-33ab-4351-ae8a-80acd6077d70\") " pod="openshift-marketplace/redhat-operators-pk9z9"
Mar 18 09:04:16.365677 master-0 kubenswrapper[28766]: I0318 09:04:16.361090 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-sysctl-d\" (UniqueName: \"kubernetes.io/host-path/f826efe0-60a1-4465-b8d0-d4069ed507a1-etc-sysctl-d\") pod \"tuned-zzqc6\" (UID: \"f826efe0-60a1-4465-b8d0-d4069ed507a1\") " pod="openshift-cluster-node-tuning-operator/tuned-zzqc6"
Mar 18 09:04:16.365677 master-0 kubenswrapper[28766]: I0318 09:04:16.361114 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l7lrl\" (UniqueName: \"kubernetes.io/projected/fc289a83-9a2e-404b-b148-605639362703-kube-api-access-l7lrl\") pod \"network-check-target-8b7l7\" (UID: \"fc289a83-9a2e-404b-b148-605639362703\") " pod="openshift-network-diagnostics/network-check-target-8b7l7"
Mar 18 09:04:16.365677 master-0 kubenswrapper[28766]: I0318 09:04:16.361135 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume
\"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/b5f9f50b-e7b4-4b81-864b-349303f21447-etcd-serving-ca\") pod \"apiserver-7bb69b5c5c-djsr9\" (UID: \"b5f9f50b-e7b4-4b81-864b-349303f21447\") " pod="openshift-apiserver/apiserver-7bb69b5c5c-djsr9" Mar 18 09:04:16.365677 master-0 kubenswrapper[28766]: I0318 09:04:16.361177 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/b5f9f50b-e7b4-4b81-864b-349303f21447-image-import-ca\") pod \"apiserver-7bb69b5c5c-djsr9\" (UID: \"b5f9f50b-e7b4-4b81-864b-349303f21447\") " pod="openshift-apiserver/apiserver-7bb69b5c5c-djsr9" Mar 18 09:04:16.365677 master-0 kubenswrapper[28766]: I0318 09:04:16.361201 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/260c8aa5-a288-4ee8-b671-f97e90a2f39c-serving-cert\") pod \"kube-controller-manager-operator-ff989d6cc-fxn82\" (UID: \"260c8aa5-a288-4ee8-b671-f97e90a2f39c\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-fxn82" Mar 18 09:04:16.365677 master-0 kubenswrapper[28766]: I0318 09:04:16.361241 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/e025d334-20e7-491f-8027-194251398747-metrics-tls\") pod \"dns-operator-9c5679d8f-b9pn7\" (UID: \"e025d334-20e7-491f-8027-194251398747\") " pod="openshift-dns-operator/dns-operator-9c5679d8f-b9pn7" Mar 18 09:04:16.365677 master-0 kubenswrapper[28766]: I0318 09:04:16.361323 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c110b293-2c6b-496b-b015-23aada98cb4b-trusted-ca-bundle\") pod \"authentication-operator-5885bfd7f4-5g8tz\" (UID: \"c110b293-2c6b-496b-b015-23aada98cb4b\") " pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-5g8tz" Mar 18 
09:04:16.365677 master-0 kubenswrapper[28766]: I0318 09:04:16.361328 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/43fbd379-dd1e-4287-bd76-fd3ec51cde43-etc-docker\") pod \"catalogd-controller-manager-6864dc98f7-phjp8\" (UID: \"43fbd379-dd1e-4287-bd76-fd3ec51cde43\") " pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-phjp8" Mar 18 09:04:16.365677 master-0 kubenswrapper[28766]: I0318 09:04:16.358265 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/b5f9f50b-e7b4-4b81-864b-349303f21447-etcd-client\") pod \"apiserver-7bb69b5c5c-djsr9\" (UID: \"b5f9f50b-e7b4-4b81-864b-349303f21447\") " pod="openshift-apiserver/apiserver-7bb69b5c5c-djsr9" Mar 18 09:04:16.365677 master-0 kubenswrapper[28766]: I0318 09:04:16.361421 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/e7b72267-fc08-41ed-a92b-9fca7372aba6-telemetry-config\") pod \"cluster-monitoring-operator-58845fbb57-nc7hf\" (UID: \"e7b72267-fc08-41ed-a92b-9fca7372aba6\") " pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-nc7hf" Mar 18 09:04:16.365677 master-0 kubenswrapper[28766]: I0318 09:04:16.358509 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/16d633c5-e0aa-4fb6-83e0-a2e976334406-ovnkube-identity-cm\") pod \"network-node-identity-n5vqx\" (UID: \"16d633c5-e0aa-4fb6-83e0-a2e976334406\") " pod="openshift-network-node-identity/network-node-identity-n5vqx" Mar 18 09:04:16.365677 master-0 kubenswrapper[28766]: I0318 09:04:16.358720 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fcf89a76-7a94-46d3-853e-68e986563764-serving-cert\") pod 
\"openshift-apiserver-operator-d65958b8-w4t7x\" (UID: \"fcf89a76-7a94-46d3-853e-68e986563764\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-w4t7x" Mar 18 09:04:16.365677 master-0 kubenswrapper[28766]: I0318 09:04:16.358797 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/33a5c021-23c3-4a97-b5f3-77fd6dcba1ab-cache\") pod \"operator-controller-controller-manager-57777556ff-chjqr\" (UID: \"33a5c021-23c3-4a97-b5f3-77fd6dcba1ab\") " pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-chjqr" Mar 18 09:04:16.365677 master-0 kubenswrapper[28766]: I0318 09:04:16.359175 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b065df33-7911-456e-b3a2-1f8c8d53e053-srv-cert\") pod \"catalog-operator-68f85b4d6c-swdsh\" (UID: \"b065df33-7911-456e-b3a2-1f8c8d53e053\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-swdsh" Mar 18 09:04:16.365677 master-0 kubenswrapper[28766]: I0318 09:04:16.361735 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/1794b726-5c0d-4a72-8ddd-418a2cbd8ded-apiservice-cert\") pod \"packageserver-5f48d895dc-ttr9f\" (UID: \"1794b726-5c0d-4a72-8ddd-418a2cbd8ded\") " pod="openshift-operator-lifecycle-manager/packageserver-5f48d895dc-ttr9f" Mar 18 09:04:16.365677 master-0 kubenswrapper[28766]: I0318 09:04:16.361752 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/b5f9f50b-e7b4-4b81-864b-349303f21447-etcd-serving-ca\") pod \"apiserver-7bb69b5c5c-djsr9\" (UID: \"b5f9f50b-e7b4-4b81-864b-349303f21447\") " pod="openshift-apiserver/apiserver-7bb69b5c5c-djsr9" Mar 18 09:04:16.365677 master-0 kubenswrapper[28766]: I0318 09:04:16.361830 28766 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/e7b72267-fc08-41ed-a92b-9fca7372aba6-telemetry-config\") pod \"cluster-monitoring-operator-58845fbb57-nc7hf\" (UID: \"e7b72267-fc08-41ed-a92b-9fca7372aba6\") " pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-nc7hf" Mar 18 09:04:16.365677 master-0 kubenswrapper[28766]: I0318 09:04:16.361985 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/e025d334-20e7-491f-8027-194251398747-metrics-tls\") pod \"dns-operator-9c5679d8f-b9pn7\" (UID: \"e025d334-20e7-491f-8027-194251398747\") " pod="openshift-dns-operator/dns-operator-9c5679d8f-b9pn7" Mar 18 09:04:16.365677 master-0 kubenswrapper[28766]: I0318 09:04:16.362146 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/52e32e2d-33ab-4351-ae8a-80acd6077d70-catalog-content\") pod \"redhat-operators-pk9z9\" (UID: \"52e32e2d-33ab-4351-ae8a-80acd6077d70\") " pod="openshift-marketplace/redhat-operators-pk9z9" Mar 18 09:04:16.365677 master-0 kubenswrapper[28766]: I0318 09:04:16.362179 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/260c8aa5-a288-4ee8-b671-f97e90a2f39c-serving-cert\") pod \"kube-controller-manager-operator-ff989d6cc-fxn82\" (UID: \"260c8aa5-a288-4ee8-b671-f97e90a2f39c\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-fxn82" Mar 18 09:04:16.365677 master-0 kubenswrapper[28766]: I0318 09:04:16.362193 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c110b293-2c6b-496b-b015-23aada98cb4b-serving-cert\") pod \"authentication-operator-5885bfd7f4-5g8tz\" (UID: \"c110b293-2c6b-496b-b015-23aada98cb4b\") " 
pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-5g8tz" Mar 18 09:04:16.365677 master-0 kubenswrapper[28766]: I0318 09:04:16.362216 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/3d9fe248-ba87-47e3-911a-1b2b112b5683-srv-cert\") pod \"olm-operator-5c9796789-sl5kr\" (UID: \"3d9fe248-ba87-47e3-911a-1b2b112b5683\") " pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-sl5kr" Mar 18 09:04:16.365677 master-0 kubenswrapper[28766]: I0318 09:04:16.362287 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8zhfh\" (UniqueName: \"kubernetes.io/projected/31a92270-efed-44fe-871e-90333235e85f-kube-api-access-8zhfh\") pod \"insights-operator-68bf6ff9d6-kv7n5\" (UID: \"31a92270-efed-44fe-871e-90333235e85f\") " pod="openshift-insights/insights-operator-68bf6ff9d6-kv7n5" Mar 18 09:04:16.365677 master-0 kubenswrapper[28766]: I0318 09:04:16.362312 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/2700f537-8f31-4380-a527-3e697a8122cc-etcd-serving-ca\") pod \"apiserver-556c8fbcff-5shs8\" (UID: \"2700f537-8f31-4380-a527-3e697a8122cc\") " pod="openshift-oauth-apiserver/apiserver-556c8fbcff-5shs8" Mar 18 09:04:16.365677 master-0 kubenswrapper[28766]: I0318 09:04:16.362333 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ec11012b-536a-422f-afc4-d2d0fd4b67fb-serving-cert\") pod \"kube-storage-version-migrator-operator-6bb5bfb6fd-s7szg\" (UID: \"ec11012b-536a-422f-afc4-d2d0fd4b67fb\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-s7szg" Mar 18 09:04:16.365677 master-0 kubenswrapper[28766]: I0318 09:04:16.362341 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/f9fa104a-4979-4023-8d7e-a965f11bc7db-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-xpzrz\" (UID: \"f9fa104a-4979-4023-8d7e-a965f11bc7db\") " pod="openshift-multus/multus-additional-cni-plugins-xpzrz" Mar 18 09:04:16.365677 master-0 kubenswrapper[28766]: I0318 09:04:16.362356 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4hn9w\" (UniqueName: \"kubernetes.io/projected/3d9fe248-ba87-47e3-911a-1b2b112b5683-kube-api-access-4hn9w\") pod \"olm-operator-5c9796789-sl5kr\" (UID: \"3d9fe248-ba87-47e3-911a-1b2b112b5683\") " pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-sl5kr" Mar 18 09:04:16.365677 master-0 kubenswrapper[28766]: I0318 09:04:16.362382 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/16d633c5-e0aa-4fb6-83e0-a2e976334406-webhook-cert\") pod \"network-node-identity-n5vqx\" (UID: \"16d633c5-e0aa-4fb6-83e0-a2e976334406\") " pod="openshift-network-node-identity/network-node-identity-n5vqx" Mar 18 09:04:16.365677 master-0 kubenswrapper[28766]: I0318 09:04:16.362403 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/f9fa104a-4979-4023-8d7e-a965f11bc7db-cni-binary-copy\") pod \"multus-additional-cni-plugins-xpzrz\" (UID: \"f9fa104a-4979-4023-8d7e-a965f11bc7db\") " pod="openshift-multus/multus-additional-cni-plugins-xpzrz" Mar 18 09:04:16.365677 master-0 kubenswrapper[28766]: I0318 09:04:16.362523 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8a6ab2be-d018-4fd5-bfbb-6b88aec28663-config\") pod \"openshift-kube-scheduler-operator-dddff6458-9p4bb\" (UID: \"8a6ab2be-d018-4fd5-bfbb-6b88aec28663\") " 
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-9p4bb" Mar 18 09:04:16.365677 master-0 kubenswrapper[28766]: I0318 09:04:16.362559 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/f9fa104a-4979-4023-8d7e-a965f11bc7db-cni-binary-copy\") pod \"multus-additional-cni-plugins-xpzrz\" (UID: \"f9fa104a-4979-4023-8d7e-a965f11bc7db\") " pod="openshift-multus/multus-additional-cni-plugins-xpzrz" Mar 18 09:04:16.365677 master-0 kubenswrapper[28766]: I0318 09:04:16.362560 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/33a5c021-23c3-4a97-b5f3-77fd6dcba1ab-ca-certs\") pod \"operator-controller-controller-manager-57777556ff-chjqr\" (UID: \"33a5c021-23c3-4a97-b5f3-77fd6dcba1ab\") " pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-chjqr" Mar 18 09:04:16.365677 master-0 kubenswrapper[28766]: I0318 09:04:16.362596 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/94e9e4fc-bfaa-4ba5-ad6d-fe76b91932e9-bound-sa-token\") pod \"ingress-operator-66b84d69b-7h94d\" (UID: \"94e9e4fc-bfaa-4ba5-ad6d-fe76b91932e9\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-7h94d" Mar 18 09:04:16.365677 master-0 kubenswrapper[28766]: I0318 09:04:16.362614 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bd8ee9ae-4765-46bc-84d7-b5857fc3fb4a-trusted-ca\") pod \"cluster-node-tuning-operator-598fbc5f8f-tj9b9\" (UID: \"bd8ee9ae-4765-46bc-84d7-b5857fc3fb4a\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-tj9b9" Mar 18 09:04:16.365677 master-0 kubenswrapper[28766]: I0318 09:04:16.362632 28766 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-lw27k\" (UniqueName: \"kubernetes.io/projected/c110b293-2c6b-496b-b015-23aada98cb4b-kube-api-access-lw27k\") pod \"authentication-operator-5885bfd7f4-5g8tz\" (UID: \"c110b293-2c6b-496b-b015-23aada98cb4b\") " pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-5g8tz" Mar 18 09:04:16.365677 master-0 kubenswrapper[28766]: I0318 09:04:16.362731 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/8d89af2f-47f5-4ee5-a790-e162c2dee3ce-service-ca\") pod \"cluster-version-operator-7d58488df-8btcx\" (UID: \"8d89af2f-47f5-4ee5-a790-e162c2dee3ce\") " pod="openshift-cluster-version/cluster-version-operator-7d58488df-8btcx" Mar 18 09:04:16.365677 master-0 kubenswrapper[28766]: I0318 09:04:16.362769 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b9768e50-c883-47b0-b319-851fa53ac19a-config\") pod \"machine-api-operator-6fbb6cf6f9-z6nw9\" (UID: \"b9768e50-c883-47b0-b319-851fa53ac19a\") " pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-z6nw9" Mar 18 09:04:16.365677 master-0 kubenswrapper[28766]: I0318 09:04:16.362735 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/16d633c5-e0aa-4fb6-83e0-a2e976334406-webhook-cert\") pod \"network-node-identity-n5vqx\" (UID: \"16d633c5-e0aa-4fb6-83e0-a2e976334406\") " pod="openshift-network-node-identity/network-node-identity-n5vqx" Mar 18 09:04:16.365677 master-0 kubenswrapper[28766]: I0318 09:04:16.362844 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8a6ab2be-d018-4fd5-bfbb-6b88aec28663-config\") pod \"openshift-kube-scheduler-operator-dddff6458-9p4bb\" (UID: \"8a6ab2be-d018-4fd5-bfbb-6b88aec28663\") " 
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-9p4bb" Mar 18 09:04:16.365677 master-0 kubenswrapper[28766]: I0318 09:04:16.362934 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/b9768e50-c883-47b0-b319-851fa53ac19a-machine-api-operator-tls\") pod \"machine-api-operator-6fbb6cf6f9-z6nw9\" (UID: \"b9768e50-c883-47b0-b319-851fa53ac19a\") " pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-z6nw9" Mar 18 09:04:16.365677 master-0 kubenswrapper[28766]: I0318 09:04:16.363048 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ec11012b-536a-422f-afc4-d2d0fd4b67fb-serving-cert\") pod \"kube-storage-version-migrator-operator-6bb5bfb6fd-s7szg\" (UID: \"ec11012b-536a-422f-afc4-d2d0fd4b67fb\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-s7szg" Mar 18 09:04:16.365677 master-0 kubenswrapper[28766]: I0318 09:04:16.364588 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b5f9f50b-e7b4-4b81-864b-349303f21447-config\") pod \"apiserver-7bb69b5c5c-djsr9\" (UID: \"b5f9f50b-e7b4-4b81-864b-349303f21447\") " pod="openshift-apiserver/apiserver-7bb69b5c5c-djsr9" Mar 18 09:04:16.375281 master-0 kubenswrapper[28766]: I0318 09:04:16.374178 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Mar 18 09:04:16.384456 master-0 kubenswrapper[28766]: I0318 09:04:16.384397 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/b5f9f50b-e7b4-4b81-864b-349303f21447-audit\") pod \"apiserver-7bb69b5c5c-djsr9\" (UID: \"b5f9f50b-e7b4-4b81-864b-349303f21447\") " pod="openshift-apiserver/apiserver-7bb69b5c5c-djsr9" Mar 18 09:04:16.387670 master-0 
kubenswrapper[28766]: I0318 09:04:16.386618 28766 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Mar 18 09:04:16.401035 master-0 kubenswrapper[28766]: I0318 09:04:16.400630 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Mar 18 09:04:16.402284 master-0 kubenswrapper[28766]: I0318 09:04:16.402216 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b5f9f50b-e7b4-4b81-864b-349303f21447-trusted-ca-bundle\") pod \"apiserver-7bb69b5c5c-djsr9\" (UID: \"b5f9f50b-e7b4-4b81-864b-349303f21447\") " pod="openshift-apiserver/apiserver-7bb69b5c5c-djsr9" Mar 18 09:04:16.413928 master-0 kubenswrapper[28766]: I0318 09:04:16.413876 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Mar 18 09:04:16.421959 master-0 kubenswrapper[28766]: I0318 09:04:16.421918 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/b5f9f50b-e7b4-4b81-864b-349303f21447-image-import-ca\") pod \"apiserver-7bb69b5c5c-djsr9\" (UID: \"b5f9f50b-e7b4-4b81-864b-349303f21447\") " pod="openshift-apiserver/apiserver-7bb69b5c5c-djsr9" Mar 18 09:04:16.438281 master-0 kubenswrapper[28766]: I0318 09:04:16.438177 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"kube-root-ca.crt" Mar 18 09:04:16.455060 master-0 kubenswrapper[28766]: I0318 09:04:16.454806 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Mar 18 09:04:16.466224 master-0 kubenswrapper[28766]: I0318 09:04:16.466169 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/b5f9f50b-e7b4-4b81-864b-349303f21447-node-pullsecrets\") pod 
\"apiserver-7bb69b5c5c-djsr9\" (UID: \"b5f9f50b-e7b4-4b81-864b-349303f21447\") " pod="openshift-apiserver/apiserver-7bb69b5c5c-djsr9" Mar 18 09:04:16.466224 master-0 kubenswrapper[28766]: I0318 09:04:16.466227 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4-multus-cni-dir\") pod \"multus-bpf5c\" (UID: \"fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4\") " pod="openshift-multus/multus-bpf5c" Mar 18 09:04:16.466441 master-0 kubenswrapper[28766]: I0318 09:04:16.466271 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-telemeter-client-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/e5ae1886-f90c-49f4-bf08-055b55dd785a-secret-telemeter-client-kube-rbac-proxy-config\") pod \"telemeter-client-5d4d5995f-s5dw8\" (UID: \"e5ae1886-f90c-49f4-bf08-055b55dd785a\") " pod="openshift-monitoring/telemeter-client-5d4d5995f-s5dw8" Mar 18 09:04:16.466441 master-0 kubenswrapper[28766]: I0318 09:04:16.466385 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4-multus-cni-dir\") pod \"multus-bpf5c\" (UID: \"fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4\") " pod="openshift-multus/multus-bpf5c" Mar 18 09:04:16.466441 master-0 kubenswrapper[28766]: I0318 09:04:16.466424 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b5f9f50b-e7b4-4b81-864b-349303f21447-audit-dir\") pod \"apiserver-7bb69b5c5c-djsr9\" (UID: \"b5f9f50b-e7b4-4b81-864b-349303f21447\") " pod="openshift-apiserver/apiserver-7bb69b5c5c-djsr9" Mar 18 09:04:16.466536 master-0 kubenswrapper[28766]: I0318 09:04:16.466419 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: 
\"kubernetes.io/host-path/b5f9f50b-e7b4-4b81-864b-349303f21447-node-pullsecrets\") pod \"apiserver-7bb69b5c5c-djsr9\" (UID: \"b5f9f50b-e7b4-4b81-864b-349303f21447\") " pod="openshift-apiserver/apiserver-7bb69b5c5c-djsr9" Mar 18 09:04:16.466536 master-0 kubenswrapper[28766]: I0318 09:04:16.466460 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4-host-run-multus-certs\") pod \"multus-bpf5c\" (UID: \"fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4\") " pod="openshift-multus/multus-bpf5c" Mar 18 09:04:16.466536 master-0 kubenswrapper[28766]: I0318 09:04:16.466482 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/2207df9e-f21e-4c30-98d5-248ae99c245e-host-kubelet\") pod \"ovnkube-node-cxws9\" (UID: \"2207df9e-f21e-4c30-98d5-248ae99c245e\") " pod="openshift-ovn-kubernetes/ovnkube-node-cxws9" Mar 18 09:04:16.466536 master-0 kubenswrapper[28766]: I0318 09:04:16.466501 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemeter-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e5ae1886-f90c-49f4-bf08-055b55dd785a-telemeter-trusted-ca-bundle\") pod \"telemeter-client-5d4d5995f-s5dw8\" (UID: \"e5ae1886-f90c-49f4-bf08-055b55dd785a\") " pod="openshift-monitoring/telemeter-client-5d4d5995f-s5dw8" Mar 18 09:04:16.466695 master-0 kubenswrapper[28766]: I0318 09:04:16.466576 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b5f9f50b-e7b4-4b81-864b-349303f21447-audit-dir\") pod \"apiserver-7bb69b5c5c-djsr9\" (UID: \"b5f9f50b-e7b4-4b81-864b-349303f21447\") " pod="openshift-apiserver/apiserver-7bb69b5c5c-djsr9" Mar 18 09:04:16.466695 master-0 kubenswrapper[28766]: I0318 09:04:16.466626 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/a7dab805-612b-404c-ab97-8cee927169db-rootfs\") pod \"machine-config-daemon-qsj46\" (UID: \"a7dab805-612b-404c-ab97-8cee927169db\") " pod="openshift-machine-config-operator/machine-config-daemon-qsj46" Mar 18 09:04:16.466695 master-0 kubenswrapper[28766]: I0318 09:04:16.466584 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4-host-run-multus-certs\") pod \"multus-bpf5c\" (UID: \"fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4\") " pod="openshift-multus/multus-bpf5c" Mar 18 09:04:16.466695 master-0 kubenswrapper[28766]: I0318 09:04:16.466672 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/a7dab805-612b-404c-ab97-8cee927169db-rootfs\") pod \"machine-config-daemon-qsj46\" (UID: \"a7dab805-612b-404c-ab97-8cee927169db\") " pod="openshift-machine-config-operator/machine-config-daemon-qsj46" Mar 18 09:04:16.466868 master-0 kubenswrapper[28766]: I0318 09:04:16.466693 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/2207df9e-f21e-4c30-98d5-248ae99c245e-host-kubelet\") pod \"ovnkube-node-cxws9\" (UID: \"2207df9e-f21e-4c30-98d5-248ae99c245e\") " pod="openshift-ovn-kubernetes/ovnkube-node-cxws9" Mar 18 09:04:16.466868 master-0 kubenswrapper[28766]: I0318 09:04:16.466718 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d0272f7c-bedc-44cf-9790-88e10e6dda03-cert\") pod \"ingress-canary-mpw9b\" (UID: \"d0272f7c-bedc-44cf-9790-88e10e6dda03\") " pod="openshift-ingress-canary/ingress-canary-mpw9b" Mar 18 09:04:16.466868 master-0 kubenswrapper[28766]: I0318 09:04:16.466763 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4-etc-kubernetes\") pod \"multus-bpf5c\" (UID: \"fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4\") " pod="openshift-multus/multus-bpf5c" Mar 18 09:04:16.466868 master-0 kubenswrapper[28766]: I0318 09:04:16.466825 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/43fbd379-dd1e-4287-bd76-fd3ec51cde43-etc-docker\") pod \"catalogd-controller-manager-6864dc98f7-phjp8\" (UID: \"43fbd379-dd1e-4287-bd76-fd3ec51cde43\") " pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-phjp8" Mar 18 09:04:16.466868 master-0 kubenswrapper[28766]: I0318 09:04:16.466868 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-sysctl-d\" (UniqueName: \"kubernetes.io/host-path/f826efe0-60a1-4465-b8d0-d4069ed507a1-etc-sysctl-d\") pod \"tuned-zzqc6\" (UID: \"f826efe0-60a1-4465-b8d0-d4069ed507a1\") " pod="openshift-cluster-node-tuning-operator/tuned-zzqc6" Mar 18 09:04:16.467024 master-0 kubenswrapper[28766]: I0318 09:04:16.466921 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5320a1da-262a-4b1b-93b4-1df9d4c26eec-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-59f88c66c8-z4c2f\" (UID: \"5320a1da-262a-4b1b-93b4-1df9d4c26eec\") " pod="openshift-monitoring/metrics-server-59f88c66c8-z4c2f" Mar 18 09:04:16.467024 master-0 kubenswrapper[28766]: I0318 09:04:16.466930 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4-etc-kubernetes\") pod \"multus-bpf5c\" (UID: \"fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4\") " pod="openshift-multus/multus-bpf5c" Mar 18 09:04:16.467110 master-0 kubenswrapper[28766]: I0318 09:04:16.467072 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"etc-sysctl-d\" (UniqueName: \"kubernetes.io/host-path/f826efe0-60a1-4465-b8d0-d4069ed507a1-etc-sysctl-d\") pod \"tuned-zzqc6\" (UID: \"f826efe0-60a1-4465-b8d0-d4069ed507a1\") " pod="openshift-cluster-node-tuning-operator/tuned-zzqc6" Mar 18 09:04:16.467353 master-0 kubenswrapper[28766]: I0318 09:04:16.467168 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/43fbd379-dd1e-4287-bd76-fd3ec51cde43-etc-docker\") pod \"catalogd-controller-manager-6864dc98f7-phjp8\" (UID: \"43fbd379-dd1e-4287-bd76-fd3ec51cde43\") " pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-phjp8" Mar 18 09:04:16.467353 master-0 kubenswrapper[28766]: I0318 09:04:16.467201 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/4146a62d-e37b-4295-90ca-b23f5e3d1112-node-exporter-wtmp\") pod \"node-exporter-75szk\" (UID: \"4146a62d-e37b-4295-90ca-b23f5e3d1112\") " pod="openshift-monitoring/node-exporter-75szk" Mar 18 09:04:16.467353 master-0 kubenswrapper[28766]: I0318 09:04:16.467304 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/2207df9e-f21e-4c30-98d5-248ae99c245e-systemd-units\") pod \"ovnkube-node-cxws9\" (UID: \"2207df9e-f21e-4c30-98d5-248ae99c245e\") " pod="openshift-ovn-kubernetes/ovnkube-node-cxws9" Mar 18 09:04:16.467968 master-0 kubenswrapper[28766]: I0318 09:04:16.467395 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/2207df9e-f21e-4c30-98d5-248ae99c245e-systemd-units\") pod \"ovnkube-node-cxws9\" (UID: \"2207df9e-f21e-4c30-98d5-248ae99c245e\") " pod="openshift-ovn-kubernetes/ovnkube-node-cxws9" Mar 18 09:04:16.467968 master-0 kubenswrapper[28766]: I0318 09:04:16.467942 28766 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/2207df9e-f21e-4c30-98d5-248ae99c245e-run-systemd\") pod \"ovnkube-node-cxws9\" (UID: \"2207df9e-f21e-4c30-98d5-248ae99c245e\") " pod="openshift-ovn-kubernetes/ovnkube-node-cxws9" Mar 18 09:04:16.468074 master-0 kubenswrapper[28766]: I0318 09:04:16.467971 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/2207df9e-f21e-4c30-98d5-248ae99c245e-run-systemd\") pod \"ovnkube-node-cxws9\" (UID: \"2207df9e-f21e-4c30-98d5-248ae99c245e\") " pod="openshift-ovn-kubernetes/ovnkube-node-cxws9" Mar 18 09:04:16.468074 master-0 kubenswrapper[28766]: I0318 09:04:16.468041 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4-host-run-k8s-cni-cncf-io\") pod \"multus-bpf5c\" (UID: \"fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4\") " pod="openshift-multus/multus-bpf5c" Mar 18 09:04:16.468074 master-0 kubenswrapper[28766]: I0318 09:04:16.468065 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4-host-var-lib-kubelet\") pod \"multus-bpf5c\" (UID: \"fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4\") " pod="openshift-multus/multus-bpf5c" Mar 18 09:04:16.468196 master-0 kubenswrapper[28766]: I0318 09:04:16.468110 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4-host-run-k8s-cni-cncf-io\") pod \"multus-bpf5c\" (UID: \"fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4\") " pod="openshift-multus/multus-bpf5c" Mar 18 09:04:16.468196 master-0 kubenswrapper[28766]: I0318 09:04:16.468124 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4-host-var-lib-kubelet\") pod \"multus-bpf5c\" (UID: \"fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4\") " pod="openshift-multus/multus-bpf5c" Mar 18 09:04:16.468196 master-0 kubenswrapper[28766]: I0318 09:04:16.468141 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/07a4fd92-0fd1-4688-b2db-de615d75971e-host-etc-kube\") pod \"network-operator-7bd846bfc4-5r5r4\" (UID: \"07a4fd92-0fd1-4688-b2db-de615d75971e\") " pod="openshift-network-operator/network-operator-7bd846bfc4-5r5r4" Mar 18 09:04:16.468196 master-0 kubenswrapper[28766]: I0318 09:04:16.468176 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/07a4fd92-0fd1-4688-b2db-de615d75971e-host-etc-kube\") pod \"network-operator-7bd846bfc4-5r5r4\" (UID: \"07a4fd92-0fd1-4688-b2db-de615d75971e\") " pod="openshift-network-operator/network-operator-7bd846bfc4-5r5r4" Mar 18 09:04:16.468345 master-0 kubenswrapper[28766]: I0318 09:04:16.468193 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/2207df9e-f21e-4c30-98d5-248ae99c245e-host-slash\") pod \"ovnkube-node-cxws9\" (UID: \"2207df9e-f21e-4c30-98d5-248ae99c245e\") " pod="openshift-ovn-kubernetes/ovnkube-node-cxws9" Mar 18 09:04:16.468345 master-0 kubenswrapper[28766]: I0318 09:04:16.468260 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2207df9e-f21e-4c30-98d5-248ae99c245e-host-run-ovn-kubernetes\") pod \"ovnkube-node-cxws9\" (UID: \"2207df9e-f21e-4c30-98d5-248ae99c245e\") " pod="openshift-ovn-kubernetes/ovnkube-node-cxws9" Mar 18 09:04:16.468345 master-0 kubenswrapper[28766]: I0318 09:04:16.468223 28766 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/2207df9e-f21e-4c30-98d5-248ae99c245e-host-slash\") pod \"ovnkube-node-cxws9\" (UID: \"2207df9e-f21e-4c30-98d5-248ae99c245e\") " pod="openshift-ovn-kubernetes/ovnkube-node-cxws9" Mar 18 09:04:16.468345 master-0 kubenswrapper[28766]: I0318 09:04:16.468300 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/e0d127be-2d13-449b-915b-2d49052baf02-var-lock\") pod \"installer-3-master-0\" (UID: \"e0d127be-2d13-449b-915b-2d49052baf02\") " pod="openshift-kube-apiserver/installer-3-master-0" Mar 18 09:04:16.468345 master-0 kubenswrapper[28766]: I0318 09:04:16.468322 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2207df9e-f21e-4c30-98d5-248ae99c245e-host-run-ovn-kubernetes\") pod \"ovnkube-node-cxws9\" (UID: \"2207df9e-f21e-4c30-98d5-248ae99c245e\") " pod="openshift-ovn-kubernetes/ovnkube-node-cxws9" Mar 18 09:04:16.468345 master-0 kubenswrapper[28766]: I0318 09:04:16.468326 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/f826efe0-60a1-4465-b8d0-d4069ed507a1-run\") pod \"tuned-zzqc6\" (UID: \"f826efe0-60a1-4465-b8d0-d4069ed507a1\") " pod="openshift-cluster-node-tuning-operator/tuned-zzqc6" Mar 18 09:04:16.468560 master-0 kubenswrapper[28766]: I0318 09:04:16.468355 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/f826efe0-60a1-4465-b8d0-d4069ed507a1-run\") pod \"tuned-zzqc6\" (UID: \"f826efe0-60a1-4465-b8d0-d4069ed507a1\") " pod="openshift-cluster-node-tuning-operator/tuned-zzqc6" Mar 18 09:04:16.468560 master-0 kubenswrapper[28766]: I0318 09:04:16.468379 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/5320a1da-262a-4b1b-93b4-1df9d4c26eec-audit-log\") pod \"metrics-server-59f88c66c8-z4c2f\" (UID: \"5320a1da-262a-4b1b-93b4-1df9d4c26eec\") " pod="openshift-monitoring/metrics-server-59f88c66c8-z4c2f" Mar 18 09:04:16.468560 master-0 kubenswrapper[28766]: I0318 09:04:16.468443 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-sysctl-conf\" (UniqueName: \"kubernetes.io/host-path/f826efe0-60a1-4465-b8d0-d4069ed507a1-etc-sysctl-conf\") pod \"tuned-zzqc6\" (UID: \"f826efe0-60a1-4465-b8d0-d4069ed507a1\") " pod="openshift-cluster-node-tuning-operator/tuned-zzqc6" Mar 18 09:04:16.468560 master-0 kubenswrapper[28766]: I0318 09:04:16.468476 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/5320a1da-262a-4b1b-93b4-1df9d4c26eec-audit-log\") pod \"metrics-server-59f88c66c8-z4c2f\" (UID: \"5320a1da-262a-4b1b-93b4-1df9d4c26eec\") " pod="openshift-monitoring/metrics-server-59f88c66c8-z4c2f" Mar 18 09:04:16.468560 master-0 kubenswrapper[28766]: I0318 09:04:16.468536 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-systemd\" (UniqueName: \"kubernetes.io/host-path/f826efe0-60a1-4465-b8d0-d4069ed507a1-etc-systemd\") pod \"tuned-zzqc6\" (UID: \"f826efe0-60a1-4465-b8d0-d4069ed507a1\") " pod="openshift-cluster-node-tuning-operator/tuned-zzqc6" Mar 18 09:04:16.468560 master-0 kubenswrapper[28766]: I0318 09:04:16.468561 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-telemeter-client\" (UniqueName: \"kubernetes.io/secret/e5ae1886-f90c-49f4-bf08-055b55dd785a-secret-telemeter-client\") pod \"telemeter-client-5d4d5995f-s5dw8\" (UID: \"e5ae1886-f90c-49f4-bf08-055b55dd785a\") " pod="openshift-monitoring/telemeter-client-5d4d5995f-s5dw8" Mar 18 09:04:16.469572 master-0 kubenswrapper[28766]: I0318 09:04:16.468589 28766 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/4146a62d-e37b-4295-90ca-b23f5e3d1112-sys\") pod \"node-exporter-75szk\" (UID: \"4146a62d-e37b-4295-90ca-b23f5e3d1112\") " pod="openshift-monitoring/node-exporter-75szk" Mar 18 09:04:16.469572 master-0 kubenswrapper[28766]: I0318 09:04:16.468654 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/f9fa104a-4979-4023-8d7e-a965f11bc7db-system-cni-dir\") pod \"multus-additional-cni-plugins-xpzrz\" (UID: \"f9fa104a-4979-4023-8d7e-a965f11bc7db\") " pod="openshift-multus/multus-additional-cni-plugins-xpzrz" Mar 18 09:04:16.469572 master-0 kubenswrapper[28766]: I0318 09:04:16.468687 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ltlf6\" (UniqueName: \"kubernetes.io/projected/06cbd48a-1f1d-4734-8d57-e1b6824879b6-kube-api-access-ltlf6\") pod \"openshift-state-metrics-5dc6c74576-dsq5f\" (UID: \"06cbd48a-1f1d-4734-8d57-e1b6824879b6\") " pod="openshift-monitoring/openshift-state-metrics-5dc6c74576-dsq5f" Mar 18 09:04:16.469572 master-0 kubenswrapper[28766]: I0318 09:04:16.468715 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4-multus-conf-dir\") pod \"multus-bpf5c\" (UID: \"fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4\") " pod="openshift-multus/multus-bpf5c" Mar 18 09:04:16.469572 master-0 kubenswrapper[28766]: I0318 09:04:16.468743 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2207df9e-f21e-4c30-98d5-248ae99c245e-run-openvswitch\") pod \"ovnkube-node-cxws9\" (UID: \"2207df9e-f21e-4c30-98d5-248ae99c245e\") " pod="openshift-ovn-kubernetes/ovnkube-node-cxws9" Mar 18 09:04:16.469572 master-0 kubenswrapper[28766]: 
I0318 09:04:16.468764 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/2207df9e-f21e-4c30-98d5-248ae99c245e-host-cni-bin\") pod \"ovnkube-node-cxws9\" (UID: \"2207df9e-f21e-4c30-98d5-248ae99c245e\") " pod="openshift-ovn-kubernetes/ovnkube-node-cxws9" Mar 18 09:04:16.469572 master-0 kubenswrapper[28766]: I0318 09:04:16.468785 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/f826efe0-60a1-4465-b8d0-d4069ed507a1-etc-kubernetes\") pod \"tuned-zzqc6\" (UID: \"f826efe0-60a1-4465-b8d0-d4069ed507a1\") " pod="openshift-cluster-node-tuning-operator/tuned-zzqc6" Mar 18 09:04:16.469572 master-0 kubenswrapper[28766]: I0318 09:04:16.468802 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/4146a62d-e37b-4295-90ca-b23f5e3d1112-root\") pod \"node-exporter-75szk\" (UID: \"4146a62d-e37b-4295-90ca-b23f5e3d1112\") " pod="openshift-monitoring/node-exporter-75szk" Mar 18 09:04:16.469572 master-0 kubenswrapper[28766]: I0318 09:04:16.468823 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2207df9e-f21e-4c30-98d5-248ae99c245e-etc-openvswitch\") pod \"ovnkube-node-cxws9\" (UID: \"2207df9e-f21e-4c30-98d5-248ae99c245e\") " pod="openshift-ovn-kubernetes/ovnkube-node-cxws9" Mar 18 09:04:16.469572 master-0 kubenswrapper[28766]: I0318 09:04:16.468846 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9q8l2\" (UniqueName: \"kubernetes.io/projected/5320a1da-262a-4b1b-93b4-1df9d4c26eec-kube-api-access-9q8l2\") pod \"metrics-server-59f88c66c8-z4c2f\" (UID: \"5320a1da-262a-4b1b-93b4-1df9d4c26eec\") " pod="openshift-monitoring/metrics-server-59f88c66c8-z4c2f" Mar 18 09:04:16.469572 master-0 
kubenswrapper[28766]: I0318 09:04:16.468914 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e0d127be-2d13-449b-915b-2d49052baf02-kube-api-access\") pod \"installer-3-master-0\" (UID: \"e0d127be-2d13-449b-915b-2d49052baf02\") " pod="openshift-kube-apiserver/installer-3-master-0" Mar 18 09:04:16.469572 master-0 kubenswrapper[28766]: I0318 09:04:16.468946 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/ccf74af5-d4fd-4ed3-9784-42397ea798c5-host-etc-kube\") pod \"cluster-cloud-controller-manager-operator-7dff898856-9xtls\" (UID: \"ccf74af5-d4fd-4ed3-9784-42397ea798c5\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-9xtls" Mar 18 09:04:16.469572 master-0 kubenswrapper[28766]: I0318 09:04:16.468977 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/8d89af2f-47f5-4ee5-a790-e162c2dee3ce-etc-cvo-updatepayloads\") pod \"cluster-version-operator-7d58488df-8btcx\" (UID: \"8d89af2f-47f5-4ee5-a790-e162c2dee3ce\") " pod="openshift-cluster-version/cluster-version-operator-7d58488df-8btcx" Mar 18 09:04:16.469572 master-0 kubenswrapper[28766]: I0318 09:04:16.469019 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemeter-client-tls\" (UniqueName: \"kubernetes.io/secret/e5ae1886-f90c-49f4-bf08-055b55dd785a-telemeter-client-tls\") pod \"telemeter-client-5d4d5995f-s5dw8\" (UID: \"e5ae1886-f90c-49f4-bf08-055b55dd785a\") " pod="openshift-monitoring/telemeter-client-5d4d5995f-s5dw8" Mar 18 09:04:16.469572 master-0 kubenswrapper[28766]: I0318 09:04:16.469052 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: 
\"kubernetes.io/host-path/f826efe0-60a1-4465-b8d0-d4069ed507a1-sys\") pod \"tuned-zzqc6\" (UID: \"f826efe0-60a1-4465-b8d0-d4069ed507a1\") " pod="openshift-cluster-node-tuning-operator/tuned-zzqc6" Mar 18 09:04:16.469572 master-0 kubenswrapper[28766]: I0318 09:04:16.469077 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/5320a1da-262a-4b1b-93b4-1df9d4c26eec-metrics-server-audit-profiles\") pod \"metrics-server-59f88c66c8-z4c2f\" (UID: \"5320a1da-262a-4b1b-93b4-1df9d4c26eec\") " pod="openshift-monitoring/metrics-server-59f88c66c8-z4c2f" Mar 18 09:04:16.469572 master-0 kubenswrapper[28766]: I0318 09:04:16.469104 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-modprobe-d\" (UniqueName: \"kubernetes.io/host-path/f826efe0-60a1-4465-b8d0-d4069ed507a1-etc-modprobe-d\") pod \"tuned-zzqc6\" (UID: \"f826efe0-60a1-4465-b8d0-d4069ed507a1\") " pod="openshift-cluster-node-tuning-operator/tuned-zzqc6" Mar 18 09:04:16.469572 master-0 kubenswrapper[28766]: I0318 09:04:16.469148 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5320a1da-262a-4b1b-93b4-1df9d4c26eec-client-ca-bundle\") pod \"metrics-server-59f88c66c8-z4c2f\" (UID: \"5320a1da-262a-4b1b-93b4-1df9d4c26eec\") " pod="openshift-monitoring/metrics-server-59f88c66c8-z4c2f" Mar 18 09:04:16.469572 master-0 kubenswrapper[28766]: I0318 09:04:16.469269 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/2700f537-8f31-4380-a527-3e697a8122cc-audit-dir\") pod \"apiserver-556c8fbcff-5shs8\" (UID: \"2700f537-8f31-4380-a527-3e697a8122cc\") " pod="openshift-oauth-apiserver/apiserver-556c8fbcff-5shs8" Mar 18 09:04:16.469572 master-0 kubenswrapper[28766]: I0318 09:04:16.469292 28766 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/2207df9e-f21e-4c30-98d5-248ae99c245e-run-ovn\") pod \"ovnkube-node-cxws9\" (UID: \"2207df9e-f21e-4c30-98d5-248ae99c245e\") " pod="openshift-ovn-kubernetes/ovnkube-node-cxws9" Mar 18 09:04:16.469572 master-0 kubenswrapper[28766]: I0318 09:04:16.469327 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4-cnibin\") pod \"multus-bpf5c\" (UID: \"fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4\") " pod="openshift-multus/multus-bpf5c" Mar 18 09:04:16.469572 master-0 kubenswrapper[28766]: I0318 09:04:16.469345 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4-os-release\") pod \"multus-bpf5c\" (UID: \"fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4\") " pod="openshift-multus/multus-bpf5c" Mar 18 09:04:16.469572 master-0 kubenswrapper[28766]: I0318 09:04:16.469370 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2207df9e-f21e-4c30-98d5-248ae99c245e-host-cni-netd\") pod \"ovnkube-node-cxws9\" (UID: \"2207df9e-f21e-4c30-98d5-248ae99c245e\") " pod="openshift-ovn-kubernetes/ovnkube-node-cxws9" Mar 18 09:04:16.469572 master-0 kubenswrapper[28766]: I0318 09:04:16.469390 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/06cbd48a-1f1d-4734-8d57-e1b6824879b6-metrics-client-ca\") pod \"openshift-state-metrics-5dc6c74576-dsq5f\" (UID: \"06cbd48a-1f1d-4734-8d57-e1b6824879b6\") " pod="openshift-monitoring/openshift-state-metrics-5dc6c74576-dsq5f" Mar 18 09:04:16.469572 master-0 kubenswrapper[28766]: I0318 09:04:16.469408 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e5ae1886-f90c-49f4-bf08-055b55dd785a-serving-certs-ca-bundle\") pod \"telemeter-client-5d4d5995f-s5dw8\" (UID: \"e5ae1886-f90c-49f4-bf08-055b55dd785a\") " pod="openshift-monitoring/telemeter-client-5d4d5995f-s5dw8" Mar 18 09:04:16.469572 master-0 kubenswrapper[28766]: I0318 09:04:16.469429 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-sysctl-conf\" (UniqueName: \"kubernetes.io/host-path/f826efe0-60a1-4465-b8d0-d4069ed507a1-etc-sysctl-conf\") pod \"tuned-zzqc6\" (UID: \"f826efe0-60a1-4465-b8d0-d4069ed507a1\") " pod="openshift-cluster-node-tuning-operator/tuned-zzqc6" Mar 18 09:04:16.469572 master-0 kubenswrapper[28766]: I0318 09:04:16.469436 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/8d89af2f-47f5-4ee5-a790-e162c2dee3ce-etc-ssl-certs\") pod \"cluster-version-operator-7d58488df-8btcx\" (UID: \"8d89af2f-47f5-4ee5-a790-e162c2dee3ce\") " pod="openshift-cluster-version/cluster-version-operator-7d58488df-8btcx" Mar 18 09:04:16.469572 master-0 kubenswrapper[28766]: I0318 09:04:16.469461 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/8d89af2f-47f5-4ee5-a790-e162c2dee3ce-etc-ssl-certs\") pod \"cluster-version-operator-7d58488df-8btcx\" (UID: \"8d89af2f-47f5-4ee5-a790-e162c2dee3ce\") " pod="openshift-cluster-version/cluster-version-operator-7d58488df-8btcx" Mar 18 09:04:16.469572 master-0 kubenswrapper[28766]: I0318 09:04:16.469502 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4-multus-conf-dir\") pod \"multus-bpf5c\" (UID: \"fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4\") " pod="openshift-multus/multus-bpf5c" Mar 18 09:04:16.469572 master-0 kubenswrapper[28766]: I0318 
09:04:16.469535 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-systemd\" (UniqueName: \"kubernetes.io/host-path/f826efe0-60a1-4465-b8d0-d4069ed507a1-etc-systemd\") pod \"tuned-zzqc6\" (UID: \"f826efe0-60a1-4465-b8d0-d4069ed507a1\") " pod="openshift-cluster-node-tuning-operator/tuned-zzqc6" Mar 18 09:04:16.469572 master-0 kubenswrapper[28766]: I0318 09:04:16.469551 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/2207df9e-f21e-4c30-98d5-248ae99c245e-log-socket\") pod \"ovnkube-node-cxws9\" (UID: \"2207df9e-f21e-4c30-98d5-248ae99c245e\") " pod="openshift-ovn-kubernetes/ovnkube-node-cxws9" Mar 18 09:04:16.469572 master-0 kubenswrapper[28766]: I0318 09:04:16.469536 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/2207df9e-f21e-4c30-98d5-248ae99c245e-log-socket\") pod \"ovnkube-node-cxws9\" (UID: \"2207df9e-f21e-4c30-98d5-248ae99c245e\") " pod="openshift-ovn-kubernetes/ovnkube-node-cxws9" Mar 18 09:04:16.469572 master-0 kubenswrapper[28766]: I0318 09:04:16.469580 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/33a5c021-23c3-4a97-b5f3-77fd6dcba1ab-etc-containers\") pod \"operator-controller-controller-manager-57777556ff-chjqr\" (UID: \"33a5c021-23c3-4a97-b5f3-77fd6dcba1ab\") " pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-chjqr" Mar 18 09:04:16.469572 master-0 kubenswrapper[28766]: I0318 09:04:16.469613 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f826efe0-60a1-4465-b8d0-d4069ed507a1-host\") pod \"tuned-zzqc6\" (UID: \"f826efe0-60a1-4465-b8d0-d4069ed507a1\") " pod="openshift-cluster-node-tuning-operator/tuned-zzqc6" Mar 18 09:04:16.471312 master-0 
kubenswrapper[28766]: I0318 09:04:16.469623 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/f9fa104a-4979-4023-8d7e-a965f11bc7db-system-cni-dir\") pod \"multus-additional-cni-plugins-xpzrz\" (UID: \"f9fa104a-4979-4023-8d7e-a965f11bc7db\") " pod="openshift-multus/multus-additional-cni-plugins-xpzrz" Mar 18 09:04:16.471312 master-0 kubenswrapper[28766]: I0318 09:04:16.469635 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4fql4\" (UniqueName: \"kubernetes.io/projected/e5ae1886-f90c-49f4-bf08-055b55dd785a-kube-api-access-4fql4\") pod \"telemeter-client-5d4d5995f-s5dw8\" (UID: \"e5ae1886-f90c-49f4-bf08-055b55dd785a\") " pod="openshift-monitoring/telemeter-client-5d4d5995f-s5dw8" Mar 18 09:04:16.471312 master-0 kubenswrapper[28766]: I0318 09:04:16.469658 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/4146a62d-e37b-4295-90ca-b23f5e3d1112-node-exporter-textfile\") pod \"node-exporter-75szk\" (UID: \"4146a62d-e37b-4295-90ca-b23f5e3d1112\") " pod="openshift-monitoring/node-exporter-75szk" Mar 18 09:04:16.471312 master-0 kubenswrapper[28766]: I0318 09:04:16.469689 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/2207df9e-f21e-4c30-98d5-248ae99c245e-node-log\") pod \"ovnkube-node-cxws9\" (UID: \"2207df9e-f21e-4c30-98d5-248ae99c245e\") " pod="openshift-ovn-kubernetes/ovnkube-node-cxws9" Mar 18 09:04:16.471312 master-0 kubenswrapper[28766]: I0318 09:04:16.469711 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-modprobe-d\" (UniqueName: \"kubernetes.io/host-path/f826efe0-60a1-4465-b8d0-d4069ed507a1-etc-modprobe-d\") pod \"tuned-zzqc6\" (UID: \"f826efe0-60a1-4465-b8d0-d4069ed507a1\") " 
pod="openshift-cluster-node-tuning-operator/tuned-zzqc6" Mar 18 09:04:16.471312 master-0 kubenswrapper[28766]: I0318 09:04:16.469725 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"federate-client-tls\" (UniqueName: \"kubernetes.io/secret/e5ae1886-f90c-49f4-bf08-055b55dd785a-federate-client-tls\") pod \"telemeter-client-5d4d5995f-s5dw8\" (UID: \"e5ae1886-f90c-49f4-bf08-055b55dd785a\") " pod="openshift-monitoring/telemeter-client-5d4d5995f-s5dw8" Mar 18 09:04:16.471312 master-0 kubenswrapper[28766]: I0318 09:04:16.469746 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2207df9e-f21e-4c30-98d5-248ae99c245e-run-openvswitch\") pod \"ovnkube-node-cxws9\" (UID: \"2207df9e-f21e-4c30-98d5-248ae99c245e\") " pod="openshift-ovn-kubernetes/ovnkube-node-cxws9" Mar 18 09:04:16.471312 master-0 kubenswrapper[28766]: I0318 09:04:16.469748 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/5320a1da-262a-4b1b-93b4-1df9d4c26eec-secret-metrics-server-tls\") pod \"metrics-server-59f88c66c8-z4c2f\" (UID: \"5320a1da-262a-4b1b-93b4-1df9d4c26eec\") " pod="openshift-monitoring/metrics-server-59f88c66c8-z4c2f" Mar 18 09:04:16.471312 master-0 kubenswrapper[28766]: I0318 09:04:16.469778 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/2207df9e-f21e-4c30-98d5-248ae99c245e-host-cni-bin\") pod \"ovnkube-node-cxws9\" (UID: \"2207df9e-f21e-4c30-98d5-248ae99c245e\") " pod="openshift-ovn-kubernetes/ovnkube-node-cxws9" Mar 18 09:04:16.471312 master-0 kubenswrapper[28766]: I0318 09:04:16.469819 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/2700f537-8f31-4380-a527-3e697a8122cc-audit-dir\") pod \"apiserver-556c8fbcff-5shs8\" (UID: 
\"2700f537-8f31-4380-a527-3e697a8122cc\") " pod="openshift-oauth-apiserver/apiserver-556c8fbcff-5shs8" Mar 18 09:04:16.471312 master-0 kubenswrapper[28766]: I0318 09:04:16.469831 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/f826efe0-60a1-4465-b8d0-d4069ed507a1-etc-kubernetes\") pod \"tuned-zzqc6\" (UID: \"f826efe0-60a1-4465-b8d0-d4069ed507a1\") " pod="openshift-cluster-node-tuning-operator/tuned-zzqc6" Mar 18 09:04:16.471312 master-0 kubenswrapper[28766]: I0318 09:04:16.469939 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/2207df9e-f21e-4c30-98d5-248ae99c245e-run-ovn\") pod \"ovnkube-node-cxws9\" (UID: \"2207df9e-f21e-4c30-98d5-248ae99c245e\") " pod="openshift-ovn-kubernetes/ovnkube-node-cxws9" Mar 18 09:04:16.471312 master-0 kubenswrapper[28766]: I0318 09:04:16.470035 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/f826efe0-60a1-4465-b8d0-d4069ed507a1-sys\") pod \"tuned-zzqc6\" (UID: \"f826efe0-60a1-4465-b8d0-d4069ed507a1\") " pod="openshift-cluster-node-tuning-operator/tuned-zzqc6" Mar 18 09:04:16.471312 master-0 kubenswrapper[28766]: I0318 09:04:16.470100 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4-os-release\") pod \"multus-bpf5c\" (UID: \"fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4\") " pod="openshift-multus/multus-bpf5c" Mar 18 09:04:16.471312 master-0 kubenswrapper[28766]: I0318 09:04:16.470094 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2207df9e-f21e-4c30-98d5-248ae99c245e-host-cni-netd\") pod \"ovnkube-node-cxws9\" (UID: \"2207df9e-f21e-4c30-98d5-248ae99c245e\") " pod="openshift-ovn-kubernetes/ovnkube-node-cxws9" Mar 18 
09:04:16.471312 master-0 kubenswrapper[28766]: I0318 09:04:16.470143 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2207df9e-f21e-4c30-98d5-248ae99c245e-etc-openvswitch\") pod \"ovnkube-node-cxws9\" (UID: \"2207df9e-f21e-4c30-98d5-248ae99c245e\") " pod="openshift-ovn-kubernetes/ovnkube-node-cxws9" Mar 18 09:04:16.471312 master-0 kubenswrapper[28766]: I0318 09:04:16.470163 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4-cnibin\") pod \"multus-bpf5c\" (UID: \"fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4\") " pod="openshift-multus/multus-bpf5c" Mar 18 09:04:16.471312 master-0 kubenswrapper[28766]: I0318 09:04:16.470266 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/8d89af2f-47f5-4ee5-a790-e162c2dee3ce-etc-cvo-updatepayloads\") pod \"cluster-version-operator-7d58488df-8btcx\" (UID: \"8d89af2f-47f5-4ee5-a790-e162c2dee3ce\") " pod="openshift-cluster-version/cluster-version-operator-7d58488df-8btcx" Mar 18 09:04:16.471312 master-0 kubenswrapper[28766]: I0318 09:04:16.470407 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/33a5c021-23c3-4a97-b5f3-77fd6dcba1ab-etc-containers\") pod \"operator-controller-controller-manager-57777556ff-chjqr\" (UID: \"33a5c021-23c3-4a97-b5f3-77fd6dcba1ab\") " pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-chjqr" Mar 18 09:04:16.471312 master-0 kubenswrapper[28766]: I0318 09:04:16.470407 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/4146a62d-e37b-4295-90ca-b23f5e3d1112-node-exporter-textfile\") pod \"node-exporter-75szk\" (UID: 
\"4146a62d-e37b-4295-90ca-b23f5e3d1112\") " pod="openshift-monitoring/node-exporter-75szk" Mar 18 09:04:16.471312 master-0 kubenswrapper[28766]: I0318 09:04:16.470452 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/2207df9e-f21e-4c30-98d5-248ae99c245e-node-log\") pod \"ovnkube-node-cxws9\" (UID: \"2207df9e-f21e-4c30-98d5-248ae99c245e\") " pod="openshift-ovn-kubernetes/ovnkube-node-cxws9" Mar 18 09:04:16.471312 master-0 kubenswrapper[28766]: I0318 09:04:16.470583 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f826efe0-60a1-4465-b8d0-d4069ed507a1-host\") pod \"tuned-zzqc6\" (UID: \"f826efe0-60a1-4465-b8d0-d4069ed507a1\") " pod="openshift-cluster-node-tuning-operator/tuned-zzqc6" Mar 18 09:04:16.471312 master-0 kubenswrapper[28766]: I0318 09:04:16.471050 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/2207df9e-f21e-4c30-98d5-248ae99c245e-host-run-netns\") pod \"ovnkube-node-cxws9\" (UID: \"2207df9e-f21e-4c30-98d5-248ae99c245e\") " pod="openshift-ovn-kubernetes/ovnkube-node-cxws9" Mar 18 09:04:16.471312 master-0 kubenswrapper[28766]: I0318 09:04:16.471091 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4-host-var-lib-cni-multus\") pod \"multus-bpf5c\" (UID: \"fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4\") " pod="openshift-multus/multus-bpf5c" Mar 18 09:04:16.471312 master-0 kubenswrapper[28766]: I0318 09:04:16.471112 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4-hostroot\") pod \"multus-bpf5c\" (UID: \"fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4\") " pod="openshift-multus/multus-bpf5c" 
Mar 18 09:04:16.471312 master-0 kubenswrapper[28766]: I0318 09:04:16.471095 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/ccf74af5-d4fd-4ed3-9784-42397ea798c5-host-etc-kube\") pod \"cluster-cloud-controller-manager-operator-7dff898856-9xtls\" (UID: \"ccf74af5-d4fd-4ed3-9784-42397ea798c5\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-9xtls" Mar 18 09:04:16.471312 master-0 kubenswrapper[28766]: I0318 09:04:16.471161 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/2207df9e-f21e-4c30-98d5-248ae99c245e-host-run-netns\") pod \"ovnkube-node-cxws9\" (UID: \"2207df9e-f21e-4c30-98d5-248ae99c245e\") " pod="openshift-ovn-kubernetes/ovnkube-node-cxws9" Mar 18 09:04:16.471312 master-0 kubenswrapper[28766]: I0318 09:04:16.471189 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4-host-var-lib-cni-multus\") pod \"multus-bpf5c\" (UID: \"fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4\") " pod="openshift-multus/multus-bpf5c" Mar 18 09:04:16.471312 master-0 kubenswrapper[28766]: I0318 09:04:16.471219 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4-hostroot\") pod \"multus-bpf5c\" (UID: \"fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4\") " pod="openshift-multus/multus-bpf5c" Mar 18 09:04:16.471312 master-0 kubenswrapper[28766]: I0318 09:04:16.471274 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/f9fa104a-4979-4023-8d7e-a965f11bc7db-cnibin\") pod \"multus-additional-cni-plugins-xpzrz\" (UID: \"f9fa104a-4979-4023-8d7e-a965f11bc7db\") " 
pod="openshift-multus/multus-additional-cni-plugins-xpzrz" Mar 18 09:04:16.471312 master-0 kubenswrapper[28766]: I0318 09:04:16.471307 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4r7hx\" (UniqueName: \"kubernetes.io/projected/4146a62d-e37b-4295-90ca-b23f5e3d1112-kube-api-access-4r7hx\") pod \"node-exporter-75szk\" (UID: \"4146a62d-e37b-4295-90ca-b23f5e3d1112\") " pod="openshift-monitoring/node-exporter-75szk" Mar 18 09:04:16.471312 master-0 kubenswrapper[28766]: I0318 09:04:16.471360 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/f9fa104a-4979-4023-8d7e-a965f11bc7db-cnibin\") pod \"multus-additional-cni-plugins-xpzrz\" (UID: \"f9fa104a-4979-4023-8d7e-a965f11bc7db\") " pod="openshift-multus/multus-additional-cni-plugins-xpzrz" Mar 18 09:04:16.472504 master-0 kubenswrapper[28766]: I0318 09:04:16.471438 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4-system-cni-dir\") pod \"multus-bpf5c\" (UID: \"fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4\") " pod="openshift-multus/multus-bpf5c" Mar 18 09:04:16.472504 master-0 kubenswrapper[28766]: I0318 09:04:16.471497 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/4146a62d-e37b-4295-90ca-b23f5e3d1112-node-exporter-tls\") pod \"node-exporter-75szk\" (UID: \"4146a62d-e37b-4295-90ca-b23f5e3d1112\") " pod="openshift-monitoring/node-exporter-75szk" Mar 18 09:04:16.472504 master-0 kubenswrapper[28766]: I0318 09:04:16.471566 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ks4jl\" (UniqueName: \"kubernetes.io/projected/e0bb044f-5a4e-4981-8084-91348ce1a56a-kube-api-access-ks4jl\") pod \"multus-admission-controller-58c9f8fc64-zgrts\" (UID: 
\"e0bb044f-5a4e-4981-8084-91348ce1a56a\") " pod="openshift-multus/multus-admission-controller-58c9f8fc64-zgrts" Mar 18 09:04:16.472504 master-0 kubenswrapper[28766]: I0318 09:04:16.471691 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4-multus-socket-dir-parent\") pod \"multus-bpf5c\" (UID: \"fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4\") " pod="openshift-multus/multus-bpf5c" Mar 18 09:04:16.472504 master-0 kubenswrapper[28766]: I0318 09:04:16.471755 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/f9fa104a-4979-4023-8d7e-a965f11bc7db-os-release\") pod \"multus-additional-cni-plugins-xpzrz\" (UID: \"f9fa104a-4979-4023-8d7e-a965f11bc7db\") " pod="openshift-multus/multus-additional-cni-plugins-xpzrz" Mar 18 09:04:16.472504 master-0 kubenswrapper[28766]: I0318 09:04:16.471823 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/4146a62d-e37b-4295-90ca-b23f5e3d1112-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-75szk\" (UID: \"4146a62d-e37b-4295-90ca-b23f5e3d1112\") " pod="openshift-monitoring/node-exporter-75szk" Mar 18 09:04:16.472504 master-0 kubenswrapper[28766]: I0318 09:04:16.471912 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/f9fa104a-4979-4023-8d7e-a965f11bc7db-tuning-conf-dir\") pod \"multus-additional-cni-plugins-xpzrz\" (UID: \"f9fa104a-4979-4023-8d7e-a965f11bc7db\") " pod="openshift-multus/multus-additional-cni-plugins-xpzrz" Mar 18 09:04:16.472504 master-0 kubenswrapper[28766]: I0318 09:04:16.471982 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: 
\"kubernetes.io/host-path/fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4-system-cni-dir\") pod \"multus-bpf5c\" (UID: \"fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4\") " pod="openshift-multus/multus-bpf5c" Mar 18 09:04:16.472504 master-0 kubenswrapper[28766]: I0318 09:04:16.472012 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/f9fa104a-4979-4023-8d7e-a965f11bc7db-tuning-conf-dir\") pod \"multus-additional-cni-plugins-xpzrz\" (UID: \"f9fa104a-4979-4023-8d7e-a965f11bc7db\") " pod="openshift-multus/multus-additional-cni-plugins-xpzrz" Mar 18 09:04:16.472504 master-0 kubenswrapper[28766]: I0318 09:04:16.472022 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4-multus-socket-dir-parent\") pod \"multus-bpf5c\" (UID: \"fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4\") " pod="openshift-multus/multus-bpf5c" Mar 18 09:04:16.472504 master-0 kubenswrapper[28766]: I0318 09:04:16.472103 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/f9fa104a-4979-4023-8d7e-a965f11bc7db-os-release\") pod \"multus-additional-cni-plugins-xpzrz\" (UID: \"f9fa104a-4979-4023-8d7e-a965f11bc7db\") " pod="openshift-multus/multus-additional-cni-plugins-xpzrz" Mar 18 09:04:16.472504 master-0 kubenswrapper[28766]: I0318 09:04:16.472365 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2207df9e-f21e-4c30-98d5-248ae99c245e-var-lib-openvswitch\") pod \"ovnkube-node-cxws9\" (UID: \"2207df9e-f21e-4c30-98d5-248ae99c245e\") " pod="openshift-ovn-kubernetes/ovnkube-node-cxws9" Mar 18 09:04:16.472504 master-0 kubenswrapper[28766]: I0318 09:04:16.472390 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/06cbd48a-1f1d-4734-8d57-e1b6824879b6-openshift-state-metrics-tls\") pod \"openshift-state-metrics-5dc6c74576-dsq5f\" (UID: \"06cbd48a-1f1d-4734-8d57-e1b6824879b6\") " pod="openshift-monitoring/openshift-state-metrics-5dc6c74576-dsq5f" Mar 18 09:04:16.472504 master-0 kubenswrapper[28766]: I0318 09:04:16.472449 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/06cbd48a-1f1d-4734-8d57-e1b6824879b6-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-5dc6c74576-dsq5f\" (UID: \"06cbd48a-1f1d-4734-8d57-e1b6824879b6\") " pod="openshift-monitoring/openshift-state-metrics-5dc6c74576-dsq5f" Mar 18 09:04:16.472504 master-0 kubenswrapper[28766]: I0318 09:04:16.472462 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2207df9e-f21e-4c30-98d5-248ae99c245e-var-lib-openvswitch\") pod \"ovnkube-node-cxws9\" (UID: \"2207df9e-f21e-4c30-98d5-248ae99c245e\") " pod="openshift-ovn-kubernetes/ovnkube-node-cxws9" Mar 18 09:04:16.472504 master-0 kubenswrapper[28766]: I0318 09:04:16.472472 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/e0bb044f-5a4e-4981-8084-91348ce1a56a-webhook-certs\") pod \"multus-admission-controller-58c9f8fc64-zgrts\" (UID: \"e0bb044f-5a4e-4981-8084-91348ce1a56a\") " pod="openshift-multus/multus-admission-controller-58c9f8fc64-zgrts" Mar 18 09:04:16.473074 master-0 kubenswrapper[28766]: I0318 09:04:16.472572 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4-host-var-lib-cni-bin\") pod \"multus-bpf5c\" (UID: \"fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4\") " 
pod="openshift-multus/multus-bpf5c" Mar 18 09:04:16.473074 master-0 kubenswrapper[28766]: I0318 09:04:16.472608 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/5320a1da-262a-4b1b-93b4-1df9d4c26eec-secret-metrics-client-certs\") pod \"metrics-server-59f88c66c8-z4c2f\" (UID: \"5320a1da-262a-4b1b-93b4-1df9d4c26eec\") " pod="openshift-monitoring/metrics-server-59f88c66c8-z4c2f" Mar 18 09:04:16.473074 master-0 kubenswrapper[28766]: I0318 09:04:16.472649 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/866c259c-7661-4a80-873b-6fd625218665-host-slash\") pod \"iptables-alerter-9mkgd\" (UID: \"866c259c-7661-4a80-873b-6fd625218665\") " pod="openshift-network-operator/iptables-alerter-9mkgd" Mar 18 09:04:16.473074 master-0 kubenswrapper[28766]: I0318 09:04:16.472671 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4-host-var-lib-cni-bin\") pod \"multus-bpf5c\" (UID: \"fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4\") " pod="openshift-multus/multus-bpf5c" Mar 18 09:04:16.473074 master-0 kubenswrapper[28766]: I0318 09:04:16.472695 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/866c259c-7661-4a80-873b-6fd625218665-host-slash\") pod \"iptables-alerter-9mkgd\" (UID: \"866c259c-7661-4a80-873b-6fd625218665\") " pod="openshift-network-operator/iptables-alerter-9mkgd" Mar 18 09:04:16.473074 master-0 kubenswrapper[28766]: I0318 09:04:16.472872 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2207df9e-f21e-4c30-98d5-248ae99c245e-host-var-lib-cni-networks-ovn-kubernetes\") pod 
\"ovnkube-node-cxws9\" (UID: \"2207df9e-f21e-4c30-98d5-248ae99c245e\") " pod="openshift-ovn-kubernetes/ovnkube-node-cxws9" Mar 18 09:04:16.473074 master-0 kubenswrapper[28766]: I0318 09:04:16.472968 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2207df9e-f21e-4c30-98d5-248ae99c245e-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-cxws9\" (UID: \"2207df9e-f21e-4c30-98d5-248ae99c245e\") " pod="openshift-ovn-kubernetes/ovnkube-node-cxws9" Mar 18 09:04:16.473708 master-0 kubenswrapper[28766]: I0318 09:04:16.473027 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ttnk9\" (UniqueName: \"kubernetes.io/projected/d0272f7c-bedc-44cf-9790-88e10e6dda03-kube-api-access-ttnk9\") pod \"ingress-canary-mpw9b\" (UID: \"d0272f7c-bedc-44cf-9790-88e10e6dda03\") " pod="openshift-ingress-canary/ingress-canary-mpw9b" Mar 18 09:04:16.473813 master-0 kubenswrapper[28766]: I0318 09:04:16.473781 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-sysconfig\" (UniqueName: \"kubernetes.io/host-path/f826efe0-60a1-4465-b8d0-d4069ed507a1-etc-sysconfig\") pod \"tuned-zzqc6\" (UID: \"f826efe0-60a1-4465-b8d0-d4069ed507a1\") " pod="openshift-cluster-node-tuning-operator/tuned-zzqc6" Mar 18 09:04:16.473973 master-0 kubenswrapper[28766]: I0318 09:04:16.473925 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4-host-run-netns\") pod \"multus-bpf5c\" (UID: \"fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4\") " pod="openshift-multus/multus-bpf5c" Mar 18 09:04:16.474040 master-0 kubenswrapper[28766]: I0318 09:04:16.474026 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/f826efe0-60a1-4465-b8d0-d4069ed507a1-lib-modules\") pod \"tuned-zzqc6\" (UID: \"f826efe0-60a1-4465-b8d0-d4069ed507a1\") " pod="openshift-cluster-node-tuning-operator/tuned-zzqc6" Mar 18 09:04:16.474099 master-0 kubenswrapper[28766]: I0318 09:04:16.474048 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/f826efe0-60a1-4465-b8d0-d4069ed507a1-var-lib-kubelet\") pod \"tuned-zzqc6\" (UID: \"f826efe0-60a1-4465-b8d0-d4069ed507a1\") " pod="openshift-cluster-node-tuning-operator/tuned-zzqc6" Mar 18 09:04:16.474140 master-0 kubenswrapper[28766]: I0318 09:04:16.474104 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e0d127be-2d13-449b-915b-2d49052baf02-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"e0d127be-2d13-449b-915b-2d49052baf02\") " pod="openshift-kube-apiserver/installer-3-master-0" Mar 18 09:04:16.474294 master-0 kubenswrapper[28766]: I0318 09:04:16.474160 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/68465463-5f2a-4e74-9c34-2706a185f7ea-hosts-file\") pod \"node-resolver-zwl77\" (UID: \"68465463-5f2a-4e74-9c34-2706a185f7ea\") " pod="openshift-dns/node-resolver-zwl77" Mar 18 09:04:16.474294 master-0 kubenswrapper[28766]: I0318 09:04:16.474199 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/e5ae1886-f90c-49f4-bf08-055b55dd785a-metrics-client-ca\") pod \"telemeter-client-5d4d5995f-s5dw8\" (UID: \"e5ae1886-f90c-49f4-bf08-055b55dd785a\") " pod="openshift-monitoring/telemeter-client-5d4d5995f-s5dw8" Mar 18 09:04:16.474294 master-0 kubenswrapper[28766]: I0318 09:04:16.474269 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" 
(UniqueName: \"kubernetes.io/host-path/68465463-5f2a-4e74-9c34-2706a185f7ea-hosts-file\") pod \"node-resolver-zwl77\" (UID: \"68465463-5f2a-4e74-9c34-2706a185f7ea\") " pod="openshift-dns/node-resolver-zwl77" Mar 18 09:04:16.474294 master-0 kubenswrapper[28766]: I0318 09:04:16.474275 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-sysconfig\" (UniqueName: \"kubernetes.io/host-path/f826efe0-60a1-4465-b8d0-d4069ed507a1-etc-sysconfig\") pod \"tuned-zzqc6\" (UID: \"f826efe0-60a1-4465-b8d0-d4069ed507a1\") " pod="openshift-cluster-node-tuning-operator/tuned-zzqc6" Mar 18 09:04:16.474470 master-0 kubenswrapper[28766]: I0318 09:04:16.474329 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/4146a62d-e37b-4295-90ca-b23f5e3d1112-metrics-client-ca\") pod \"node-exporter-75szk\" (UID: \"4146a62d-e37b-4295-90ca-b23f5e3d1112\") " pod="openshift-monitoring/node-exporter-75szk" Mar 18 09:04:16.474470 master-0 kubenswrapper[28766]: I0318 09:04:16.474351 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f826efe0-60a1-4465-b8d0-d4069ed507a1-lib-modules\") pod \"tuned-zzqc6\" (UID: \"f826efe0-60a1-4465-b8d0-d4069ed507a1\") " pod="openshift-cluster-node-tuning-operator/tuned-zzqc6" Mar 18 09:04:16.474470 master-0 kubenswrapper[28766]: I0318 09:04:16.474279 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4-host-run-netns\") pod \"multus-bpf5c\" (UID: \"fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4\") " pod="openshift-multus/multus-bpf5c" Mar 18 09:04:16.474470 master-0 kubenswrapper[28766]: I0318 09:04:16.474436 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: 
\"kubernetes.io/host-path/f826efe0-60a1-4465-b8d0-d4069ed507a1-var-lib-kubelet\") pod \"tuned-zzqc6\" (UID: \"f826efe0-60a1-4465-b8d0-d4069ed507a1\") " pod="openshift-cluster-node-tuning-operator/tuned-zzqc6" Mar 18 09:04:16.474470 master-0 kubenswrapper[28766]: I0318 09:04:16.474449 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/43fbd379-dd1e-4287-bd76-fd3ec51cde43-etc-containers\") pod \"catalogd-controller-manager-6864dc98f7-phjp8\" (UID: \"43fbd379-dd1e-4287-bd76-fd3ec51cde43\") " pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-phjp8" Mar 18 09:04:16.474688 master-0 kubenswrapper[28766]: I0318 09:04:16.474491 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/33a5c021-23c3-4a97-b5f3-77fd6dcba1ab-etc-docker\") pod \"operator-controller-controller-manager-57777556ff-chjqr\" (UID: \"33a5c021-23c3-4a97-b5f3-77fd6dcba1ab\") " pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-chjqr" Mar 18 09:04:16.474688 master-0 kubenswrapper[28766]: I0318 09:04:16.474569 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/43fbd379-dd1e-4287-bd76-fd3ec51cde43-etc-containers\") pod \"catalogd-controller-manager-6864dc98f7-phjp8\" (UID: \"43fbd379-dd1e-4287-bd76-fd3ec51cde43\") " pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-phjp8" Mar 18 09:04:16.474776 master-0 kubenswrapper[28766]: I0318 09:04:16.474741 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/33a5c021-23c3-4a97-b5f3-77fd6dcba1ab-etc-docker\") pod \"operator-controller-controller-manager-57777556ff-chjqr\" (UID: \"33a5c021-23c3-4a97-b5f3-77fd6dcba1ab\") " 
pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-chjqr" Mar 18 09:04:16.475065 master-0 kubenswrapper[28766]: I0318 09:04:16.475011 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Mar 18 09:04:16.519880 master-0 kubenswrapper[28766]: I0318 09:04:16.513911 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"catalogd-trusted-ca-bundle" Mar 18 09:04:16.546112 master-0 kubenswrapper[28766]: I0318 09:04:16.546041 28766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Mar 18 09:04:16.546372 master-0 kubenswrapper[28766]: I0318 09:04:16.546248 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"openshift-service-ca.crt" Mar 18 09:04:16.552630 master-0 kubenswrapper[28766]: I0318 09:04:16.552577 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/fa8f1797-0219-49fe-82b5-7416cc481c3a-signing-key\") pod \"service-ca-79bc6b8d76-5jj7d\" (UID: \"fa8f1797-0219-49fe-82b5-7416cc481c3a\") " pod="openshift-service-ca/service-ca-79bc6b8d76-5jj7d" Mar 18 09:04:16.554605 master-0 kubenswrapper[28766]: I0318 09:04:16.554554 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Mar 18 09:04:16.554702 master-0 kubenswrapper[28766]: I0318 09:04:16.554627 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/43fbd379-dd1e-4287-bd76-fd3ec51cde43-ca-certs\") pod \"catalogd-controller-manager-6864dc98f7-phjp8\" (UID: \"43fbd379-dd1e-4287-bd76-fd3ec51cde43\") " pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-phjp8" Mar 18 09:04:16.569003 master-0 kubenswrapper[28766]: I0318 09:04:16.568932 28766 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/fa8f1797-0219-49fe-82b5-7416cc481c3a-signing-cabundle\") pod \"service-ca-79bc6b8d76-5jj7d\" (UID: \"fa8f1797-0219-49fe-82b5-7416cc481c3a\") " pod="openshift-service-ca/service-ca-79bc6b8d76-5jj7d" Mar 18 09:04:16.575460 master-0 kubenswrapper[28766]: I0318 09:04:16.575419 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Mar 18 09:04:16.576283 master-0 kubenswrapper[28766]: I0318 09:04:16.576234 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e0d127be-2d13-449b-915b-2d49052baf02-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"e0d127be-2d13-449b-915b-2d49052baf02\") " pod="openshift-kube-apiserver/installer-3-master-0" Mar 18 09:04:16.576483 master-0 kubenswrapper[28766]: I0318 09:04:16.576440 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e0d127be-2d13-449b-915b-2d49052baf02-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"e0d127be-2d13-449b-915b-2d49052baf02\") " pod="openshift-kube-apiserver/installer-3-master-0" Mar 18 09:04:16.577382 master-0 kubenswrapper[28766]: I0318 09:04:16.577120 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/4146a62d-e37b-4295-90ca-b23f5e3d1112-node-exporter-wtmp\") pod \"node-exporter-75szk\" (UID: \"4146a62d-e37b-4295-90ca-b23f5e3d1112\") " pod="openshift-monitoring/node-exporter-75szk" Mar 18 09:04:16.577382 master-0 kubenswrapper[28766]: I0318 09:04:16.577253 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/4146a62d-e37b-4295-90ca-b23f5e3d1112-node-exporter-wtmp\") pod \"node-exporter-75szk\" (UID: 
\"4146a62d-e37b-4295-90ca-b23f5e3d1112\") " pod="openshift-monitoring/node-exporter-75szk" Mar 18 09:04:16.577382 master-0 kubenswrapper[28766]: I0318 09:04:16.577351 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/e0d127be-2d13-449b-915b-2d49052baf02-var-lock\") pod \"installer-3-master-0\" (UID: \"e0d127be-2d13-449b-915b-2d49052baf02\") " pod="openshift-kube-apiserver/installer-3-master-0" Mar 18 09:04:16.577502 master-0 kubenswrapper[28766]: I0318 09:04:16.577436 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/e0d127be-2d13-449b-915b-2d49052baf02-var-lock\") pod \"installer-3-master-0\" (UID: \"e0d127be-2d13-449b-915b-2d49052baf02\") " pod="openshift-kube-apiserver/installer-3-master-0" Mar 18 09:04:16.578002 master-0 kubenswrapper[28766]: I0318 09:04:16.577964 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/4146a62d-e37b-4295-90ca-b23f5e3d1112-sys\") pod \"node-exporter-75szk\" (UID: \"4146a62d-e37b-4295-90ca-b23f5e3d1112\") " pod="openshift-monitoring/node-exporter-75szk" Mar 18 09:04:16.578124 master-0 kubenswrapper[28766]: I0318 09:04:16.577963 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/4146a62d-e37b-4295-90ca-b23f5e3d1112-sys\") pod \"node-exporter-75szk\" (UID: \"4146a62d-e37b-4295-90ca-b23f5e3d1112\") " pod="openshift-monitoring/node-exporter-75szk" Mar 18 09:04:16.578185 master-0 kubenswrapper[28766]: I0318 09:04:16.578158 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/4146a62d-e37b-4295-90ca-b23f5e3d1112-root\") pod \"node-exporter-75szk\" (UID: \"4146a62d-e37b-4295-90ca-b23f5e3d1112\") " pod="openshift-monitoring/node-exporter-75szk" Mar 18 
09:04:16.578237 master-0 kubenswrapper[28766]: I0318 09:04:16.578208 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"root\" (UniqueName: \"kubernetes.io/host-path/4146a62d-e37b-4295-90ca-b23f5e3d1112-root\") pod \"node-exporter-75szk\" (UID: \"4146a62d-e37b-4295-90ca-b23f5e3d1112\") " pod="openshift-monitoring/node-exporter-75szk" Mar 18 09:04:16.594457 master-0 kubenswrapper[28766]: I0318 09:04:16.594339 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Mar 18 09:04:16.601368 master-0 kubenswrapper[28766]: I0318 09:04:16.601326 28766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_b45ea2ef1cf2bc9d1d994d6538ae0a64/kube-apiserver-check-endpoints/0.log" Mar 18 09:04:16.603677 master-0 kubenswrapper[28766]: I0318 09:04:16.603630 28766 generic.go:334] "Generic (PLEG): container finished" podID="b45ea2ef1cf2bc9d1d994d6538ae0a64" containerID="9e39226f66d3647b6d3e60dfa41a65af602b2c0ac717809011f105e2b66ccbc2" exitCode=255 Mar 18 09:04:16.603774 master-0 kubenswrapper[28766]: I0318 09:04:16.603714 28766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"b45ea2ef1cf2bc9d1d994d6538ae0a64","Type":"ContainerDied","Data":"9e39226f66d3647b6d3e60dfa41a65af602b2c0ac717809011f105e2b66ccbc2"} Mar 18 09:04:16.617027 master-0 kubenswrapper[28766]: I0318 09:04:16.616966 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Mar 18 09:04:16.636108 master-0 kubenswrapper[28766]: I0318 09:04:16.635976 28766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Mar 18 09:04:16.640779 master-0 kubenswrapper[28766]: I0318 09:04:16.640728 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: 
\"kubernetes.io/secret/2700f537-8f31-4380-a527-3e697a8122cc-encryption-config\") pod \"apiserver-556c8fbcff-5shs8\" (UID: \"2700f537-8f31-4380-a527-3e697a8122cc\") " pod="openshift-oauth-apiserver/apiserver-556c8fbcff-5shs8" Mar 18 09:04:16.655686 master-0 kubenswrapper[28766]: I0318 09:04:16.655393 28766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Mar 18 09:04:16.662350 master-0 kubenswrapper[28766]: I0318 09:04:16.662297 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/2700f537-8f31-4380-a527-3e697a8122cc-etcd-client\") pod \"apiserver-556c8fbcff-5shs8\" (UID: \"2700f537-8f31-4380-a527-3e697a8122cc\") " pod="openshift-oauth-apiserver/apiserver-556c8fbcff-5shs8" Mar 18 09:04:16.680574 master-0 kubenswrapper[28766]: I0318 09:04:16.680521 28766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Mar 18 09:04:16.686133 master-0 kubenswrapper[28766]: I0318 09:04:16.686098 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2700f537-8f31-4380-a527-3e697a8122cc-serving-cert\") pod \"apiserver-556c8fbcff-5shs8\" (UID: \"2700f537-8f31-4380-a527-3e697a8122cc\") " pod="openshift-oauth-apiserver/apiserver-556c8fbcff-5shs8" Mar 18 09:04:16.706509 master-0 kubenswrapper[28766]: I0318 09:04:16.698444 28766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Mar 18 09:04:16.706509 master-0 kubenswrapper[28766]: I0318 09:04:16.706474 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8d89af2f-47f5-4ee5-a790-e162c2dee3ce-serving-cert\") pod \"cluster-version-operator-7d58488df-8btcx\" (UID: \"8d89af2f-47f5-4ee5-a790-e162c2dee3ce\") " 
pod="openshift-cluster-version/cluster-version-operator-7d58488df-8btcx" Mar 18 09:04:16.718567 master-0 kubenswrapper[28766]: I0318 09:04:16.718297 28766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Mar 18 09:04:16.727155 master-0 kubenswrapper[28766]: I0318 09:04:16.727101 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4cc14de2-59e8-49e1-9eeb-f87c5e9d8a75-serving-cert\") pod \"controller-manager-6448dc88d8-cnd9q\" (UID: \"4cc14de2-59e8-49e1-9eeb-f87c5e9d8a75\") " pod="openshift-controller-manager/controller-manager-6448dc88d8-cnd9q" Mar 18 09:04:16.736841 master-0 kubenswrapper[28766]: I0318 09:04:16.736778 28766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Mar 18 09:04:16.748559 master-0 kubenswrapper[28766]: I0318 09:04:16.748517 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/04e23989-853e-4b49-ba0f-1961d64ae3a3-serving-cert\") pod \"route-controller-manager-75749f878-qxnvp\" (UID: \"04e23989-853e-4b49-ba0f-1961d64ae3a3\") " pod="openshift-route-controller-manager/route-controller-manager-75749f878-qxnvp" Mar 18 09:04:16.762107 master-0 kubenswrapper[28766]: I0318 09:04:16.760390 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Mar 18 09:04:16.763263 master-0 kubenswrapper[28766]: I0318 09:04:16.763229 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/2700f537-8f31-4380-a527-3e697a8122cc-audit-policies\") pod \"apiserver-556c8fbcff-5shs8\" (UID: \"2700f537-8f31-4380-a527-3e697a8122cc\") " pod="openshift-oauth-apiserver/apiserver-556c8fbcff-5shs8" Mar 18 09:04:16.779491 master-0 kubenswrapper[28766]: I0318 09:04:16.779440 28766 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"operator-controller-trusted-ca-bundle"
Mar 18 09:04:16.797685 master-0 kubenswrapper[28766]: I0318 09:04:16.797612 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"openshift-service-ca.crt"
Mar 18 09:04:16.799522 master-0 kubenswrapper[28766]: I0318 09:04:16.799483 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/33a5c021-23c3-4a97-b5f3-77fd6dcba1ab-ca-certs\") pod \"operator-controller-controller-manager-57777556ff-chjqr\" (UID: \"33a5c021-23c3-4a97-b5f3-77fd6dcba1ab\") " pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-chjqr"
Mar 18 09:04:16.817726 master-0 kubenswrapper[28766]: I0318 09:04:16.817497 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"kube-root-ca.crt"
Mar 18 09:04:16.839975 master-0 kubenswrapper[28766]: I0318 09:04:16.835933 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle"
Mar 18 09:04:16.839975 master-0 kubenswrapper[28766]: I0318 09:04:16.838467 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2700f537-8f31-4380-a527-3e697a8122cc-trusted-ca-bundle\") pod \"apiserver-556c8fbcff-5shs8\" (UID: \"2700f537-8f31-4380-a527-3e697a8122cc\") " pod="openshift-oauth-apiserver/apiserver-556c8fbcff-5shs8"
Mar 18 09:04:16.861782 master-0 kubenswrapper[28766]: I0318 09:04:16.861691 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca"
Mar 18 09:04:16.863312 master-0 kubenswrapper[28766]: I0318 09:04:16.863248 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/2700f537-8f31-4380-a527-3e697a8122cc-etcd-serving-ca\") pod \"apiserver-556c8fbcff-5shs8\" (UID: \"2700f537-8f31-4380-a527-3e697a8122cc\") " pod="openshift-oauth-apiserver/apiserver-556c8fbcff-5shs8"
Mar 18 09:04:16.874336 master-0 kubenswrapper[28766]: I0318 09:04:16.874019 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt"
Mar 18 09:04:16.895220 master-0 kubenswrapper[28766]: I0318 09:04:16.895170 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt"
Mar 18 09:04:16.915110 master-0 kubenswrapper[28766]: I0318 09:04:16.915065 28766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls"
Mar 18 09:04:16.921979 master-0 kubenswrapper[28766]: I0318 09:04:16.921818 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/d6fe8ee6-737e-438a-8d9d-1ec712f6bacf-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-6f97756bc8-z9n9c\" (UID: \"d6fe8ee6-737e-438a-8d9d-1ec712f6bacf\") " pod="openshift-machine-api/control-plane-machine-set-operator-6f97756bc8-z9n9c"
Mar 18 09:04:16.931388 master-0 kubenswrapper[28766]: I0318 09:04:16.931357 28766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-3-master-0"
Mar 18 09:04:16.935550 master-0 kubenswrapper[28766]: I0318 09:04:16.935522 28766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-admission-webhook-tls"
Mar 18 09:04:16.942113 master-0 kubenswrapper[28766]: I0318 09:04:16.942078 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/998cabe9-d479-439f-b1c0-1d8c49aefeb9-tls-certificates\") pod \"prometheus-operator-admission-webhook-69c6b55594-wkgdb\" (UID: \"998cabe9-d479-439f-b1c0-1d8c49aefeb9\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-69c6b55594-wkgdb"
Mar 18 09:04:16.954991 master-0 kubenswrapper[28766]: I0318 09:04:16.954952 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt"
Mar 18 09:04:16.963482 master-0 kubenswrapper[28766]: I0318 09:04:16.963374 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/8d89af2f-47f5-4ee5-a790-e162c2dee3ce-service-ca\") pod \"cluster-version-operator-7d58488df-8btcx\" (UID: \"8d89af2f-47f5-4ee5-a790-e162c2dee3ce\") " pod="openshift-cluster-version/cluster-version-operator-7d58488df-8btcx"
Mar 18 09:04:16.982831 master-0 kubenswrapper[28766]: I0318 09:04:16.976875 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt"
Mar 18 09:04:16.999445 master-0 kubenswrapper[28766]: I0318 09:04:16.997073 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Mar 18 09:04:17.002462 master-0 kubenswrapper[28766]: I0318 09:04:17.002393 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4cc14de2-59e8-49e1-9eeb-f87c5e9d8a75-client-ca\") pod \"controller-manager-6448dc88d8-cnd9q\" (UID: \"4cc14de2-59e8-49e1-9eeb-f87c5e9d8a75\") " pod="openshift-controller-manager/controller-manager-6448dc88d8-cnd9q"
Mar 18 09:04:17.014901 master-0 kubenswrapper[28766]: I0318 09:04:17.014748 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Mar 18 09:04:17.021898 master-0 kubenswrapper[28766]: I0318 09:04:17.021836 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4cc14de2-59e8-49e1-9eeb-f87c5e9d8a75-config\") pod \"controller-manager-6448dc88d8-cnd9q\" (UID: \"4cc14de2-59e8-49e1-9eeb-f87c5e9d8a75\") " pod="openshift-controller-manager/controller-manager-6448dc88d8-cnd9q"
Mar 18 09:04:17.036900 master-0 kubenswrapper[28766]: I0318 09:04:17.035586 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Mar 18 09:04:17.055379 master-0 kubenswrapper[28766]: I0318 09:04:17.055294 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Mar 18 09:04:17.079076 master-0 kubenswrapper[28766]: I0318 09:04:17.078996 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt"
Mar 18 09:04:17.099405 master-0 kubenswrapper[28766]: I0318 09:04:17.097195 28766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/e0d127be-2d13-449b-915b-2d49052baf02-var-lock\") pod \"e0d127be-2d13-449b-915b-2d49052baf02\" (UID: \"e0d127be-2d13-449b-915b-2d49052baf02\") "
Mar 18 09:04:17.099405 master-0 kubenswrapper[28766]: I0318 09:04:17.097365 28766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e0d127be-2d13-449b-915b-2d49052baf02-var-lock" (OuterVolumeSpecName: "var-lock") pod "e0d127be-2d13-449b-915b-2d49052baf02" (UID: "e0d127be-2d13-449b-915b-2d49052baf02"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 18 09:04:17.099405 master-0 kubenswrapper[28766]: I0318 09:04:17.097482 28766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e0d127be-2d13-449b-915b-2d49052baf02-kubelet-dir\") pod \"e0d127be-2d13-449b-915b-2d49052baf02\" (UID: \"e0d127be-2d13-449b-915b-2d49052baf02\") "
Mar 18 09:04:17.099405 master-0 kubenswrapper[28766]: I0318 09:04:17.098514 28766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e0d127be-2d13-449b-915b-2d49052baf02-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "e0d127be-2d13-449b-915b-2d49052baf02" (UID: "e0d127be-2d13-449b-915b-2d49052baf02"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 18 09:04:17.099405 master-0 kubenswrapper[28766]: I0318 09:04:17.099201 28766 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/e0d127be-2d13-449b-915b-2d49052baf02-var-lock\") on node \"master-0\" DevicePath \"\""
Mar 18 09:04:17.099405 master-0 kubenswrapper[28766]: I0318 09:04:17.099225 28766 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e0d127be-2d13-449b-915b-2d49052baf02-kubelet-dir\") on node \"master-0\" DevicePath \"\""
Mar 18 09:04:17.101593 master-0 kubenswrapper[28766]: I0318 09:04:17.101553 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Mar 18 09:04:17.110629 master-0 kubenswrapper[28766]: I0318 09:04:17.110580 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4cc14de2-59e8-49e1-9eeb-f87c5e9d8a75-proxy-ca-bundles\") pod \"controller-manager-6448dc88d8-cnd9q\" (UID: \"4cc14de2-59e8-49e1-9eeb-f87c5e9d8a75\") " pod="openshift-controller-manager/controller-manager-6448dc88d8-cnd9q"
Mar 18 09:04:17.117168 master-0 kubenswrapper[28766]: I0318 09:04:17.115928 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt"
Mar 18 09:04:17.134978 master-0 kubenswrapper[28766]: I0318 09:04:17.134921 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Mar 18 09:04:17.174224 master-0 kubenswrapper[28766]: I0318 09:04:17.174185 28766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-operator-admission-webhook-69c6b55594-wkgdb"
Mar 18 09:04:17.178922 master-0 kubenswrapper[28766]: I0318 09:04:17.178475 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Mar 18 09:04:17.185286 master-0 kubenswrapper[28766]: I0318 09:04:17.185256 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Mar 18 09:04:17.188334 master-0 kubenswrapper[28766]: I0318 09:04:17.188291 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/04e23989-853e-4b49-ba0f-1961d64ae3a3-client-ca\") pod \"route-controller-manager-75749f878-qxnvp\" (UID: \"04e23989-853e-4b49-ba0f-1961d64ae3a3\") " pod="openshift-route-controller-manager/route-controller-manager-75749f878-qxnvp"
Mar 18 09:04:17.191869 master-0 kubenswrapper[28766]: I0318 09:04:17.190317 28766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/prometheus-operator-admission-webhook-69c6b55594-wkgdb"
Mar 18 09:04:17.213376 master-0 kubenswrapper[28766]: I0318 09:04:17.213314 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Mar 18 09:04:17.216096 master-0 kubenswrapper[28766]: I0318 09:04:17.216010 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt"
Mar 18 09:04:17.217889 master-0 kubenswrapper[28766]: I0318 09:04:17.217825 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/04e23989-853e-4b49-ba0f-1961d64ae3a3-config\") pod \"route-controller-manager-75749f878-qxnvp\" (UID: \"04e23989-853e-4b49-ba0f-1961d64ae3a3\") " pod="openshift-route-controller-manager/route-controller-manager-75749f878-qxnvp"
Mar 18 09:04:17.234514 master-0 kubenswrapper[28766]: I0318 09:04:17.234472 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt"
Mar 18 09:04:17.242531 master-0 kubenswrapper[28766]: I0318 09:04:17.242484 28766 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" podUID=""
Mar 18 09:04:17.254356 master-0 kubenswrapper[28766]: I0318 09:04:17.254316 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt"
Mar 18 09:04:17.273425 master-0 kubenswrapper[28766]: I0318 09:04:17.273377 28766 request.go:700] Waited for 1.018575977s due to client-side throttling, not priority and fairness, request: GET:https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-ingress/secrets?fieldSelector=metadata.name%3Drouter-metrics-certs-default&limit=500&resourceVersion=0
Mar 18 09:04:17.275377 master-0 kubenswrapper[28766]: I0318 09:04:17.275329 28766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default"
Mar 18 09:04:17.281718 master-0 kubenswrapper[28766]: I0318 09:04:17.281677 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ad4cf9b2-4e66-4921-a30c-7b659bff06ab-metrics-certs\") pod \"router-default-7dcf5569b5-8sbgd\" (UID: \"ad4cf9b2-4e66-4921-a30c-7b659bff06ab\") " pod="openshift-ingress/router-default-7dcf5569b5-8sbgd"
Mar 18 09:04:17.294945 master-0 kubenswrapper[28766]: I0318 09:04:17.294773 28766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default"
Mar 18 09:04:17.302143 master-0 kubenswrapper[28766]: I0318 09:04:17.300188 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/ad4cf9b2-4e66-4921-a30c-7b659bff06ab-default-certificate\") pod \"router-default-7dcf5569b5-8sbgd\" (UID: \"ad4cf9b2-4e66-4921-a30c-7b659bff06ab\") " pod="openshift-ingress/router-default-7dcf5569b5-8sbgd"
Mar 18 09:04:17.318875 master-0 kubenswrapper[28766]: I0318 09:04:17.318822 28766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default"
Mar 18 09:04:17.329133 master-0 kubenswrapper[28766]: I0318 09:04:17.329088 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/ad4cf9b2-4e66-4921-a30c-7b659bff06ab-stats-auth\") pod \"router-default-7dcf5569b5-8sbgd\" (UID: \"ad4cf9b2-4e66-4921-a30c-7b659bff06ab\") " pod="openshift-ingress/router-default-7dcf5569b5-8sbgd"
Mar 18 09:04:17.335980 master-0 kubenswrapper[28766]: I0318 09:04:17.335936 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle"
Mar 18 09:04:17.340469 master-0 kubenswrapper[28766]: I0318 09:04:17.340421 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ad4cf9b2-4e66-4921-a30c-7b659bff06ab-service-ca-bundle\") pod \"router-default-7dcf5569b5-8sbgd\" (UID: \"ad4cf9b2-4e66-4921-a30c-7b659bff06ab\") " pod="openshift-ingress/router-default-7dcf5569b5-8sbgd"
Mar 18 09:04:17.345661 master-0 kubenswrapper[28766]: E0318 09:04:17.345617 28766 secret.go:189] Couldn't get secret openshift-machine-config-operator/proxy-tls: failed to sync secret cache: timed out waiting for the condition
Mar 18 09:04:17.345756 master-0 kubenswrapper[28766]: E0318 09:04:17.345717 28766 configmap.go:193] Couldn't get configMap openshift-machine-api/machine-api-operator-images: failed to sync configmap cache: timed out waiting for the condition
Mar 18 09:04:17.345791 master-0 kubenswrapper[28766]: E0318 09:04:17.345756 28766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a7dab805-612b-404c-ab97-8cee927169db-proxy-tls podName:a7dab805-612b-404c-ab97-8cee927169db nodeName:}" failed. No retries permitted until 2026-03-18 09:04:17.845728351 +0000 UTC m=+10.859987027 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/a7dab805-612b-404c-ab97-8cee927169db-proxy-tls") pod "machine-config-daemon-qsj46" (UID: "a7dab805-612b-404c-ab97-8cee927169db") : failed to sync secret cache: timed out waiting for the condition
Mar 18 09:04:17.346180 master-0 kubenswrapper[28766]: E0318 09:04:17.345830 28766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b9768e50-c883-47b0-b319-851fa53ac19a-images podName:b9768e50-c883-47b0-b319-851fa53ac19a nodeName:}" failed. No retries permitted until 2026-03-18 09:04:17.845807313 +0000 UTC m=+10.860065979 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/b9768e50-c883-47b0-b319-851fa53ac19a-images") pod "machine-api-operator-6fbb6cf6f9-z6nw9" (UID: "b9768e50-c883-47b0-b319-851fa53ac19a") : failed to sync configmap cache: timed out waiting for the condition
Mar 18 09:04:17.346180 master-0 kubenswrapper[28766]: E0318 09:04:17.345875 28766 configmap.go:193] Couldn't get configMap openshift-machine-config-operator/kube-rbac-proxy: failed to sync configmap cache: timed out waiting for the condition
Mar 18 09:04:17.346180 master-0 kubenswrapper[28766]: E0318 09:04:17.345900 28766 configmap.go:193] Couldn't get configMap openshift-machine-api/cluster-baremetal-operator-images: failed to sync configmap cache: timed out waiting for the condition
Mar 18 09:04:17.346180 master-0 kubenswrapper[28766]: E0318 09:04:17.345943 28766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/97730ec2-e6f1-4f8c-b85c-3c10623d06ce-images podName:97730ec2-e6f1-4f8c-b85c-3c10623d06ce nodeName:}" failed. No retries permitted until 2026-03-18 09:04:17.845932236 +0000 UTC m=+10.860191142 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/97730ec2-e6f1-4f8c-b85c-3c10623d06ce-images") pod "cluster-baremetal-operator-6f69995874-cf6qn" (UID: "97730ec2-e6f1-4f8c-b85c-3c10623d06ce") : failed to sync configmap cache: timed out waiting for the condition
Mar 18 09:04:17.346180 master-0 kubenswrapper[28766]: E0318 09:04:17.345964 28766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/40f3b7a4-107c-4f1d-a3ab-b5d2309c373b-auth-proxy-config podName:40f3b7a4-107c-4f1d-a3ab-b5d2309c373b nodeName:}" failed. No retries permitted until 2026-03-18 09:04:17.845953417 +0000 UTC m=+10.860212343 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "auth-proxy-config" (UniqueName: "kubernetes.io/configmap/40f3b7a4-107c-4f1d-a3ab-b5d2309c373b-auth-proxy-config") pod "machine-config-operator-84d549f6d5-4hj54" (UID: "40f3b7a4-107c-4f1d-a3ab-b5d2309c373b") : failed to sync configmap cache: timed out waiting for the condition
Mar 18 09:04:17.346984 master-0 kubenswrapper[28766]: E0318 09:04:17.346952 28766 secret.go:189] Couldn't get secret openshift-machine-config-operator/machine-config-server-tls: failed to sync secret cache: timed out waiting for the condition
Mar 18 09:04:17.347051 master-0 kubenswrapper[28766]: E0318 09:04:17.347026 28766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3e96b35f-c57a-4e01-82f7-894ea16ac5b8-certs podName:3e96b35f-c57a-4e01-82f7-894ea16ac5b8 nodeName:}" failed. No retries permitted until 2026-03-18 09:04:17.847006115 +0000 UTC m=+10.861264861 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "certs" (UniqueName: "kubernetes.io/secret/3e96b35f-c57a-4e01-82f7-894ea16ac5b8-certs") pod "machine-config-server-2jsz9" (UID: "3e96b35f-c57a-4e01-82f7-894ea16ac5b8") : failed to sync secret cache: timed out waiting for the condition
Mar 18 09:04:17.347611 master-0 kubenswrapper[28766]: E0318 09:04:17.347478 28766 configmap.go:193] Couldn't get configMap openshift-machine-config-operator/machine-config-operator-images: failed to sync configmap cache: timed out waiting for the condition
Mar 18 09:04:17.347611 master-0 kubenswrapper[28766]: E0318 09:04:17.347537 28766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/40f3b7a4-107c-4f1d-a3ab-b5d2309c373b-images podName:40f3b7a4-107c-4f1d-a3ab-b5d2309c373b nodeName:}" failed. No retries permitted until 2026-03-18 09:04:17.847525659 +0000 UTC m=+10.861784405 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/40f3b7a4-107c-4f1d-a3ab-b5d2309c373b-images") pod "machine-config-operator-84d549f6d5-4hj54" (UID: "40f3b7a4-107c-4f1d-a3ab-b5d2309c373b") : failed to sync configmap cache: timed out waiting for the condition
Mar 18 09:04:17.347611 master-0 kubenswrapper[28766]: E0318 09:04:17.347557 28766 secret.go:189] Couldn't get secret openshift-monitoring/kube-state-metrics-kube-rbac-proxy-config: failed to sync secret cache: timed out waiting for the condition
Mar 18 09:04:17.347611 master-0 kubenswrapper[28766]: E0318 09:04:17.347585 28766 configmap.go:193] Couldn't get configMap openshift-insights/trusted-ca-bundle: failed to sync configmap cache: timed out waiting for the condition
Mar 18 09:04:17.347750 master-0 kubenswrapper[28766]: E0318 09:04:17.347631 28766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/91a6fa86-8c58-43bc-a2d4-2b20901269f7-kube-state-metrics-kube-rbac-proxy-config podName:91a6fa86-8c58-43bc-a2d4-2b20901269f7 nodeName:}" failed. No retries permitted until 2026-03-18 09:04:17.847599871 +0000 UTC m=+10.861858537 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-state-metrics-kube-rbac-proxy-config" (UniqueName: "kubernetes.io/secret/91a6fa86-8c58-43bc-a2d4-2b20901269f7-kube-state-metrics-kube-rbac-proxy-config") pod "kube-state-metrics-7bbc969446-dblgh" (UID: "91a6fa86-8c58-43bc-a2d4-2b20901269f7") : failed to sync secret cache: timed out waiting for the condition
Mar 18 09:04:17.347750 master-0 kubenswrapper[28766]: E0318 09:04:17.347653 28766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/31a92270-efed-44fe-871e-90333235e85f-trusted-ca-bundle podName:31a92270-efed-44fe-871e-90333235e85f nodeName:}" failed. No retries permitted until 2026-03-18 09:04:17.847645472 +0000 UTC m=+10.861904138 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/31a92270-efed-44fe-871e-90333235e85f-trusted-ca-bundle") pod "insights-operator-68bf6ff9d6-kv7n5" (UID: "31a92270-efed-44fe-871e-90333235e85f") : failed to sync configmap cache: timed out waiting for the condition
Mar 18 09:04:17.349446 master-0 kubenswrapper[28766]: E0318 09:04:17.349324 28766 configmap.go:193] Couldn't get configMap openshift-cloud-credential-operator/cco-trusted-ca: failed to sync configmap cache: timed out waiting for the condition
Mar 18 09:04:17.349446 master-0 kubenswrapper[28766]: E0318 09:04:17.349345 28766 configmap.go:193] Couldn't get configMap openshift-machine-api/baremetal-kube-rbac-proxy: failed to sync configmap cache: timed out waiting for the condition
Mar 18 09:04:17.349446 master-0 kubenswrapper[28766]: E0318 09:04:17.349396 28766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e64ea71a-1e89-409a-9607-4d3cea093643-cco-trusted-ca podName:e64ea71a-1e89-409a-9607-4d3cea093643 nodeName:}" failed. No retries permitted until 2026-03-18 09:04:17.849375428 +0000 UTC m=+10.863634094 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cco-trusted-ca" (UniqueName: "kubernetes.io/configmap/e64ea71a-1e89-409a-9607-4d3cea093643-cco-trusted-ca") pod "cloud-credential-operator-744f9dbf77-v8ft8" (UID: "e64ea71a-1e89-409a-9607-4d3cea093643") : failed to sync configmap cache: timed out waiting for the condition
Mar 18 09:04:17.349820 master-0 kubenswrapper[28766]: E0318 09:04:17.349467 28766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/97730ec2-e6f1-4f8c-b85c-3c10623d06ce-config podName:97730ec2-e6f1-4f8c-b85c-3c10623d06ce nodeName:}" failed. No retries permitted until 2026-03-18 09:04:17.84944102 +0000 UTC m=+10.863699696 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/97730ec2-e6f1-4f8c-b85c-3c10623d06ce-config") pod "cluster-baremetal-operator-6f69995874-cf6qn" (UID: "97730ec2-e6f1-4f8c-b85c-3c10623d06ce") : failed to sync configmap cache: timed out waiting for the condition
Mar 18 09:04:17.349820 master-0 kubenswrapper[28766]: E0318 09:04:17.349340 28766 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-webhook-server-cert: failed to sync secret cache: timed out waiting for the condition
Mar 18 09:04:17.349820 master-0 kubenswrapper[28766]: E0318 09:04:17.349513 28766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/97730ec2-e6f1-4f8c-b85c-3c10623d06ce-cert podName:97730ec2-e6f1-4f8c-b85c-3c10623d06ce nodeName:}" failed. No retries permitted until 2026-03-18 09:04:17.849504982 +0000 UTC m=+10.863763658 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/97730ec2-e6f1-4f8c-b85c-3c10623d06ce-cert") pod "cluster-baremetal-operator-6f69995874-cf6qn" (UID: "97730ec2-e6f1-4f8c-b85c-3c10623d06ce") : failed to sync secret cache: timed out waiting for the condition
Mar 18 09:04:17.351804 master-0 kubenswrapper[28766]: E0318 09:04:17.350473 28766 secret.go:189] Couldn't get secret openshift-machine-api/cluster-autoscaler-operator-cert: failed to sync secret cache: timed out waiting for the condition
Mar 18 09:04:17.351804 master-0 kubenswrapper[28766]: E0318 09:04:17.350526 28766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ffc5379c-651f-490c-90f4-1285b9093596-cert podName:ffc5379c-651f-490c-90f4-1285b9093596 nodeName:}" failed. No retries permitted until 2026-03-18 09:04:17.850515699 +0000 UTC m=+10.864774365 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ffc5379c-651f-490c-90f4-1285b9093596-cert") pod "cluster-autoscaler-operator-866dc4744-lxj7x" (UID: "ffc5379c-651f-490c-90f4-1285b9093596") : failed to sync secret cache: timed out waiting for the condition
Mar 18 09:04:17.351804 master-0 kubenswrapper[28766]: E0318 09:04:17.350530 28766 secret.go:189] Couldn't get secret openshift-cluster-machine-approver/machine-approver-tls: failed to sync secret cache: timed out waiting for the condition
Mar 18 09:04:17.351804 master-0 kubenswrapper[28766]: E0318 09:04:17.350526 28766 configmap.go:193] Couldn't get configMap openshift-cluster-machine-approver/kube-rbac-proxy: failed to sync configmap cache: timed out waiting for the condition
Mar 18 09:04:17.351804 master-0 kubenswrapper[28766]: E0318 09:04:17.350587 28766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/495e0cff-fca8-4dad-9247-2fc0e7ce86fc-machine-approver-tls podName:495e0cff-fca8-4dad-9247-2fc0e7ce86fc nodeName:}" failed. No retries permitted until 2026-03-18 09:04:17.85057044 +0000 UTC m=+10.864829206 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "machine-approver-tls" (UniqueName: "kubernetes.io/secret/495e0cff-fca8-4dad-9247-2fc0e7ce86fc-machine-approver-tls") pod "machine-approver-5c6485487f-87vpl" (UID: "495e0cff-fca8-4dad-9247-2fc0e7ce86fc") : failed to sync secret cache: timed out waiting for the condition
Mar 18 09:04:17.351804 master-0 kubenswrapper[28766]: E0318 09:04:17.350567 28766 configmap.go:193] Couldn't get configMap openshift-monitoring/metrics-client-ca: failed to sync configmap cache: timed out waiting for the condition
Mar 18 09:04:17.351804 master-0 kubenswrapper[28766]: E0318 09:04:17.350642 28766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/495e0cff-fca8-4dad-9247-2fc0e7ce86fc-auth-proxy-config podName:495e0cff-fca8-4dad-9247-2fc0e7ce86fc nodeName:}" failed. No retries permitted until 2026-03-18 09:04:17.850622721 +0000 UTC m=+10.864881397 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "auth-proxy-config" (UniqueName: "kubernetes.io/configmap/495e0cff-fca8-4dad-9247-2fc0e7ce86fc-auth-proxy-config") pod "machine-approver-5c6485487f-87vpl" (UID: "495e0cff-fca8-4dad-9247-2fc0e7ce86fc") : failed to sync configmap cache: timed out waiting for the condition
Mar 18 09:04:17.351804 master-0 kubenswrapper[28766]: E0318 09:04:17.350648 28766 configmap.go:193] Couldn't get configMap openshift-machine-config-operator/kube-rbac-proxy: failed to sync configmap cache: timed out waiting for the condition
Mar 18 09:04:17.351804 master-0 kubenswrapper[28766]: E0318 09:04:17.350671 28766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/91a6fa86-8c58-43bc-a2d4-2b20901269f7-metrics-client-ca podName:91a6fa86-8c58-43bc-a2d4-2b20901269f7 nodeName:}" failed. No retries permitted until 2026-03-18 09:04:17.850661863 +0000 UTC m=+10.864920539 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-client-ca" (UniqueName: "kubernetes.io/configmap/91a6fa86-8c58-43bc-a2d4-2b20901269f7-metrics-client-ca") pod "kube-state-metrics-7bbc969446-dblgh" (UID: "91a6fa86-8c58-43bc-a2d4-2b20901269f7") : failed to sync configmap cache: timed out waiting for the condition
Mar 18 09:04:17.351804 master-0 kubenswrapper[28766]: E0318 09:04:17.350800 28766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/336e741d-ac9a-4b94-9fbb-c9010e37c2d0-mcc-auth-proxy-config podName:336e741d-ac9a-4b94-9fbb-c9010e37c2d0 nodeName:}" failed. No retries permitted until 2026-03-18 09:04:17.850773185 +0000 UTC m=+10.865031861 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "mcc-auth-proxy-config" (UniqueName: "kubernetes.io/configmap/336e741d-ac9a-4b94-9fbb-c9010e37c2d0-mcc-auth-proxy-config") pod "machine-config-controller-b4f87c5b9-nm47n" (UID: "336e741d-ac9a-4b94-9fbb-c9010e37c2d0") : failed to sync configmap cache: timed out waiting for the condition
Mar 18 09:04:17.351804 master-0 kubenswrapper[28766]: E0318 09:04:17.351212 28766 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-operator-tls: failed to sync secret cache: timed out waiting for the condition
Mar 18 09:04:17.351804 master-0 kubenswrapper[28766]: E0318 09:04:17.351261 28766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d71aa1b9-6eb5-4331-b959-8930e10817b4-prometheus-operator-tls podName:d71aa1b9-6eb5-4331-b959-8930e10817b4 nodeName:}" failed. No retries permitted until 2026-03-18 09:04:17.851250358 +0000 UTC m=+10.865509024 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "prometheus-operator-tls" (UniqueName: "kubernetes.io/secret/d71aa1b9-6eb5-4331-b959-8930e10817b4-prometheus-operator-tls") pod "prometheus-operator-6c8df6d4b-8kgdq" (UID: "d71aa1b9-6eb5-4331-b959-8930e10817b4") : failed to sync secret cache: timed out waiting for the condition
Mar 18 09:04:17.351804 master-0 kubenswrapper[28766]: E0318 09:04:17.351259 28766 configmap.go:193] Couldn't get configMap openshift-machine-config-operator/kube-rbac-proxy: failed to sync configmap cache: timed out waiting for the condition
Mar 18 09:04:17.351804 master-0 kubenswrapper[28766]: E0318 09:04:17.351281 28766 secret.go:189] Couldn't get secret openshift-cluster-storage-operator/cluster-storage-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition
Mar 18 09:04:17.351804 master-0 kubenswrapper[28766]: E0318 09:04:17.351311 28766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a7dab805-612b-404c-ab97-8cee927169db-mcd-auth-proxy-config podName:a7dab805-612b-404c-ab97-8cee927169db nodeName:}" failed. No retries permitted until 2026-03-18 09:04:17.85129954 +0000 UTC m=+10.865558216 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "mcd-auth-proxy-config" (UniqueName: "kubernetes.io/configmap/a7dab805-612b-404c-ab97-8cee927169db-mcd-auth-proxy-config") pod "machine-config-daemon-qsj46" (UID: "a7dab805-612b-404c-ab97-8cee927169db") : failed to sync configmap cache: timed out waiting for the condition
Mar 18 09:04:17.351804 master-0 kubenswrapper[28766]: E0318 09:04:17.351343 28766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fc5a9875-d97e-4371-a15d-a1f43b85abce-cluster-storage-operator-serving-cert podName:fc5a9875-d97e-4371-a15d-a1f43b85abce nodeName:}" failed. No retries permitted until 2026-03-18 09:04:17.85132445 +0000 UTC m=+10.865583126 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cluster-storage-operator-serving-cert" (UniqueName: "kubernetes.io/secret/fc5a9875-d97e-4371-a15d-a1f43b85abce-cluster-storage-operator-serving-cert") pod "cluster-storage-operator-7d87854d6-srhr6" (UID: "fc5a9875-d97e-4371-a15d-a1f43b85abce") : failed to sync secret cache: timed out waiting for the condition
Mar 18 09:04:17.352772 master-0 kubenswrapper[28766]: E0318 09:04:17.352580 28766 configmap.go:193] Couldn't get configMap openshift-dns/dns-default: failed to sync configmap cache: timed out waiting for the condition
Mar 18 09:04:17.352772 master-0 kubenswrapper[28766]: E0318 09:04:17.352638 28766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b35ab145-16a7-4ef1-86e8-0afb6ff469fd-config-volume podName:b35ab145-16a7-4ef1-86e8-0afb6ff469fd nodeName:}" failed. No retries permitted until 2026-03-18 09:04:17.852625515 +0000 UTC m=+10.866884191 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/b35ab145-16a7-4ef1-86e8-0afb6ff469fd-config-volume") pod "dns-default-ck7b5" (UID: "b35ab145-16a7-4ef1-86e8-0afb6ff469fd") : failed to sync configmap cache: timed out waiting for the condition
Mar 18 09:04:17.353346 master-0 kubenswrapper[28766]: E0318 09:04:17.353304 28766 secret.go:189] Couldn't get secret openshift-cloud-controller-manager-operator/cloud-controller-manager-operator-tls: failed to sync secret cache: timed out waiting for the condition
Mar 18 09:04:17.353473 master-0 kubenswrapper[28766]: E0318 09:04:17.353445 28766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ccf74af5-d4fd-4ed3-9784-42397ea798c5-cloud-controller-manager-operator-tls podName:ccf74af5-d4fd-4ed3-9784-42397ea798c5 nodeName:}" failed. No retries permitted until 2026-03-18 09:04:17.853355234 +0000 UTC m=+10.867614160 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cloud-controller-manager-operator-tls" (UniqueName: "kubernetes.io/secret/ccf74af5-d4fd-4ed3-9784-42397ea798c5-cloud-controller-manager-operator-tls") pod "cluster-cloud-controller-manager-operator-7dff898856-9xtls" (UID: "ccf74af5-d4fd-4ed3-9784-42397ea798c5") : failed to sync secret cache: timed out waiting for the condition
Mar 18 09:04:17.356029 master-0 kubenswrapper[28766]: E0318 09:04:17.355981 28766 configmap.go:193] Couldn't get configMap openshift-cloud-controller-manager-operator/kube-rbac-proxy: failed to sync configmap cache: timed out waiting for the condition
Mar 18 09:04:17.356029 master-0 kubenswrapper[28766]: E0318 09:04:17.356013 28766 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: failed to sync secret cache: timed out waiting for the condition
Mar 18 09:04:17.356198 master-0 kubenswrapper[28766]: E0318 09:04:17.356041 28766 configmap.go:193] Couldn't get configMap openshift-monitoring/kube-state-metrics-custom-resource-state-configmap: failed to sync configmap cache: timed out waiting for the condition
Mar 18 09:04:17.356198 master-0 kubenswrapper[28766]: E0318 09:04:17.356066 28766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ccf74af5-d4fd-4ed3-9784-42397ea798c5-auth-proxy-config podName:ccf74af5-d4fd-4ed3-9784-42397ea798c5 nodeName:}" failed. No retries permitted until 2026-03-18 09:04:17.856049276 +0000 UTC m=+10.870307942 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "auth-proxy-config" (UniqueName: "kubernetes.io/configmap/ccf74af5-d4fd-4ed3-9784-42397ea798c5-auth-proxy-config") pod "cluster-cloud-controller-manager-operator-7dff898856-9xtls" (UID: "ccf74af5-d4fd-4ed3-9784-42397ea798c5") : failed to sync configmap cache: timed out waiting for the condition
Mar 18 09:04:17.356198 master-0 kubenswrapper[28766]: E0318 09:04:17.356079 28766 secret.go:189] Couldn't get secret openshift-machine-config-operator/mco-proxy-tls: failed to sync secret cache: timed out waiting for the condition
Mar 18 09:04:17.356198 master-0 kubenswrapper[28766]: E0318 09:04:17.356090 28766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1794b726-5c0d-4a72-8ddd-418a2cbd8ded-webhook-cert podName:1794b726-5c0d-4a72-8ddd-418a2cbd8ded nodeName:}" failed. No retries permitted until 2026-03-18 09:04:17.856078417 +0000 UTC m=+10.870337083 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/1794b726-5c0d-4a72-8ddd-418a2cbd8ded-webhook-cert") pod "packageserver-5f48d895dc-ttr9f" (UID: "1794b726-5c0d-4a72-8ddd-418a2cbd8ded") : failed to sync secret cache: timed out waiting for the condition
Mar 18 09:04:17.356198 master-0 kubenswrapper[28766]: E0318 09:04:17.356105 28766 configmap.go:193] Couldn't get configMap openshift-monitoring/metrics-client-ca: failed to sync configmap cache: timed out waiting for the condition
Mar 18 09:04:17.356198 master-0 kubenswrapper[28766]: E0318 09:04:17.356126 28766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/91a6fa86-8c58-43bc-a2d4-2b20901269f7-kube-state-metrics-custom-resource-state-configmap podName:91a6fa86-8c58-43bc-a2d4-2b20901269f7 nodeName:}" failed. No retries permitted until 2026-03-18 09:04:17.856107778 +0000 UTC m=+10.870366454 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-state-metrics-custom-resource-state-configmap" (UniqueName: "kubernetes.io/configmap/91a6fa86-8c58-43bc-a2d4-2b20901269f7-kube-state-metrics-custom-resource-state-configmap") pod "kube-state-metrics-7bbc969446-dblgh" (UID: "91a6fa86-8c58-43bc-a2d4-2b20901269f7") : failed to sync configmap cache: timed out waiting for the condition
Mar 18 09:04:17.356198 master-0 kubenswrapper[28766]: E0318 09:04:17.356138 28766 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-operator-tls: failed to sync secret cache: timed out waiting for the condition
Mar 18 09:04:17.356198 master-0 kubenswrapper[28766]: E0318 09:04:17.356154 28766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/d71aa1b9-6eb5-4331-b959-8930e10817b4-metrics-client-ca podName:d71aa1b9-6eb5-4331-b959-8930e10817b4 nodeName:}" failed. No retries permitted until 2026-03-18 09:04:17.856138269 +0000 UTC m=+10.870397175 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-client-ca" (UniqueName: "kubernetes.io/configmap/d71aa1b9-6eb5-4331-b959-8930e10817b4-metrics-client-ca") pod "prometheus-operator-6c8df6d4b-8kgdq" (UID: "d71aa1b9-6eb5-4331-b959-8930e10817b4") : failed to sync configmap cache: timed out waiting for the condition
Mar 18 09:04:17.356198 master-0 kubenswrapper[28766]: E0318 09:04:17.356186 28766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/40f3b7a4-107c-4f1d-a3ab-b5d2309c373b-proxy-tls podName:40f3b7a4-107c-4f1d-a3ab-b5d2309c373b nodeName:}" failed. No retries permitted until 2026-03-18 09:04:17.856170389 +0000 UTC m=+10.870429195 (durationBeforeRetry 500ms).
Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/40f3b7a4-107c-4f1d-a3ab-b5d2309c373b-proxy-tls") pod "machine-config-operator-84d549f6d5-4hj54" (UID: "40f3b7a4-107c-4f1d-a3ab-b5d2309c373b") : failed to sync secret cache: timed out waiting for the condition Mar 18 09:04:17.356198 master-0 kubenswrapper[28766]: E0318 09:04:17.356209 28766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/97730ec2-e6f1-4f8c-b85c-3c10623d06ce-cluster-baremetal-operator-tls podName:97730ec2-e6f1-4f8c-b85c-3c10623d06ce nodeName:}" failed. No retries permitted until 2026-03-18 09:04:17.85619987 +0000 UTC m=+10.870458786 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cluster-baremetal-operator-tls" (UniqueName: "kubernetes.io/secret/97730ec2-e6f1-4f8c-b85c-3c10623d06ce-cluster-baremetal-operator-tls") pod "cluster-baremetal-operator-6f69995874-cf6qn" (UID: "97730ec2-e6f1-4f8c-b85c-3c10623d06ce") : failed to sync secret cache: timed out waiting for the condition Mar 18 09:04:17.356962 master-0 kubenswrapper[28766]: I0318 09:04:17.356387 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Mar 18 09:04:17.358984 master-0 kubenswrapper[28766]: E0318 09:04:17.358942 28766 secret.go:189] Couldn't get secret openshift-cloud-credential-operator/cloud-credential-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition Mar 18 09:04:17.359089 master-0 kubenswrapper[28766]: E0318 09:04:17.358994 28766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e64ea71a-1e89-409a-9607-4d3cea093643-cloud-credential-operator-serving-cert podName:e64ea71a-1e89-409a-9607-4d3cea093643 nodeName:}" failed. No retries permitted until 2026-03-18 09:04:17.858985575 +0000 UTC m=+10.873244471 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cloud-credential-operator-serving-cert" (UniqueName: "kubernetes.io/secret/e64ea71a-1e89-409a-9607-4d3cea093643-cloud-credential-operator-serving-cert") pod "cloud-credential-operator-744f9dbf77-v8ft8" (UID: "e64ea71a-1e89-409a-9607-4d3cea093643") : failed to sync secret cache: timed out waiting for the condition Mar 18 09:04:17.359089 master-0 kubenswrapper[28766]: E0318 09:04:17.359047 28766 secret.go:189] Couldn't get secret openshift-monitoring/kube-state-metrics-tls: failed to sync secret cache: timed out waiting for the condition Mar 18 09:04:17.359089 master-0 kubenswrapper[28766]: E0318 09:04:17.359064 28766 secret.go:189] Couldn't get secret openshift-dns/dns-default-metrics-tls: failed to sync secret cache: timed out waiting for the condition Mar 18 09:04:17.359089 master-0 kubenswrapper[28766]: E0318 09:04:17.359082 28766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/91a6fa86-8c58-43bc-a2d4-2b20901269f7-kube-state-metrics-tls podName:91a6fa86-8c58-43bc-a2d4-2b20901269f7 nodeName:}" failed. No retries permitted until 2026-03-18 09:04:17.859071597 +0000 UTC m=+10.873330393 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-state-metrics-tls" (UniqueName: "kubernetes.io/secret/91a6fa86-8c58-43bc-a2d4-2b20901269f7-kube-state-metrics-tls") pod "kube-state-metrics-7bbc969446-dblgh" (UID: "91a6fa86-8c58-43bc-a2d4-2b20901269f7") : failed to sync secret cache: timed out waiting for the condition Mar 18 09:04:17.359347 master-0 kubenswrapper[28766]: E0318 09:04:17.359110 28766 secret.go:189] Couldn't get secret openshift-cluster-samples-operator/samples-operator-tls: failed to sync secret cache: timed out waiting for the condition Mar 18 09:04:17.359347 master-0 kubenswrapper[28766]: E0318 09:04:17.359111 28766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b35ab145-16a7-4ef1-86e8-0afb6ff469fd-metrics-tls podName:b35ab145-16a7-4ef1-86e8-0afb6ff469fd nodeName:}" failed. No retries permitted until 2026-03-18 09:04:17.859101218 +0000 UTC m=+10.873359884 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/b35ab145-16a7-4ef1-86e8-0afb6ff469fd-metrics-tls") pod "dns-default-ck7b5" (UID: "b35ab145-16a7-4ef1-86e8-0afb6ff469fd") : failed to sync secret cache: timed out waiting for the condition Mar 18 09:04:17.359347 master-0 kubenswrapper[28766]: E0318 09:04:17.359154 28766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/18921497-d8ed-42d8-bf3c-a027566ebe85-samples-operator-tls podName:18921497-d8ed-42d8-bf3c-a027566ebe85 nodeName:}" failed. No retries permitted until 2026-03-18 09:04:17.859148719 +0000 UTC m=+10.873407385 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "samples-operator-tls" (UniqueName: "kubernetes.io/secret/18921497-d8ed-42d8-bf3c-a027566ebe85-samples-operator-tls") pod "cluster-samples-operator-85f7577d78-swcvh" (UID: "18921497-d8ed-42d8-bf3c-a027566ebe85") : failed to sync secret cache: timed out waiting for the condition Mar 18 09:04:17.361489 master-0 kubenswrapper[28766]: E0318 09:04:17.361441 28766 configmap.go:193] Couldn't get configMap openshift-cluster-machine-approver/machine-approver-config: failed to sync configmap cache: timed out waiting for the condition Mar 18 09:04:17.361489 master-0 kubenswrapper[28766]: E0318 09:04:17.361462 28766 secret.go:189] Couldn't get secret openshift-machine-config-operator/node-bootstrapper-token: failed to sync secret cache: timed out waiting for the condition Mar 18 09:04:17.361489 master-0 kubenswrapper[28766]: E0318 09:04:17.361494 28766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/495e0cff-fca8-4dad-9247-2fc0e7ce86fc-config podName:495e0cff-fca8-4dad-9247-2fc0e7ce86fc nodeName:}" failed. No retries permitted until 2026-03-18 09:04:17.861483021 +0000 UTC m=+10.875741687 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/495e0cff-fca8-4dad-9247-2fc0e7ce86fc-config") pod "machine-approver-5c6485487f-87vpl" (UID: "495e0cff-fca8-4dad-9247-2fc0e7ce86fc") : failed to sync configmap cache: timed out waiting for the condition Mar 18 09:04:17.361707 master-0 kubenswrapper[28766]: E0318 09:04:17.361532 28766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3e96b35f-c57a-4e01-82f7-894ea16ac5b8-node-bootstrap-token podName:3e96b35f-c57a-4e01-82f7-894ea16ac5b8 nodeName:}" failed. No retries permitted until 2026-03-18 09:04:17.861512362 +0000 UTC m=+10.875771128 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "node-bootstrap-token" (UniqueName: "kubernetes.io/secret/3e96b35f-c57a-4e01-82f7-894ea16ac5b8-node-bootstrap-token") pod "machine-config-server-2jsz9" (UID: "3e96b35f-c57a-4e01-82f7-894ea16ac5b8") : failed to sync secret cache: timed out waiting for the condition Mar 18 09:04:17.362845 master-0 kubenswrapper[28766]: E0318 09:04:17.362795 28766 secret.go:189] Couldn't get secret openshift-insights/openshift-insights-serving-cert: failed to sync secret cache: timed out waiting for the condition Mar 18 09:04:17.362969 master-0 kubenswrapper[28766]: E0318 09:04:17.362882 28766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/31a92270-efed-44fe-871e-90333235e85f-serving-cert podName:31a92270-efed-44fe-871e-90333235e85f nodeName:}" failed. No retries permitted until 2026-03-18 09:04:17.862842578 +0000 UTC m=+10.877101244 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/31a92270-efed-44fe-871e-90333235e85f-serving-cert") pod "insights-operator-68bf6ff9d6-kv7n5" (UID: "31a92270-efed-44fe-871e-90333235e85f") : failed to sync secret cache: timed out waiting for the condition Mar 18 09:04:17.362969 master-0 kubenswrapper[28766]: E0318 09:04:17.362925 28766 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-operator-kube-rbac-proxy-config: failed to sync secret cache: timed out waiting for the condition Mar 18 09:04:17.362969 master-0 kubenswrapper[28766]: E0318 09:04:17.362949 28766 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: failed to sync secret cache: timed out waiting for the condition Mar 18 09:04:17.363180 master-0 kubenswrapper[28766]: E0318 09:04:17.362979 28766 secret.go:189] Couldn't get secret openshift-machine-config-operator/mcc-proxy-tls: failed to sync secret cache: timed out waiting for the condition Mar 18 09:04:17.363180 master-0 
kubenswrapper[28766]: E0318 09:04:17.362996 28766 configmap.go:193] Couldn't get configMap openshift-machine-api/kube-rbac-proxy: failed to sync configmap cache: timed out waiting for the condition Mar 18 09:04:17.363180 master-0 kubenswrapper[28766]: E0318 09:04:17.362944 28766 configmap.go:193] Couldn't get configMap openshift-insights/service-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Mar 18 09:04:17.363180 master-0 kubenswrapper[28766]: E0318 09:04:17.363020 28766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1794b726-5c0d-4a72-8ddd-418a2cbd8ded-apiservice-cert podName:1794b726-5c0d-4a72-8ddd-418a2cbd8ded nodeName:}" failed. No retries permitted until 2026-03-18 09:04:17.863006092 +0000 UTC m=+10.877264768 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/1794b726-5c0d-4a72-8ddd-418a2cbd8ded-apiservice-cert") pod "packageserver-5f48d895dc-ttr9f" (UID: "1794b726-5c0d-4a72-8ddd-418a2cbd8ded") : failed to sync secret cache: timed out waiting for the condition Mar 18 09:04:17.363180 master-0 kubenswrapper[28766]: E0318 09:04:17.363047 28766 configmap.go:193] Couldn't get configMap openshift-cloud-controller-manager-operator/cloud-controller-manager-images: failed to sync configmap cache: timed out waiting for the condition Mar 18 09:04:17.363180 master-0 kubenswrapper[28766]: E0318 09:04:17.363054 28766 secret.go:189] Couldn't get secret openshift-machine-api/machine-api-operator-tls: failed to sync secret cache: timed out waiting for the condition Mar 18 09:04:17.363180 master-0 kubenswrapper[28766]: E0318 09:04:17.363059 28766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d71aa1b9-6eb5-4331-b959-8930e10817b4-prometheus-operator-kube-rbac-proxy-config podName:d71aa1b9-6eb5-4331-b959-8930e10817b4 nodeName:}" failed. 
No retries permitted until 2026-03-18 09:04:17.863045933 +0000 UTC m=+10.877304609 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "prometheus-operator-kube-rbac-proxy-config" (UniqueName: "kubernetes.io/secret/d71aa1b9-6eb5-4331-b959-8930e10817b4-prometheus-operator-kube-rbac-proxy-config") pod "prometheus-operator-6c8df6d4b-8kgdq" (UID: "d71aa1b9-6eb5-4331-b959-8930e10817b4") : failed to sync secret cache: timed out waiting for the condition Mar 18 09:04:17.363180 master-0 kubenswrapper[28766]: E0318 09:04:17.363092 28766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/336e741d-ac9a-4b94-9fbb-c9010e37c2d0-proxy-tls podName:336e741d-ac9a-4b94-9fbb-c9010e37c2d0 nodeName:}" failed. No retries permitted until 2026-03-18 09:04:17.863080524 +0000 UTC m=+10.877339200 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/336e741d-ac9a-4b94-9fbb-c9010e37c2d0-proxy-tls") pod "machine-config-controller-b4f87c5b9-nm47n" (UID: "336e741d-ac9a-4b94-9fbb-c9010e37c2d0") : failed to sync secret cache: timed out waiting for the condition Mar 18 09:04:17.363180 master-0 kubenswrapper[28766]: E0318 09:04:17.363102 28766 configmap.go:193] Couldn't get configMap openshift-machine-api/kube-rbac-proxy-cluster-autoscaler-operator: failed to sync configmap cache: timed out waiting for the condition Mar 18 09:04:17.363180 master-0 kubenswrapper[28766]: E0318 09:04:17.363117 28766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b9768e50-c883-47b0-b319-851fa53ac19a-config podName:b9768e50-c883-47b0-b319-851fa53ac19a nodeName:}" failed. No retries permitted until 2026-03-18 09:04:17.863105455 +0000 UTC m=+10.877364141 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/b9768e50-c883-47b0-b319-851fa53ac19a-config") pod "machine-api-operator-6fbb6cf6f9-z6nw9" (UID: "b9768e50-c883-47b0-b319-851fa53ac19a") : failed to sync configmap cache: timed out waiting for the condition Mar 18 09:04:17.363736 master-0 kubenswrapper[28766]: E0318 09:04:17.363230 28766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/31a92270-efed-44fe-871e-90333235e85f-service-ca-bundle podName:31a92270-efed-44fe-871e-90333235e85f nodeName:}" failed. No retries permitted until 2026-03-18 09:04:17.863207617 +0000 UTC m=+10.877466483 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "service-ca-bundle" (UniqueName: "kubernetes.io/configmap/31a92270-efed-44fe-871e-90333235e85f-service-ca-bundle") pod "insights-operator-68bf6ff9d6-kv7n5" (UID: "31a92270-efed-44fe-871e-90333235e85f") : failed to sync configmap cache: timed out waiting for the condition Mar 18 09:04:17.363736 master-0 kubenswrapper[28766]: E0318 09:04:17.363256 28766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ccf74af5-d4fd-4ed3-9784-42397ea798c5-images podName:ccf74af5-d4fd-4ed3-9784-42397ea798c5 nodeName:}" failed. No retries permitted until 2026-03-18 09:04:17.863243418 +0000 UTC m=+10.877502394 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/ccf74af5-d4fd-4ed3-9784-42397ea798c5-images") pod "cluster-cloud-controller-manager-operator-7dff898856-9xtls" (UID: "ccf74af5-d4fd-4ed3-9784-42397ea798c5") : failed to sync configmap cache: timed out waiting for the condition Mar 18 09:04:17.363736 master-0 kubenswrapper[28766]: E0318 09:04:17.363304 28766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b9768e50-c883-47b0-b319-851fa53ac19a-machine-api-operator-tls podName:b9768e50-c883-47b0-b319-851fa53ac19a nodeName:}" failed. 
No retries permitted until 2026-03-18 09:04:17.86329188 +0000 UTC m=+10.877550816 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "machine-api-operator-tls" (UniqueName: "kubernetes.io/secret/b9768e50-c883-47b0-b319-851fa53ac19a-machine-api-operator-tls") pod "machine-api-operator-6fbb6cf6f9-z6nw9" (UID: "b9768e50-c883-47b0-b319-851fa53ac19a") : failed to sync secret cache: timed out waiting for the condition Mar 18 09:04:17.363736 master-0 kubenswrapper[28766]: E0318 09:04:17.363325 28766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ffc5379c-651f-490c-90f4-1285b9093596-auth-proxy-config podName:ffc5379c-651f-490c-90f4-1285b9093596 nodeName:}" failed. No retries permitted until 2026-03-18 09:04:17.8633158 +0000 UTC m=+10.877574776 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "auth-proxy-config" (UniqueName: "kubernetes.io/configmap/ffc5379c-651f-490c-90f4-1285b9093596-auth-proxy-config") pod "cluster-autoscaler-operator-866dc4744-lxj7x" (UID: "ffc5379c-651f-490c-90f4-1285b9093596") : failed to sync configmap cache: timed out waiting for the condition Mar 18 09:04:17.376230 master-0 kubenswrapper[28766]: I0318 09:04:17.376174 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Mar 18 09:04:17.393878 master-0 kubenswrapper[28766]: I0318 09:04:17.393801 28766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-dtxm4" Mar 18 09:04:17.417024 master-0 kubenswrapper[28766]: I0318 09:04:17.416958 28766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Mar 18 09:04:17.434635 master-0 kubenswrapper[28766]: I0318 09:04:17.434569 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" 
Mar 18 09:04:17.457544 master-0 kubenswrapper[28766]: I0318 09:04:17.457160 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"openshift-service-ca.crt" Mar 18 09:04:17.467706 master-0 kubenswrapper[28766]: E0318 09:04:17.467589 28766 secret.go:189] Couldn't get secret openshift-monitoring/telemeter-client-kube-rbac-proxy-config: failed to sync secret cache: timed out waiting for the condition Mar 18 09:04:17.467706 master-0 kubenswrapper[28766]: E0318 09:04:17.467649 28766 secret.go:189] Couldn't get secret openshift-ingress-canary/canary-serving-cert: failed to sync secret cache: timed out waiting for the condition Mar 18 09:04:17.467932 master-0 kubenswrapper[28766]: E0318 09:04:17.467773 28766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d0272f7c-bedc-44cf-9790-88e10e6dda03-cert podName:d0272f7c-bedc-44cf-9790-88e10e6dda03 nodeName:}" failed. No retries permitted until 2026-03-18 09:04:17.967740426 +0000 UTC m=+10.981999122 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/d0272f7c-bedc-44cf-9790-88e10e6dda03-cert") pod "ingress-canary-mpw9b" (UID: "d0272f7c-bedc-44cf-9790-88e10e6dda03") : failed to sync secret cache: timed out waiting for the condition Mar 18 09:04:17.467932 master-0 kubenswrapper[28766]: E0318 09:04:17.467802 28766 configmap.go:193] Couldn't get configMap openshift-monitoring/telemeter-trusted-ca-bundle-8i12ta5c71j38: failed to sync configmap cache: timed out waiting for the condition Mar 18 09:04:17.467932 master-0 kubenswrapper[28766]: E0318 09:04:17.467815 28766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e5ae1886-f90c-49f4-bf08-055b55dd785a-secret-telemeter-client-kube-rbac-proxy-config podName:e5ae1886-f90c-49f4-bf08-055b55dd785a nodeName:}" failed. 
No retries permitted until 2026-03-18 09:04:17.967795568 +0000 UTC m=+10.982054274 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "secret-telemeter-client-kube-rbac-proxy-config" (UniqueName: "kubernetes.io/secret/e5ae1886-f90c-49f4-bf08-055b55dd785a-secret-telemeter-client-kube-rbac-proxy-config") pod "telemeter-client-5d4d5995f-s5dw8" (UID: "e5ae1886-f90c-49f4-bf08-055b55dd785a") : failed to sync secret cache: timed out waiting for the condition Mar 18 09:04:17.467932 master-0 kubenswrapper[28766]: E0318 09:04:17.467894 28766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e5ae1886-f90c-49f4-bf08-055b55dd785a-telemeter-trusted-ca-bundle podName:e5ae1886-f90c-49f4-bf08-055b55dd785a nodeName:}" failed. No retries permitted until 2026-03-18 09:04:17.967839739 +0000 UTC m=+10.982098645 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "telemeter-trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/e5ae1886-f90c-49f4-bf08-055b55dd785a-telemeter-trusted-ca-bundle") pod "telemeter-client-5d4d5995f-s5dw8" (UID: "e5ae1886-f90c-49f4-bf08-055b55dd785a") : failed to sync configmap cache: timed out waiting for the condition Mar 18 09:04:17.468422 master-0 kubenswrapper[28766]: E0318 09:04:17.468361 28766 configmap.go:193] Couldn't get configMap openshift-monitoring/kubelet-serving-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Mar 18 09:04:17.468667 master-0 kubenswrapper[28766]: E0318 09:04:17.468645 28766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5320a1da-262a-4b1b-93b4-1df9d4c26eec-configmap-kubelet-serving-ca-bundle podName:5320a1da-262a-4b1b-93b4-1df9d4c26eec nodeName:}" failed. No retries permitted until 2026-03-18 09:04:17.96861602 +0000 UTC m=+10.982874866 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "configmap-kubelet-serving-ca-bundle" (UniqueName: "kubernetes.io/configmap/5320a1da-262a-4b1b-93b4-1df9d4c26eec-configmap-kubelet-serving-ca-bundle") pod "metrics-server-59f88c66c8-z4c2f" (UID: "5320a1da-262a-4b1b-93b4-1df9d4c26eec") : failed to sync configmap cache: timed out waiting for the condition Mar 18 09:04:17.470105 master-0 kubenswrapper[28766]: E0318 09:04:17.470077 28766 secret.go:189] Couldn't get secret openshift-monitoring/metrics-server-tls: failed to sync secret cache: timed out waiting for the condition Mar 18 09:04:17.470220 master-0 kubenswrapper[28766]: E0318 09:04:17.470144 28766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5320a1da-262a-4b1b-93b4-1df9d4c26eec-secret-metrics-server-tls podName:5320a1da-262a-4b1b-93b4-1df9d4c26eec nodeName:}" failed. No retries permitted until 2026-03-18 09:04:17.97013301 +0000 UTC m=+10.984391666 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "secret-metrics-server-tls" (UniqueName: "kubernetes.io/secret/5320a1da-262a-4b1b-93b4-1df9d4c26eec-secret-metrics-server-tls") pod "metrics-server-59f88c66c8-z4c2f" (UID: "5320a1da-262a-4b1b-93b4-1df9d4c26eec") : failed to sync secret cache: timed out waiting for the condition Mar 18 09:04:17.470220 master-0 kubenswrapper[28766]: E0318 09:04:17.470175 28766 secret.go:189] Couldn't get secret openshift-monitoring/telemeter-client: failed to sync secret cache: timed out waiting for the condition Mar 18 09:04:17.470220 master-0 kubenswrapper[28766]: E0318 09:04:17.470199 28766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e5ae1886-f90c-49f4-bf08-055b55dd785a-secret-telemeter-client podName:e5ae1886-f90c-49f4-bf08-055b55dd785a nodeName:}" failed. No retries permitted until 2026-03-18 09:04:17.970192982 +0000 UTC m=+10.984451648 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "secret-telemeter-client" (UniqueName: "kubernetes.io/secret/e5ae1886-f90c-49f4-bf08-055b55dd785a-secret-telemeter-client") pod "telemeter-client-5d4d5995f-s5dw8" (UID: "e5ae1886-f90c-49f4-bf08-055b55dd785a") : failed to sync secret cache: timed out waiting for the condition Mar 18 09:04:17.470538 master-0 kubenswrapper[28766]: E0318 09:04:17.470435 28766 secret.go:189] Couldn't get secret openshift-monitoring/metrics-server-as91djiheslg2: failed to sync secret cache: timed out waiting for the condition Mar 18 09:04:17.470538 master-0 kubenswrapper[28766]: E0318 09:04:17.470500 28766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5320a1da-262a-4b1b-93b4-1df9d4c26eec-client-ca-bundle podName:5320a1da-262a-4b1b-93b4-1df9d4c26eec nodeName:}" failed. No retries permitted until 2026-03-18 09:04:17.97048569 +0000 UTC m=+10.984744366 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "client-ca-bundle" (UniqueName: "kubernetes.io/secret/5320a1da-262a-4b1b-93b4-1df9d4c26eec-client-ca-bundle") pod "metrics-server-59f88c66c8-z4c2f" (UID: "5320a1da-262a-4b1b-93b4-1df9d4c26eec") : failed to sync secret cache: timed out waiting for the condition Mar 18 09:04:17.471700 master-0 kubenswrapper[28766]: E0318 09:04:17.471659 28766 configmap.go:193] Couldn't get configMap openshift-monitoring/telemeter-client-serving-certs-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Mar 18 09:04:17.471790 master-0 kubenswrapper[28766]: E0318 09:04:17.471706 28766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e5ae1886-f90c-49f4-bf08-055b55dd785a-serving-certs-ca-bundle podName:e5ae1886-f90c-49f4-bf08-055b55dd785a nodeName:}" failed. No retries permitted until 2026-03-18 09:04:17.971694642 +0000 UTC m=+10.985953308 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "serving-certs-ca-bundle" (UniqueName: "kubernetes.io/configmap/e5ae1886-f90c-49f4-bf08-055b55dd785a-serving-certs-ca-bundle") pod "telemeter-client-5d4d5995f-s5dw8" (UID: "e5ae1886-f90c-49f4-bf08-055b55dd785a") : failed to sync configmap cache: timed out waiting for the condition Mar 18 09:04:17.471790 master-0 kubenswrapper[28766]: E0318 09:04:17.471727 28766 secret.go:189] Couldn't get secret openshift-monitoring/federate-client-certs: failed to sync secret cache: timed out waiting for the condition Mar 18 09:04:17.471790 master-0 kubenswrapper[28766]: E0318 09:04:17.471751 28766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e5ae1886-f90c-49f4-bf08-055b55dd785a-federate-client-tls podName:e5ae1886-f90c-49f4-bf08-055b55dd785a nodeName:}" failed. No retries permitted until 2026-03-18 09:04:17.971745573 +0000 UTC m=+10.986004239 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "federate-client-tls" (UniqueName: "kubernetes.io/secret/e5ae1886-f90c-49f4-bf08-055b55dd785a-federate-client-tls") pod "telemeter-client-5d4d5995f-s5dw8" (UID: "e5ae1886-f90c-49f4-bf08-055b55dd785a") : failed to sync secret cache: timed out waiting for the condition Mar 18 09:04:17.471790 master-0 kubenswrapper[28766]: E0318 09:04:17.471775 28766 configmap.go:193] Couldn't get configMap openshift-monitoring/metrics-server-audit-profiles: failed to sync configmap cache: timed out waiting for the condition Mar 18 09:04:17.472097 master-0 kubenswrapper[28766]: E0318 09:04:17.471799 28766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5320a1da-262a-4b1b-93b4-1df9d4c26eec-metrics-server-audit-profiles podName:5320a1da-262a-4b1b-93b4-1df9d4c26eec nodeName:}" failed. No retries permitted until 2026-03-18 09:04:17.971790674 +0000 UTC m=+10.986049340 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-server-audit-profiles" (UniqueName: "kubernetes.io/configmap/5320a1da-262a-4b1b-93b4-1df9d4c26eec-metrics-server-audit-profiles") pod "metrics-server-59f88c66c8-z4c2f" (UID: "5320a1da-262a-4b1b-93b4-1df9d4c26eec") : failed to sync configmap cache: timed out waiting for the condition Mar 18 09:04:17.472097 master-0 kubenswrapper[28766]: E0318 09:04:17.471932 28766 configmap.go:193] Couldn't get configMap openshift-monitoring/metrics-client-ca: failed to sync configmap cache: timed out waiting for the condition Mar 18 09:04:17.472097 master-0 kubenswrapper[28766]: E0318 09:04:17.471968 28766 secret.go:189] Couldn't get secret openshift-monitoring/telemeter-client-tls: failed to sync secret cache: timed out waiting for the condition Mar 18 09:04:17.472097 master-0 kubenswrapper[28766]: E0318 09:04:17.472014 28766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/06cbd48a-1f1d-4734-8d57-e1b6824879b6-metrics-client-ca podName:06cbd48a-1f1d-4734-8d57-e1b6824879b6 nodeName:}" failed. No retries permitted until 2026-03-18 09:04:17.97199727 +0000 UTC m=+10.986255976 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-client-ca" (UniqueName: "kubernetes.io/configmap/06cbd48a-1f1d-4734-8d57-e1b6824879b6-metrics-client-ca") pod "openshift-state-metrics-5dc6c74576-dsq5f" (UID: "06cbd48a-1f1d-4734-8d57-e1b6824879b6") : failed to sync configmap cache: timed out waiting for the condition Mar 18 09:04:17.472097 master-0 kubenswrapper[28766]: E0318 09:04:17.472074 28766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e5ae1886-f90c-49f4-bf08-055b55dd785a-telemeter-client-tls podName:e5ae1886-f90c-49f4-bf08-055b55dd785a nodeName:}" failed. No retries permitted until 2026-03-18 09:04:17.972051211 +0000 UTC m=+10.986309887 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "telemeter-client-tls" (UniqueName: "kubernetes.io/secret/e5ae1886-f90c-49f4-bf08-055b55dd785a-telemeter-client-tls") pod "telemeter-client-5d4d5995f-s5dw8" (UID: "e5ae1886-f90c-49f4-bf08-055b55dd785a") : failed to sync secret cache: timed out waiting for the condition Mar 18 09:04:17.472454 master-0 kubenswrapper[28766]: E0318 09:04:17.472366 28766 secret.go:189] Couldn't get secret openshift-monitoring/node-exporter-tls: failed to sync secret cache: timed out waiting for the condition Mar 18 09:04:17.472454 master-0 kubenswrapper[28766]: E0318 09:04:17.472430 28766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4146a62d-e37b-4295-90ca-b23f5e3d1112-node-exporter-tls podName:4146a62d-e37b-4295-90ca-b23f5e3d1112 nodeName:}" failed. No retries permitted until 2026-03-18 09:04:17.972411791 +0000 UTC m=+10.986670467 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "node-exporter-tls" (UniqueName: "kubernetes.io/secret/4146a62d-e37b-4295-90ca-b23f5e3d1112-node-exporter-tls") pod "node-exporter-75szk" (UID: "4146a62d-e37b-4295-90ca-b23f5e3d1112") : failed to sync secret cache: timed out waiting for the condition Mar 18 09:04:17.473902 master-0 kubenswrapper[28766]: E0318 09:04:17.473220 28766 secret.go:189] Couldn't get secret openshift-monitoring/openshift-state-metrics-kube-rbac-proxy-config: failed to sync secret cache: timed out waiting for the condition Mar 18 09:04:17.473902 master-0 kubenswrapper[28766]: E0318 09:04:17.473270 28766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/06cbd48a-1f1d-4734-8d57-e1b6824879b6-openshift-state-metrics-kube-rbac-proxy-config podName:06cbd48a-1f1d-4734-8d57-e1b6824879b6 nodeName:}" failed. No retries permitted until 2026-03-18 09:04:17.973258584 +0000 UTC m=+10.987517240 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "openshift-state-metrics-kube-rbac-proxy-config" (UniqueName: "kubernetes.io/secret/06cbd48a-1f1d-4734-8d57-e1b6824879b6-openshift-state-metrics-kube-rbac-proxy-config") pod "openshift-state-metrics-5dc6c74576-dsq5f" (UID: "06cbd48a-1f1d-4734-8d57-e1b6824879b6") : failed to sync secret cache: timed out waiting for the condition Mar 18 09:04:17.474587 master-0 kubenswrapper[28766]: E0318 09:04:17.474536 28766 secret.go:189] Couldn't get secret openshift-monitoring/node-exporter-kube-rbac-proxy-config: failed to sync secret cache: timed out waiting for the condition Mar 18 09:04:17.474680 master-0 kubenswrapper[28766]: E0318 09:04:17.474620 28766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4146a62d-e37b-4295-90ca-b23f5e3d1112-node-exporter-kube-rbac-proxy-config podName:4146a62d-e37b-4295-90ca-b23f5e3d1112 nodeName:}" failed. No retries permitted until 2026-03-18 09:04:17.974602159 +0000 UTC m=+10.988861045 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "node-exporter-kube-rbac-proxy-config" (UniqueName: "kubernetes.io/secret/4146a62d-e37b-4295-90ca-b23f5e3d1112-node-exporter-kube-rbac-proxy-config") pod "node-exporter-75szk" (UID: "4146a62d-e37b-4295-90ca-b23f5e3d1112") : failed to sync secret cache: timed out waiting for the condition Mar 18 09:04:17.474745 master-0 kubenswrapper[28766]: E0318 09:04:17.474706 28766 configmap.go:193] Couldn't get configMap openshift-monitoring/metrics-client-ca: failed to sync configmap cache: timed out waiting for the condition Mar 18 09:04:17.474795 master-0 kubenswrapper[28766]: E0318 09:04:17.474763 28766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e5ae1886-f90c-49f4-bf08-055b55dd785a-metrics-client-ca podName:e5ae1886-f90c-49f4-bf08-055b55dd785a nodeName:}" failed. No retries permitted until 2026-03-18 09:04:17.974748243 +0000 UTC m=+10.989007109 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-client-ca" (UniqueName: "kubernetes.io/configmap/e5ae1886-f90c-49f4-bf08-055b55dd785a-metrics-client-ca") pod "telemeter-client-5d4d5995f-s5dw8" (UID: "e5ae1886-f90c-49f4-bf08-055b55dd785a") : failed to sync configmap cache: timed out waiting for the condition Mar 18 09:04:17.474868 master-0 kubenswrapper[28766]: E0318 09:04:17.474816 28766 secret.go:189] Couldn't get secret openshift-monitoring/openshift-state-metrics-tls: failed to sync secret cache: timed out waiting for the condition Mar 18 09:04:17.474868 master-0 kubenswrapper[28766]: E0318 09:04:17.474812 28766 secret.go:189] Couldn't get secret openshift-monitoring/metrics-client-certs: failed to sync secret cache: timed out waiting for the condition Mar 18 09:04:17.474967 master-0 kubenswrapper[28766]: E0318 09:04:17.474894 28766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/06cbd48a-1f1d-4734-8d57-e1b6824879b6-openshift-state-metrics-tls podName:06cbd48a-1f1d-4734-8d57-e1b6824879b6 nodeName:}" failed. No retries permitted until 2026-03-18 09:04:17.974882537 +0000 UTC m=+10.989141203 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "openshift-state-metrics-tls" (UniqueName: "kubernetes.io/secret/06cbd48a-1f1d-4734-8d57-e1b6824879b6-openshift-state-metrics-tls") pod "openshift-state-metrics-5dc6c74576-dsq5f" (UID: "06cbd48a-1f1d-4734-8d57-e1b6824879b6") : failed to sync secret cache: timed out waiting for the condition Mar 18 09:04:17.474967 master-0 kubenswrapper[28766]: E0318 09:04:17.474818 28766 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: failed to sync secret cache: timed out waiting for the condition Mar 18 09:04:17.474967 master-0 kubenswrapper[28766]: E0318 09:04:17.474914 28766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5320a1da-262a-4b1b-93b4-1df9d4c26eec-secret-metrics-client-certs podName:5320a1da-262a-4b1b-93b4-1df9d4c26eec nodeName:}" failed. No retries permitted until 2026-03-18 09:04:17.974906047 +0000 UTC m=+10.989164713 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "secret-metrics-client-certs" (UniqueName: "kubernetes.io/secret/5320a1da-262a-4b1b-93b4-1df9d4c26eec-secret-metrics-client-certs") pod "metrics-server-59f88c66c8-z4c2f" (UID: "5320a1da-262a-4b1b-93b4-1df9d4c26eec") : failed to sync secret cache: timed out waiting for the condition Mar 18 09:04:17.474967 master-0 kubenswrapper[28766]: E0318 09:04:17.474823 28766 configmap.go:193] Couldn't get configMap openshift-monitoring/metrics-client-ca: failed to sync configmap cache: timed out waiting for the condition Mar 18 09:04:17.474967 master-0 kubenswrapper[28766]: E0318 09:04:17.474956 28766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e0bb044f-5a4e-4981-8084-91348ce1a56a-webhook-certs podName:e0bb044f-5a4e-4981-8084-91348ce1a56a nodeName:}" failed. No retries permitted until 2026-03-18 09:04:17.974934288 +0000 UTC m=+10.989193194 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/e0bb044f-5a4e-4981-8084-91348ce1a56a-webhook-certs") pod "multus-admission-controller-58c9f8fc64-zgrts" (UID: "e0bb044f-5a4e-4981-8084-91348ce1a56a") : failed to sync secret cache: timed out waiting for the condition Mar 18 09:04:17.475197 master-0 kubenswrapper[28766]: E0318 09:04:17.474993 28766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4146a62d-e37b-4295-90ca-b23f5e3d1112-metrics-client-ca podName:4146a62d-e37b-4295-90ca-b23f5e3d1112 nodeName:}" failed. No retries permitted until 2026-03-18 09:04:17.974977729 +0000 UTC m=+10.989236615 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-client-ca" (UniqueName: "kubernetes.io/configmap/4146a62d-e37b-4295-90ca-b23f5e3d1112-metrics-client-ca") pod "node-exporter-75szk" (UID: "4146a62d-e37b-4295-90ca-b23f5e3d1112") : failed to sync configmap cache: timed out waiting for the condition Mar 18 09:04:17.475714 master-0 kubenswrapper[28766]: I0318 09:04:17.475680 28766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-credential-operator"/"cloud-credential-operator-serving-cert" Mar 18 09:04:17.500692 master-0 kubenswrapper[28766]: I0318 09:04:17.500613 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"cco-trusted-ca" Mar 18 09:04:17.514834 master-0 kubenswrapper[28766]: I0318 09:04:17.514759 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"kube-root-ca.crt" Mar 18 09:04:17.538143 master-0 kubenswrapper[28766]: I0318 09:04:17.538096 28766 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 18 09:04:17.541423 master-0 kubenswrapper[28766]: I0318 09:04:17.541377 28766 kubelet_node_status.go:724] "Recording event message for node" node="master-0" 
event="NodeHasSufficientMemory" Mar 18 09:04:17.541571 master-0 kubenswrapper[28766]: I0318 09:04:17.541444 28766 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 18 09:04:17.541571 master-0 kubenswrapper[28766]: I0318 09:04:17.541456 28766 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 18 09:04:17.542017 master-0 kubenswrapper[28766]: I0318 09:04:17.541983 28766 kubelet_node_status.go:76] "Attempting to register node" node="master-0" Mar 18 09:04:17.554968 master-0 kubenswrapper[28766]: I0318 09:04:17.554926 28766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Mar 18 09:04:17.573960 master-0 kubenswrapper[28766]: I0318 09:04:17.573904 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Mar 18 09:04:17.597752 master-0 kubenswrapper[28766]: I0318 09:04:17.597659 28766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Mar 18 09:04:17.612999 master-0 kubenswrapper[28766]: I0318 09:04:17.612929 28766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-3-master-0" Mar 18 09:04:17.614964 master-0 kubenswrapper[28766]: I0318 09:04:17.613920 28766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-webhook-server-cert" Mar 18 09:04:17.637403 master-0 kubenswrapper[28766]: I0318 09:04:17.637306 28766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-operator-tls" Mar 18 09:04:17.656308 master-0 kubenswrapper[28766]: I0318 09:04:17.656237 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"cluster-baremetal-operator-images" Mar 18 09:04:17.675739 master-0 kubenswrapper[28766]: I0318 09:04:17.675680 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"baremetal-kube-rbac-proxy" Mar 18 09:04:17.696255 master-0 kubenswrapper[28766]: I0318 09:04:17.696132 28766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-bpz6r" Mar 18 09:04:17.716699 master-0 kubenswrapper[28766]: I0318 09:04:17.716644 28766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-z6dpv" Mar 18 09:04:17.735292 master-0 kubenswrapper[28766]: I0318 09:04:17.735235 28766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Mar 18 09:04:17.755151 master-0 kubenswrapper[28766]: I0318 09:04:17.755068 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy-cluster-autoscaler-operator" Mar 18 09:04:17.774479 master-0 kubenswrapper[28766]: I0318 09:04:17.774394 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Mar 18 09:04:17.793825 master-0 kubenswrapper[28766]: I0318 09:04:17.793749 28766 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-74fh5" Mar 18 09:04:17.814667 master-0 kubenswrapper[28766]: I0318 09:04:17.814617 28766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-storage-operator"/"cluster-storage-operator-serving-cert" Mar 18 09:04:17.835284 master-0 kubenswrapper[28766]: I0318 09:04:17.835198 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Mar 18 09:04:17.854626 master-0 kubenswrapper[28766]: I0318 09:04:17.854257 28766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-59m7s" Mar 18 09:04:17.874529 master-0 kubenswrapper[28766]: I0318 09:04:17.874457 28766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-storage-operator"/"cluster-storage-operator-dockercfg-jr5t6" Mar 18 09:04:17.896460 master-0 kubenswrapper[28766]: I0318 09:04:17.895819 28766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-6tztw" Mar 18 09:04:17.915642 master-0 kubenswrapper[28766]: I0318 09:04:17.915553 28766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-autoscaler-operator-cert" Mar 18 09:04:17.931312 master-0 kubenswrapper[28766]: I0318 09:04:17.931210 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/91a6fa86-8c58-43bc-a2d4-2b20901269f7-metrics-client-ca\") pod \"kube-state-metrics-7bbc969446-dblgh\" (UID: \"91a6fa86-8c58-43bc-a2d4-2b20901269f7\") " pod="openshift-monitoring/kube-state-metrics-7bbc969446-dblgh" Mar 18 09:04:17.931312 master-0 kubenswrapper[28766]: I0318 09:04:17.931271 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" 
(UniqueName: \"kubernetes.io/configmap/336e741d-ac9a-4b94-9fbb-c9010e37c2d0-mcc-auth-proxy-config\") pod \"machine-config-controller-b4f87c5b9-nm47n\" (UID: \"336e741d-ac9a-4b94-9fbb-c9010e37c2d0\") " pod="openshift-machine-config-operator/machine-config-controller-b4f87c5b9-nm47n" Mar 18 09:04:17.931312 master-0 kubenswrapper[28766]: I0318 09:04:17.931335 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/495e0cff-fca8-4dad-9247-2fc0e7ce86fc-machine-approver-tls\") pod \"machine-approver-5c6485487f-87vpl\" (UID: \"495e0cff-fca8-4dad-9247-2fc0e7ce86fc\") " pod="openshift-cluster-machine-approver/machine-approver-5c6485487f-87vpl" Mar 18 09:04:17.933542 master-0 kubenswrapper[28766]: I0318 09:04:17.931382 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ffc5379c-651f-490c-90f4-1285b9093596-cert\") pod \"cluster-autoscaler-operator-866dc4744-lxj7x\" (UID: \"ffc5379c-651f-490c-90f4-1285b9093596\") " pod="openshift-machine-api/cluster-autoscaler-operator-866dc4744-lxj7x" Mar 18 09:04:17.933542 master-0 kubenswrapper[28766]: I0318 09:04:17.931463 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/495e0cff-fca8-4dad-9247-2fc0e7ce86fc-auth-proxy-config\") pod \"machine-approver-5c6485487f-87vpl\" (UID: \"495e0cff-fca8-4dad-9247-2fc0e7ce86fc\") " pod="openshift-cluster-machine-approver/machine-approver-5c6485487f-87vpl" Mar 18 09:04:17.933542 master-0 kubenswrapper[28766]: I0318 09:04:17.931516 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a7dab805-612b-404c-ab97-8cee927169db-mcd-auth-proxy-config\") pod \"machine-config-daemon-qsj46\" (UID: \"a7dab805-612b-404c-ab97-8cee927169db\") " 
pod="openshift-machine-config-operator/machine-config-daemon-qsj46" Mar 18 09:04:17.933542 master-0 kubenswrapper[28766]: I0318 09:04:17.931542 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/d71aa1b9-6eb5-4331-b959-8930e10817b4-prometheus-operator-tls\") pod \"prometheus-operator-6c8df6d4b-8kgdq\" (UID: \"d71aa1b9-6eb5-4331-b959-8930e10817b4\") " pod="openshift-monitoring/prometheus-operator-6c8df6d4b-8kgdq" Mar 18 09:04:17.933542 master-0 kubenswrapper[28766]: I0318 09:04:17.931596 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-storage-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/fc5a9875-d97e-4371-a15d-a1f43b85abce-cluster-storage-operator-serving-cert\") pod \"cluster-storage-operator-7d87854d6-srhr6\" (UID: \"fc5a9875-d97e-4371-a15d-a1f43b85abce\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-7d87854d6-srhr6" Mar 18 09:04:17.933542 master-0 kubenswrapper[28766]: I0318 09:04:17.931631 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b35ab145-16a7-4ef1-86e8-0afb6ff469fd-config-volume\") pod \"dns-default-ck7b5\" (UID: \"b35ab145-16a7-4ef1-86e8-0afb6ff469fd\") " pod="openshift-dns/dns-default-ck7b5" Mar 18 09:04:17.933542 master-0 kubenswrapper[28766]: I0318 09:04:17.931695 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/ccf74af5-d4fd-4ed3-9784-42397ea798c5-cloud-controller-manager-operator-tls\") pod \"cluster-cloud-controller-manager-operator-7dff898856-9xtls\" (UID: \"ccf74af5-d4fd-4ed3-9784-42397ea798c5\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-9xtls" Mar 18 09:04:17.933542 master-0 kubenswrapper[28766]: I0318 
09:04:17.931735 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/97730ec2-e6f1-4f8c-b85c-3c10623d06ce-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-6f69995874-cf6qn\" (UID: \"97730ec2-e6f1-4f8c-b85c-3c10623d06ce\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-cf6qn" Mar 18 09:04:17.933542 master-0 kubenswrapper[28766]: I0318 09:04:17.931793 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ccf74af5-d4fd-4ed3-9784-42397ea798c5-auth-proxy-config\") pod \"cluster-cloud-controller-manager-operator-7dff898856-9xtls\" (UID: \"ccf74af5-d4fd-4ed3-9784-42397ea798c5\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-9xtls" Mar 18 09:04:17.933542 master-0 kubenswrapper[28766]: I0318 09:04:17.931812 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/91a6fa86-8c58-43bc-a2d4-2b20901269f7-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-7bbc969446-dblgh\" (UID: \"91a6fa86-8c58-43bc-a2d4-2b20901269f7\") " pod="openshift-monitoring/kube-state-metrics-7bbc969446-dblgh" Mar 18 09:04:17.933542 master-0 kubenswrapper[28766]: I0318 09:04:17.932411 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/40f3b7a4-107c-4f1d-a3ab-b5d2309c373b-proxy-tls\") pod \"machine-config-operator-84d549f6d5-4hj54\" (UID: \"40f3b7a4-107c-4f1d-a3ab-b5d2309c373b\") " pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-4hj54" Mar 18 09:04:17.933542 master-0 kubenswrapper[28766]: I0318 09:04:17.932559 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b35ab145-16a7-4ef1-86e8-0afb6ff469fd-config-volume\") pod \"dns-default-ck7b5\" (UID: \"b35ab145-16a7-4ef1-86e8-0afb6ff469fd\") " pod="openshift-dns/dns-default-ck7b5" Mar 18 09:04:17.933542 master-0 kubenswrapper[28766]: I0318 09:04:17.932657 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/d71aa1b9-6eb5-4331-b959-8930e10817b4-metrics-client-ca\") pod \"prometheus-operator-6c8df6d4b-8kgdq\" (UID: \"d71aa1b9-6eb5-4331-b959-8930e10817b4\") " pod="openshift-monitoring/prometheus-operator-6c8df6d4b-8kgdq" Mar 18 09:04:17.933542 master-0 kubenswrapper[28766]: I0318 09:04:17.932727 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/1794b726-5c0d-4a72-8ddd-418a2cbd8ded-webhook-cert\") pod \"packageserver-5f48d895dc-ttr9f\" (UID: \"1794b726-5c0d-4a72-8ddd-418a2cbd8ded\") " pod="openshift-operator-lifecycle-manager/packageserver-5f48d895dc-ttr9f" Mar 18 09:04:17.933542 master-0 kubenswrapper[28766]: I0318 09:04:17.933043 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ffc5379c-651f-490c-90f4-1285b9093596-cert\") pod \"cluster-autoscaler-operator-866dc4744-lxj7x\" (UID: \"ffc5379c-651f-490c-90f4-1285b9093596\") " pod="openshift-machine-api/cluster-autoscaler-operator-866dc4744-lxj7x" Mar 18 09:04:17.933542 master-0 kubenswrapper[28766]: I0318 09:04:17.933070 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/91a6fa86-8c58-43bc-a2d4-2b20901269f7-kube-state-metrics-tls\") pod \"kube-state-metrics-7bbc969446-dblgh\" (UID: \"91a6fa86-8c58-43bc-a2d4-2b20901269f7\") " pod="openshift-monitoring/kube-state-metrics-7bbc969446-dblgh" Mar 18 09:04:17.933542 master-0 
kubenswrapper[28766]: I0318 09:04:17.933127 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-credential-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/e64ea71a-1e89-409a-9607-4d3cea093643-cloud-credential-operator-serving-cert\") pod \"cloud-credential-operator-744f9dbf77-v8ft8\" (UID: \"e64ea71a-1e89-409a-9607-4d3cea093643\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-744f9dbf77-v8ft8" Mar 18 09:04:17.933542 master-0 kubenswrapper[28766]: I0318 09:04:17.933144 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/97730ec2-e6f1-4f8c-b85c-3c10623d06ce-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-6f69995874-cf6qn\" (UID: \"97730ec2-e6f1-4f8c-b85c-3c10623d06ce\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-cf6qn" Mar 18 09:04:17.933542 master-0 kubenswrapper[28766]: I0318 09:04:17.933188 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/18921497-d8ed-42d8-bf3c-a027566ebe85-samples-operator-tls\") pod \"cluster-samples-operator-85f7577d78-swcvh\" (UID: \"18921497-d8ed-42d8-bf3c-a027566ebe85\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-85f7577d78-swcvh" Mar 18 09:04:17.933542 master-0 kubenswrapper[28766]: I0318 09:04:17.933210 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-storage-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/fc5a9875-d97e-4371-a15d-a1f43b85abce-cluster-storage-operator-serving-cert\") pod \"cluster-storage-operator-7d87854d6-srhr6\" (UID: \"fc5a9875-d97e-4371-a15d-a1f43b85abce\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-7d87854d6-srhr6" Mar 18 09:04:17.933542 master-0 kubenswrapper[28766]: I0318 09:04:17.933234 28766 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/b35ab145-16a7-4ef1-86e8-0afb6ff469fd-metrics-tls\") pod \"dns-default-ck7b5\" (UID: \"b35ab145-16a7-4ef1-86e8-0afb6ff469fd\") " pod="openshift-dns/dns-default-ck7b5" Mar 18 09:04:17.933542 master-0 kubenswrapper[28766]: I0318 09:04:17.933318 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/1794b726-5c0d-4a72-8ddd-418a2cbd8ded-webhook-cert\") pod \"packageserver-5f48d895dc-ttr9f\" (UID: \"1794b726-5c0d-4a72-8ddd-418a2cbd8ded\") " pod="openshift-operator-lifecycle-manager/packageserver-5f48d895dc-ttr9f" Mar 18 09:04:17.933542 master-0 kubenswrapper[28766]: I0318 09:04:17.933376 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/31a92270-efed-44fe-871e-90333235e85f-serving-cert\") pod \"insights-operator-68bf6ff9d6-kv7n5\" (UID: \"31a92270-efed-44fe-871e-90333235e85f\") " pod="openshift-insights/insights-operator-68bf6ff9d6-kv7n5" Mar 18 09:04:17.933542 master-0 kubenswrapper[28766]: I0318 09:04:17.933445 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloud-credential-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/e64ea71a-1e89-409a-9607-4d3cea093643-cloud-credential-operator-serving-cert\") pod \"cloud-credential-operator-744f9dbf77-v8ft8\" (UID: \"e64ea71a-1e89-409a-9607-4d3cea093643\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-744f9dbf77-v8ft8" Mar 18 09:04:17.933542 master-0 kubenswrapper[28766]: I0318 09:04:17.933464 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/b35ab145-16a7-4ef1-86e8-0afb6ff469fd-metrics-tls\") pod \"dns-default-ck7b5\" (UID: \"b35ab145-16a7-4ef1-86e8-0afb6ff469fd\") " pod="openshift-dns/dns-default-ck7b5" Mar 18 09:04:17.933542 master-0 
kubenswrapper[28766]: I0318 09:04:17.933497 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/3e96b35f-c57a-4e01-82f7-894ea16ac5b8-node-bootstrap-token\") pod \"machine-config-server-2jsz9\" (UID: \"3e96b35f-c57a-4e01-82f7-894ea16ac5b8\") " pod="openshift-machine-config-operator/machine-config-server-2jsz9" Mar 18 09:04:17.933542 master-0 kubenswrapper[28766]: I0318 09:04:17.933541 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/18921497-d8ed-42d8-bf3c-a027566ebe85-samples-operator-tls\") pod \"cluster-samples-operator-85f7577d78-swcvh\" (UID: \"18921497-d8ed-42d8-bf3c-a027566ebe85\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-85f7577d78-swcvh" Mar 18 09:04:17.933542 master-0 kubenswrapper[28766]: I0318 09:04:17.933573 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ffc5379c-651f-490c-90f4-1285b9093596-auth-proxy-config\") pod \"cluster-autoscaler-operator-866dc4744-lxj7x\" (UID: \"ffc5379c-651f-490c-90f4-1285b9093596\") " pod="openshift-machine-api/cluster-autoscaler-operator-866dc4744-lxj7x" Mar 18 09:04:17.936396 master-0 kubenswrapper[28766]: I0318 09:04:17.933665 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/ccf74af5-d4fd-4ed3-9784-42397ea798c5-images\") pod \"cluster-cloud-controller-manager-operator-7dff898856-9xtls\" (UID: \"ccf74af5-d4fd-4ed3-9784-42397ea798c5\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-9xtls" Mar 18 09:04:17.936396 master-0 kubenswrapper[28766]: I0318 09:04:17.933744 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/495e0cff-fca8-4dad-9247-2fc0e7ce86fc-config\") pod \"machine-approver-5c6485487f-87vpl\" (UID: \"495e0cff-fca8-4dad-9247-2fc0e7ce86fc\") " pod="openshift-cluster-machine-approver/machine-approver-5c6485487f-87vpl" Mar 18 09:04:17.936396 master-0 kubenswrapper[28766]: I0318 09:04:17.933772 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/d71aa1b9-6eb5-4331-b959-8930e10817b4-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-6c8df6d4b-8kgdq\" (UID: \"d71aa1b9-6eb5-4331-b959-8930e10817b4\") " pod="openshift-monitoring/prometheus-operator-6c8df6d4b-8kgdq" Mar 18 09:04:17.936396 master-0 kubenswrapper[28766]: I0318 09:04:17.933907 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/336e741d-ac9a-4b94-9fbb-c9010e37c2d0-proxy-tls\") pod \"machine-config-controller-b4f87c5b9-nm47n\" (UID: \"336e741d-ac9a-4b94-9fbb-c9010e37c2d0\") " pod="openshift-machine-config-operator/machine-config-controller-b4f87c5b9-nm47n" Mar 18 09:04:17.936396 master-0 kubenswrapper[28766]: I0318 09:04:17.933973 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/31a92270-efed-44fe-871e-90333235e85f-service-ca-bundle\") pod \"insights-operator-68bf6ff9d6-kv7n5\" (UID: \"31a92270-efed-44fe-871e-90333235e85f\") " pod="openshift-insights/insights-operator-68bf6ff9d6-kv7n5" Mar 18 09:04:17.936396 master-0 kubenswrapper[28766]: I0318 09:04:17.933969 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ffc5379c-651f-490c-90f4-1285b9093596-auth-proxy-config\") pod \"cluster-autoscaler-operator-866dc4744-lxj7x\" (UID: \"ffc5379c-651f-490c-90f4-1285b9093596\") " 
pod="openshift-machine-api/cluster-autoscaler-operator-866dc4744-lxj7x" Mar 18 09:04:17.936396 master-0 kubenswrapper[28766]: I0318 09:04:17.934067 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/1794b726-5c0d-4a72-8ddd-418a2cbd8ded-apiservice-cert\") pod \"packageserver-5f48d895dc-ttr9f\" (UID: \"1794b726-5c0d-4a72-8ddd-418a2cbd8ded\") " pod="openshift-operator-lifecycle-manager/packageserver-5f48d895dc-ttr9f" Mar 18 09:04:17.936396 master-0 kubenswrapper[28766]: I0318 09:04:17.934183 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Mar 18 09:04:17.936396 master-0 kubenswrapper[28766]: I0318 09:04:17.934190 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b9768e50-c883-47b0-b319-851fa53ac19a-config\") pod \"machine-api-operator-6fbb6cf6f9-z6nw9\" (UID: \"b9768e50-c883-47b0-b319-851fa53ac19a\") " pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-z6nw9" Mar 18 09:04:17.936396 master-0 kubenswrapper[28766]: I0318 09:04:17.934484 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b9768e50-c883-47b0-b319-851fa53ac19a-config\") pod \"machine-api-operator-6fbb6cf6f9-z6nw9\" (UID: \"b9768e50-c883-47b0-b319-851fa53ac19a\") " pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-z6nw9" Mar 18 09:04:17.936396 master-0 kubenswrapper[28766]: I0318 09:04:17.934499 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/b9768e50-c883-47b0-b319-851fa53ac19a-machine-api-operator-tls\") pod \"machine-api-operator-6fbb6cf6f9-z6nw9\" (UID: \"b9768e50-c883-47b0-b319-851fa53ac19a\") " pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-z6nw9" Mar 18 
09:04:17.936396 master-0 kubenswrapper[28766]: I0318 09:04:17.934540 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/1794b726-5c0d-4a72-8ddd-418a2cbd8ded-apiservice-cert\") pod \"packageserver-5f48d895dc-ttr9f\" (UID: \"1794b726-5c0d-4a72-8ddd-418a2cbd8ded\") " pod="openshift-operator-lifecycle-manager/packageserver-5f48d895dc-ttr9f" Mar 18 09:04:17.936396 master-0 kubenswrapper[28766]: I0318 09:04:17.934623 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/97730ec2-e6f1-4f8c-b85c-3c10623d06ce-images\") pod \"cluster-baremetal-operator-6f69995874-cf6qn\" (UID: \"97730ec2-e6f1-4f8c-b85c-3c10623d06ce\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-cf6qn" Mar 18 09:04:17.936396 master-0 kubenswrapper[28766]: I0318 09:04:17.934696 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/3e96b35f-c57a-4e01-82f7-894ea16ac5b8-certs\") pod \"machine-config-server-2jsz9\" (UID: \"3e96b35f-c57a-4e01-82f7-894ea16ac5b8\") " pod="openshift-machine-config-operator/machine-config-server-2jsz9" Mar 18 09:04:17.936396 master-0 kubenswrapper[28766]: I0318 09:04:17.934766 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/97730ec2-e6f1-4f8c-b85c-3c10623d06ce-config\") pod \"cluster-baremetal-operator-6f69995874-cf6qn\" (UID: \"97730ec2-e6f1-4f8c-b85c-3c10623d06ce\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-cf6qn" Mar 18 09:04:17.936396 master-0 kubenswrapper[28766]: I0318 09:04:17.934901 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/b9768e50-c883-47b0-b319-851fa53ac19a-machine-api-operator-tls\") pod 
\"machine-api-operator-6fbb6cf6f9-z6nw9\" (UID: \"b9768e50-c883-47b0-b319-851fa53ac19a\") " pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-z6nw9" Mar 18 09:04:17.936396 master-0 kubenswrapper[28766]: I0318 09:04:17.935108 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/97730ec2-e6f1-4f8c-b85c-3c10623d06ce-cert\") pod \"cluster-baremetal-operator-6f69995874-cf6qn\" (UID: \"97730ec2-e6f1-4f8c-b85c-3c10623d06ce\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-cf6qn" Mar 18 09:04:17.936396 master-0 kubenswrapper[28766]: I0318 09:04:17.935143 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/97730ec2-e6f1-4f8c-b85c-3c10623d06ce-images\") pod \"cluster-baremetal-operator-6f69995874-cf6qn\" (UID: \"97730ec2-e6f1-4f8c-b85c-3c10623d06ce\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-cf6qn" Mar 18 09:04:17.936396 master-0 kubenswrapper[28766]: I0318 09:04:17.935221 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/a7dab805-612b-404c-ab97-8cee927169db-proxy-tls\") pod \"machine-config-daemon-qsj46\" (UID: \"a7dab805-612b-404c-ab97-8cee927169db\") " pod="openshift-machine-config-operator/machine-config-daemon-qsj46" Mar 18 09:04:17.936396 master-0 kubenswrapper[28766]: I0318 09:04:17.935271 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/b9768e50-c883-47b0-b319-851fa53ac19a-images\") pod \"machine-api-operator-6fbb6cf6f9-z6nw9\" (UID: \"b9768e50-c883-47b0-b319-851fa53ac19a\") " pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-z6nw9" Mar 18 09:04:17.936396 master-0 kubenswrapper[28766]: I0318 09:04:17.935330 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/40f3b7a4-107c-4f1d-a3ab-b5d2309c373b-auth-proxy-config\") pod \"machine-config-operator-84d549f6d5-4hj54\" (UID: \"40f3b7a4-107c-4f1d-a3ab-b5d2309c373b\") " pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-4hj54" Mar 18 09:04:17.936396 master-0 kubenswrapper[28766]: I0318 09:04:17.935375 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/40f3b7a4-107c-4f1d-a3ab-b5d2309c373b-images\") pod \"machine-config-operator-84d549f6d5-4hj54\" (UID: \"40f3b7a4-107c-4f1d-a3ab-b5d2309c373b\") " pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-4hj54" Mar 18 09:04:17.936396 master-0 kubenswrapper[28766]: I0318 09:04:17.935394 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/97730ec2-e6f1-4f8c-b85c-3c10623d06ce-config\") pod \"cluster-baremetal-operator-6f69995874-cf6qn\" (UID: \"97730ec2-e6f1-4f8c-b85c-3c10623d06ce\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-cf6qn" Mar 18 09:04:17.936396 master-0 kubenswrapper[28766]: I0318 09:04:17.935471 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/97730ec2-e6f1-4f8c-b85c-3c10623d06ce-cert\") pod \"cluster-baremetal-operator-6f69995874-cf6qn\" (UID: \"97730ec2-e6f1-4f8c-b85c-3c10623d06ce\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-cf6qn" Mar 18 09:04:17.936396 master-0 kubenswrapper[28766]: I0318 09:04:17.935475 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cco-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e64ea71a-1e89-409a-9607-4d3cea093643-cco-trusted-ca\") pod \"cloud-credential-operator-744f9dbf77-v8ft8\" (UID: \"e64ea71a-1e89-409a-9607-4d3cea093643\") " 
pod="openshift-cloud-credential-operator/cloud-credential-operator-744f9dbf77-v8ft8" Mar 18 09:04:17.936396 master-0 kubenswrapper[28766]: I0318 09:04:17.935563 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/b9768e50-c883-47b0-b319-851fa53ac19a-images\") pod \"machine-api-operator-6fbb6cf6f9-z6nw9\" (UID: \"b9768e50-c883-47b0-b319-851fa53ac19a\") " pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-z6nw9" Mar 18 09:04:17.936396 master-0 kubenswrapper[28766]: I0318 09:04:17.935597 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/31a92270-efed-44fe-871e-90333235e85f-trusted-ca-bundle\") pod \"insights-operator-68bf6ff9d6-kv7n5\" (UID: \"31a92270-efed-44fe-871e-90333235e85f\") " pod="openshift-insights/insights-operator-68bf6ff9d6-kv7n5" Mar 18 09:04:17.936396 master-0 kubenswrapper[28766]: I0318 09:04:17.935666 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/91a6fa86-8c58-43bc-a2d4-2b20901269f7-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-7bbc969446-dblgh\" (UID: \"91a6fa86-8c58-43bc-a2d4-2b20901269f7\") " pod="openshift-monitoring/kube-state-metrics-7bbc969446-dblgh" Mar 18 09:04:17.936396 master-0 kubenswrapper[28766]: I0318 09:04:17.935771 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cco-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e64ea71a-1e89-409a-9607-4d3cea093643-cco-trusted-ca\") pod \"cloud-credential-operator-744f9dbf77-v8ft8\" (UID: \"e64ea71a-1e89-409a-9607-4d3cea093643\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-744f9dbf77-v8ft8" Mar 18 09:04:17.954660 master-0 kubenswrapper[28766]: I0318 09:04:17.954343 28766 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-cluster-machine-approver"/"machine-approver-tls" Mar 18 09:04:17.964339 master-0 kubenswrapper[28766]: I0318 09:04:17.964279 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/495e0cff-fca8-4dad-9247-2fc0e7ce86fc-machine-approver-tls\") pod \"machine-approver-5c6485487f-87vpl\" (UID: \"495e0cff-fca8-4dad-9247-2fc0e7ce86fc\") " pod="openshift-cluster-machine-approver/machine-approver-5c6485487f-87vpl" Mar 18 09:04:17.977760 master-0 kubenswrapper[28766]: I0318 09:04:17.977641 28766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Mar 18 09:04:17.986188 master-0 kubenswrapper[28766]: I0318 09:04:17.986067 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/a7dab805-612b-404c-ab97-8cee927169db-proxy-tls\") pod \"machine-config-daemon-qsj46\" (UID: \"a7dab805-612b-404c-ab97-8cee927169db\") " pod="openshift-machine-config-operator/machine-config-daemon-qsj46" Mar 18 09:04:17.995631 master-0 kubenswrapper[28766]: I0318 09:04:17.995522 28766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-222ht" Mar 18 09:04:18.015749 master-0 kubenswrapper[28766]: I0318 09:04:18.015686 28766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-autoscaler-operator-dockercfg-2zcks" Mar 18 09:04:18.034173 master-0 kubenswrapper[28766]: I0318 09:04:18.034125 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Mar 18 09:04:18.037066 master-0 kubenswrapper[28766]: I0318 09:04:18.036996 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/5320a1da-262a-4b1b-93b4-1df9d4c26eec-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-59f88c66c8-z4c2f\" (UID: \"5320a1da-262a-4b1b-93b4-1df9d4c26eec\") " pod="openshift-monitoring/metrics-server-59f88c66c8-z4c2f" Mar 18 09:04:18.037267 master-0 kubenswrapper[28766]: I0318 09:04:18.037226 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-telemeter-client\" (UniqueName: \"kubernetes.io/secret/e5ae1886-f90c-49f4-bf08-055b55dd785a-secret-telemeter-client\") pod \"telemeter-client-5d4d5995f-s5dw8\" (UID: \"e5ae1886-f90c-49f4-bf08-055b55dd785a\") " pod="openshift-monitoring/telemeter-client-5d4d5995f-s5dw8" Mar 18 09:04:18.037396 master-0 kubenswrapper[28766]: I0318 09:04:18.037358 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemeter-client-tls\" (UniqueName: \"kubernetes.io/secret/e5ae1886-f90c-49f4-bf08-055b55dd785a-telemeter-client-tls\") pod \"telemeter-client-5d4d5995f-s5dw8\" (UID: \"e5ae1886-f90c-49f4-bf08-055b55dd785a\") " pod="openshift-monitoring/telemeter-client-5d4d5995f-s5dw8" Mar 18 09:04:18.037454 master-0 kubenswrapper[28766]: I0318 09:04:18.037405 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/5320a1da-262a-4b1b-93b4-1df9d4c26eec-metrics-server-audit-profiles\") pod \"metrics-server-59f88c66c8-z4c2f\" (UID: \"5320a1da-262a-4b1b-93b4-1df9d4c26eec\") " pod="openshift-monitoring/metrics-server-59f88c66c8-z4c2f" Mar 18 09:04:18.037500 master-0 kubenswrapper[28766]: I0318 09:04:18.037457 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5320a1da-262a-4b1b-93b4-1df9d4c26eec-client-ca-bundle\") pod \"metrics-server-59f88c66c8-z4c2f\" (UID: \"5320a1da-262a-4b1b-93b4-1df9d4c26eec\") " pod="openshift-monitoring/metrics-server-59f88c66c8-z4c2f" Mar 18 
09:04:18.037592 master-0 kubenswrapper[28766]: I0318 09:04:18.037566 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e5ae1886-f90c-49f4-bf08-055b55dd785a-serving-certs-ca-bundle\") pod \"telemeter-client-5d4d5995f-s5dw8\" (UID: \"e5ae1886-f90c-49f4-bf08-055b55dd785a\") " pod="openshift-monitoring/telemeter-client-5d4d5995f-s5dw8" Mar 18 09:04:18.037671 master-0 kubenswrapper[28766]: I0318 09:04:18.037599 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/06cbd48a-1f1d-4734-8d57-e1b6824879b6-metrics-client-ca\") pod \"openshift-state-metrics-5dc6c74576-dsq5f\" (UID: \"06cbd48a-1f1d-4734-8d57-e1b6824879b6\") " pod="openshift-monitoring/openshift-state-metrics-5dc6c74576-dsq5f" Mar 18 09:04:18.037723 master-0 kubenswrapper[28766]: I0318 09:04:18.037687 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"federate-client-tls\" (UniqueName: \"kubernetes.io/secret/e5ae1886-f90c-49f4-bf08-055b55dd785a-federate-client-tls\") pod \"telemeter-client-5d4d5995f-s5dw8\" (UID: \"e5ae1886-f90c-49f4-bf08-055b55dd785a\") " pod="openshift-monitoring/telemeter-client-5d4d5995f-s5dw8" Mar 18 09:04:18.037723 master-0 kubenswrapper[28766]: I0318 09:04:18.037714 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/5320a1da-262a-4b1b-93b4-1df9d4c26eec-secret-metrics-server-tls\") pod \"metrics-server-59f88c66c8-z4c2f\" (UID: \"5320a1da-262a-4b1b-93b4-1df9d4c26eec\") " pod="openshift-monitoring/metrics-server-59f88c66c8-z4c2f" Mar 18 09:04:18.037834 master-0 kubenswrapper[28766]: I0318 09:04:18.037797 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-tls\" (UniqueName: 
\"kubernetes.io/secret/4146a62d-e37b-4295-90ca-b23f5e3d1112-node-exporter-tls\") pod \"node-exporter-75szk\" (UID: \"4146a62d-e37b-4295-90ca-b23f5e3d1112\") " pod="openshift-monitoring/node-exporter-75szk" Mar 18 09:04:18.037908 master-0 kubenswrapper[28766]: I0318 09:04:18.037838 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/4146a62d-e37b-4295-90ca-b23f5e3d1112-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-75szk\" (UID: \"4146a62d-e37b-4295-90ca-b23f5e3d1112\") " pod="openshift-monitoring/node-exporter-75szk" Mar 18 09:04:18.037993 master-0 kubenswrapper[28766]: I0318 09:04:18.037967 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/06cbd48a-1f1d-4734-8d57-e1b6824879b6-openshift-state-metrics-tls\") pod \"openshift-state-metrics-5dc6c74576-dsq5f\" (UID: \"06cbd48a-1f1d-4734-8d57-e1b6824879b6\") " pod="openshift-monitoring/openshift-state-metrics-5dc6c74576-dsq5f" Mar 18 09:04:18.038069 master-0 kubenswrapper[28766]: I0318 09:04:18.038041 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/06cbd48a-1f1d-4734-8d57-e1b6824879b6-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-5dc6c74576-dsq5f\" (UID: \"06cbd48a-1f1d-4734-8d57-e1b6824879b6\") " pod="openshift-monitoring/openshift-state-metrics-5dc6c74576-dsq5f" Mar 18 09:04:18.038118 master-0 kubenswrapper[28766]: I0318 09:04:18.038080 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/e0bb044f-5a4e-4981-8084-91348ce1a56a-webhook-certs\") pod \"multus-admission-controller-58c9f8fc64-zgrts\" (UID: \"e0bb044f-5a4e-4981-8084-91348ce1a56a\") " 
pod="openshift-multus/multus-admission-controller-58c9f8fc64-zgrts" Mar 18 09:04:18.038118 master-0 kubenswrapper[28766]: I0318 09:04:18.038113 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/5320a1da-262a-4b1b-93b4-1df9d4c26eec-secret-metrics-client-certs\") pod \"metrics-server-59f88c66c8-z4c2f\" (UID: \"5320a1da-262a-4b1b-93b4-1df9d4c26eec\") " pod="openshift-monitoring/metrics-server-59f88c66c8-z4c2f" Mar 18 09:04:18.038282 master-0 kubenswrapper[28766]: I0318 09:04:18.038242 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/e5ae1886-f90c-49f4-bf08-055b55dd785a-metrics-client-ca\") pod \"telemeter-client-5d4d5995f-s5dw8\" (UID: \"e5ae1886-f90c-49f4-bf08-055b55dd785a\") " pod="openshift-monitoring/telemeter-client-5d4d5995f-s5dw8" Mar 18 09:04:18.038282 master-0 kubenswrapper[28766]: I0318 09:04:18.038279 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/4146a62d-e37b-4295-90ca-b23f5e3d1112-metrics-client-ca\") pod \"node-exporter-75szk\" (UID: \"4146a62d-e37b-4295-90ca-b23f5e3d1112\") " pod="openshift-monitoring/node-exporter-75szk" Mar 18 09:04:18.038385 master-0 kubenswrapper[28766]: I0318 09:04:18.038360 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-telemeter-client-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/e5ae1886-f90c-49f4-bf08-055b55dd785a-secret-telemeter-client-kube-rbac-proxy-config\") pod \"telemeter-client-5d4d5995f-s5dw8\" (UID: \"e5ae1886-f90c-49f4-bf08-055b55dd785a\") " pod="openshift-monitoring/telemeter-client-5d4d5995f-s5dw8" Mar 18 09:04:18.038429 master-0 kubenswrapper[28766]: I0318 09:04:18.038408 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"telemeter-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e5ae1886-f90c-49f4-bf08-055b55dd785a-telemeter-trusted-ca-bundle\") pod \"telemeter-client-5d4d5995f-s5dw8\" (UID: \"e5ae1886-f90c-49f4-bf08-055b55dd785a\") " pod="openshift-monitoring/telemeter-client-5d4d5995f-s5dw8" Mar 18 09:04:18.038497 master-0 kubenswrapper[28766]: I0318 09:04:18.038471 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d0272f7c-bedc-44cf-9790-88e10e6dda03-cert\") pod \"ingress-canary-mpw9b\" (UID: \"d0272f7c-bedc-44cf-9790-88e10e6dda03\") " pod="openshift-ingress-canary/ingress-canary-mpw9b" Mar 18 09:04:18.039669 master-0 kubenswrapper[28766]: I0318 09:04:18.039605 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/40f3b7a4-107c-4f1d-a3ab-b5d2309c373b-auth-proxy-config\") pod \"machine-config-operator-84d549f6d5-4hj54\" (UID: \"40f3b7a4-107c-4f1d-a3ab-b5d2309c373b\") " pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-4hj54" Mar 18 09:04:18.042983 master-0 kubenswrapper[28766]: I0318 09:04:18.042936 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a7dab805-612b-404c-ab97-8cee927169db-mcd-auth-proxy-config\") pod \"machine-config-daemon-qsj46\" (UID: \"a7dab805-612b-404c-ab97-8cee927169db\") " pod="openshift-machine-config-operator/machine-config-daemon-qsj46" Mar 18 09:04:18.043073 master-0 kubenswrapper[28766]: I0318 09:04:18.042982 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/336e741d-ac9a-4b94-9fbb-c9010e37c2d0-mcc-auth-proxy-config\") pod \"machine-config-controller-b4f87c5b9-nm47n\" (UID: \"336e741d-ac9a-4b94-9fbb-c9010e37c2d0\") " 
pod="openshift-machine-config-operator/machine-config-controller-b4f87c5b9-nm47n" Mar 18 09:04:18.054190 master-0 kubenswrapper[28766]: I0318 09:04:18.054132 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Mar 18 09:04:18.075412 master-0 kubenswrapper[28766]: I0318 09:04:18.075357 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Mar 18 09:04:18.096752 master-0 kubenswrapper[28766]: I0318 09:04:18.096688 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Mar 18 09:04:18.106686 master-0 kubenswrapper[28766]: I0318 09:04:18.106611 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/40f3b7a4-107c-4f1d-a3ab-b5d2309c373b-images\") pod \"machine-config-operator-84d549f6d5-4hj54\" (UID: \"40f3b7a4-107c-4f1d-a3ab-b5d2309c373b\") " pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-4hj54" Mar 18 09:04:18.115386 master-0 kubenswrapper[28766]: I0318 09:04:18.115332 28766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Mar 18 09:04:18.124241 master-0 kubenswrapper[28766]: I0318 09:04:18.124185 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/3e96b35f-c57a-4e01-82f7-894ea16ac5b8-node-bootstrap-token\") pod \"machine-config-server-2jsz9\" (UID: \"3e96b35f-c57a-4e01-82f7-894ea16ac5b8\") " pod="openshift-machine-config-operator/machine-config-server-2jsz9" Mar 18 09:04:18.138006 master-0 kubenswrapper[28766]: I0318 09:04:18.137915 28766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Mar 18 09:04:18.145401 
master-0 kubenswrapper[28766]: I0318 09:04:18.145327 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/3e96b35f-c57a-4e01-82f7-894ea16ac5b8-certs\") pod \"machine-config-server-2jsz9\" (UID: \"3e96b35f-c57a-4e01-82f7-894ea16ac5b8\") " pod="openshift-machine-config-operator/machine-config-server-2jsz9" Mar 18 09:04:18.154877 master-0 kubenswrapper[28766]: I0318 09:04:18.154649 28766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-jdt5h" Mar 18 09:04:18.174488 master-0 kubenswrapper[28766]: I0318 09:04:18.174425 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"kube-rbac-proxy" Mar 18 09:04:18.182903 master-0 kubenswrapper[28766]: I0318 09:04:18.182805 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ccf74af5-d4fd-4ed3-9784-42397ea798c5-auth-proxy-config\") pod \"cluster-cloud-controller-manager-operator-7dff898856-9xtls\" (UID: \"ccf74af5-d4fd-4ed3-9784-42397ea798c5\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-9xtls" Mar 18 09:04:18.197456 master-0 kubenswrapper[28766]: I0318 09:04:18.197392 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-images" Mar 18 09:04:18.204709 master-0 kubenswrapper[28766]: I0318 09:04:18.204639 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/ccf74af5-d4fd-4ed3-9784-42397ea798c5-images\") pod \"cluster-cloud-controller-manager-operator-7dff898856-9xtls\" (UID: \"ccf74af5-d4fd-4ed3-9784-42397ea798c5\") " 
pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-9xtls" Mar 18 09:04:18.217768 master-0 kubenswrapper[28766]: I0318 09:04:18.216350 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Mar 18 09:04:18.234887 master-0 kubenswrapper[28766]: I0318 09:04:18.234701 28766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-jzd99" Mar 18 09:04:18.256206 master-0 kubenswrapper[28766]: I0318 09:04:18.256081 28766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Mar 18 09:04:18.263407 master-0 kubenswrapper[28766]: I0318 09:04:18.263361 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/40f3b7a4-107c-4f1d-a3ab-b5d2309c373b-proxy-tls\") pod \"machine-config-operator-84d549f6d5-4hj54\" (UID: \"40f3b7a4-107c-4f1d-a3ab-b5d2309c373b\") " pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-4hj54" Mar 18 09:04:18.274958 master-0 kubenswrapper[28766]: I0318 09:04:18.274905 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"kube-root-ca.crt" Mar 18 09:04:18.292694 master-0 kubenswrapper[28766]: I0318 09:04:18.292639 28766 request.go:700] Waited for 2.003060105s due to client-side throttling, not priority and fairness, request: GET:https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-insights/configmaps?fieldSelector=metadata.name%3Dservice-ca-bundle&limit=500&resourceVersion=0 Mar 18 09:04:18.294146 master-0 kubenswrapper[28766]: I0318 09:04:18.294120 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"service-ca-bundle" Mar 18 09:04:18.295770 master-0 kubenswrapper[28766]: I0318 09:04:18.294660 28766 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/31a92270-efed-44fe-871e-90333235e85f-service-ca-bundle\") pod \"insights-operator-68bf6ff9d6-kv7n5\" (UID: \"31a92270-efed-44fe-871e-90333235e85f\") " pod="openshift-insights/insights-operator-68bf6ff9d6-kv7n5" Mar 18 09:04:18.313983 master-0 kubenswrapper[28766]: I0318 09:04:18.313927 28766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-dockercfg-kmxfz" Mar 18 09:04:18.335301 master-0 kubenswrapper[28766]: I0318 09:04:18.335219 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Mar 18 09:04:18.344697 master-0 kubenswrapper[28766]: I0318 09:04:18.344625 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/495e0cff-fca8-4dad-9247-2fc0e7ce86fc-config\") pod \"machine-approver-5c6485487f-87vpl\" (UID: \"495e0cff-fca8-4dad-9247-2fc0e7ce86fc\") " pod="openshift-cluster-machine-approver/machine-approver-5c6485487f-87vpl" Mar 18 09:04:18.356125 master-0 kubenswrapper[28766]: I0318 09:04:18.356071 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"openshift-service-ca.crt" Mar 18 09:04:18.374554 master-0 kubenswrapper[28766]: I0318 09:04:18.374502 28766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-kube-rbac-proxy-config" Mar 18 09:04:18.384797 master-0 kubenswrapper[28766]: I0318 09:04:18.384734 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/d71aa1b9-6eb5-4331-b959-8930e10817b4-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-6c8df6d4b-8kgdq\" (UID: \"d71aa1b9-6eb5-4331-b959-8930e10817b4\") " 
pod="openshift-monitoring/prometheus-operator-6c8df6d4b-8kgdq" Mar 18 09:04:18.396015 master-0 kubenswrapper[28766]: I0318 09:04:18.395949 28766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-insights"/"operator-dockercfg-zr4v5" Mar 18 09:04:18.414614 master-0 kubenswrapper[28766]: I0318 09:04:18.414559 28766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-s7cph" Mar 18 09:04:18.435081 master-0 kubenswrapper[28766]: I0318 09:04:18.435012 28766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-controller-manager-operator"/"cluster-cloud-controller-manager-dockercfg-68m6c" Mar 18 09:04:18.455157 master-0 kubenswrapper[28766]: I0318 09:04:18.455098 28766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-insights"/"openshift-insights-serving-cert" Mar 18 09:04:18.464521 master-0 kubenswrapper[28766]: I0318 09:04:18.464456 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/31a92270-efed-44fe-871e-90333235e85f-serving-cert\") pod \"insights-operator-68bf6ff9d6-kv7n5\" (UID: \"31a92270-efed-44fe-871e-90333235e85f\") " pod="openshift-insights/insights-operator-68bf6ff9d6-kv7n5" Mar 18 09:04:18.475434 master-0 kubenswrapper[28766]: I0318 09:04:18.475366 28766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Mar 18 09:04:18.484791 master-0 kubenswrapper[28766]: I0318 09:04:18.484723 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/336e741d-ac9a-4b94-9fbb-c9010e37c2d0-proxy-tls\") pod \"machine-config-controller-b4f87c5b9-nm47n\" (UID: \"336e741d-ac9a-4b94-9fbb-c9010e37c2d0\") " pod="openshift-machine-config-operator/machine-config-controller-b4f87c5b9-nm47n" Mar 18 09:04:18.494530 master-0 
kubenswrapper[28766]: I0318 09:04:18.494465 28766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-tls" Mar 18 09:04:18.503756 master-0 kubenswrapper[28766]: I0318 09:04:18.503686 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/d71aa1b9-6eb5-4331-b959-8930e10817b4-prometheus-operator-tls\") pod \"prometheus-operator-6c8df6d4b-8kgdq\" (UID: \"d71aa1b9-6eb5-4331-b959-8930e10817b4\") " pod="openshift-monitoring/prometheus-operator-6c8df6d4b-8kgdq" Mar 18 09:04:18.516405 master-0 kubenswrapper[28766]: I0318 09:04:18.516262 28766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-operator-tls" Mar 18 09:04:18.523746 master-0 kubenswrapper[28766]: I0318 09:04:18.523687 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/ccf74af5-d4fd-4ed3-9784-42397ea798c5-cloud-controller-manager-operator-tls\") pod \"cluster-cloud-controller-manager-operator-7dff898856-9xtls\" (UID: \"ccf74af5-d4fd-4ed3-9784-42397ea798c5\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-9xtls" Mar 18 09:04:18.534696 master-0 kubenswrapper[28766]: I0318 09:04:18.534657 28766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-2w2dp" Mar 18 09:04:18.554127 master-0 kubenswrapper[28766]: I0318 09:04:18.554069 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"kube-root-ca.crt" Mar 18 09:04:18.575597 master-0 kubenswrapper[28766]: I0318 09:04:18.575540 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-client-ca" Mar 18 
09:04:18.579114 master-0 kubenswrapper[28766]: I0318 09:04:18.579068 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/06cbd48a-1f1d-4734-8d57-e1b6824879b6-metrics-client-ca\") pod \"openshift-state-metrics-5dc6c74576-dsq5f\" (UID: \"06cbd48a-1f1d-4734-8d57-e1b6824879b6\") " pod="openshift-monitoring/openshift-state-metrics-5dc6c74576-dsq5f" Mar 18 09:04:18.580129 master-0 kubenswrapper[28766]: I0318 09:04:18.580089 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/4146a62d-e37b-4295-90ca-b23f5e3d1112-metrics-client-ca\") pod \"node-exporter-75szk\" (UID: \"4146a62d-e37b-4295-90ca-b23f5e3d1112\") " pod="openshift-monitoring/node-exporter-75szk" Mar 18 09:04:18.580206 master-0 kubenswrapper[28766]: I0318 09:04:18.580168 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/e5ae1886-f90c-49f4-bf08-055b55dd785a-metrics-client-ca\") pod \"telemeter-client-5d4d5995f-s5dw8\" (UID: \"e5ae1886-f90c-49f4-bf08-055b55dd785a\") " pod="openshift-monitoring/telemeter-client-5d4d5995f-s5dw8" Mar 18 09:04:18.583405 master-0 kubenswrapper[28766]: I0318 09:04:18.583368 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/91a6fa86-8c58-43bc-a2d4-2b20901269f7-metrics-client-ca\") pod \"kube-state-metrics-7bbc969446-dblgh\" (UID: \"91a6fa86-8c58-43bc-a2d4-2b20901269f7\") " pod="openshift-monitoring/kube-state-metrics-7bbc969446-dblgh" Mar 18 09:04:18.583405 master-0 kubenswrapper[28766]: I0318 09:04:18.583388 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/d71aa1b9-6eb5-4331-b959-8930e10817b4-metrics-client-ca\") pod \"prometheus-operator-6c8df6d4b-8kgdq\" (UID: 
\"d71aa1b9-6eb5-4331-b959-8930e10817b4\") " pod="openshift-monitoring/prometheus-operator-6c8df6d4b-8kgdq" Mar 18 09:04:18.594585 master-0 kubenswrapper[28766]: I0318 09:04:18.594527 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Mar 18 09:04:18.603651 master-0 kubenswrapper[28766]: I0318 09:04:18.603603 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/495e0cff-fca8-4dad-9247-2fc0e7ce86fc-auth-proxy-config\") pod \"machine-approver-5c6485487f-87vpl\" (UID: \"495e0cff-fca8-4dad-9247-2fc0e7ce86fc\") " pod="openshift-cluster-machine-approver/machine-approver-5c6485487f-87vpl" Mar 18 09:04:18.622829 master-0 kubenswrapper[28766]: I0318 09:04:18.622764 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"trusted-ca-bundle" Mar 18 09:04:18.626220 master-0 kubenswrapper[28766]: I0318 09:04:18.626162 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/31a92270-efed-44fe-871e-90333235e85f-trusted-ca-bundle\") pod \"insights-operator-68bf6ff9d6-kv7n5\" (UID: \"31a92270-efed-44fe-871e-90333235e85f\") " pod="openshift-insights/insights-operator-68bf6ff9d6-kv7n5" Mar 18 09:04:18.634238 master-0 kubenswrapper[28766]: I0318 09:04:18.634185 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"openshift-service-ca.crt" Mar 18 09:04:18.657378 master-0 kubenswrapper[28766]: I0318 09:04:18.657327 28766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-dockercfg-9s5l6" Mar 18 09:04:18.674455 master-0 kubenswrapper[28766]: I0318 09:04:18.674375 28766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-kube-rbac-proxy-config" Mar 18 
09:04:18.676056 master-0 kubenswrapper[28766]: I0318 09:04:18.675978 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/91a6fa86-8c58-43bc-a2d4-2b20901269f7-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-7bbc969446-dblgh\" (UID: \"91a6fa86-8c58-43bc-a2d4-2b20901269f7\") " pod="openshift-monitoring/kube-state-metrics-7bbc969446-dblgh" Mar 18 09:04:18.695977 master-0 kubenswrapper[28766]: I0318 09:04:18.694450 28766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-tls" Mar 18 09:04:18.703978 master-0 kubenswrapper[28766]: I0318 09:04:18.703922 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/91a6fa86-8c58-43bc-a2d4-2b20901269f7-kube-state-metrics-tls\") pod \"kube-state-metrics-7bbc969446-dblgh\" (UID: \"91a6fa86-8c58-43bc-a2d4-2b20901269f7\") " pod="openshift-monitoring/kube-state-metrics-7bbc969446-dblgh" Mar 18 09:04:18.733889 master-0 kubenswrapper[28766]: I0318 09:04:18.733230 28766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-dockercfg-vc9fv" Mar 18 09:04:18.737886 master-0 kubenswrapper[28766]: I0318 09:04:18.736021 28766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-tls" Mar 18 09:04:18.741881 master-0 kubenswrapper[28766]: I0318 09:04:18.739598 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/06cbd48a-1f1d-4734-8d57-e1b6824879b6-openshift-state-metrics-tls\") pod \"openshift-state-metrics-5dc6c74576-dsq5f\" (UID: \"06cbd48a-1f1d-4734-8d57-e1b6824879b6\") " pod="openshift-monitoring/openshift-state-metrics-5dc6c74576-dsq5f" Mar 18 09:04:18.763873 master-0 
kubenswrapper[28766]: I0318 09:04:18.760259 28766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-kube-rbac-proxy-config" Mar 18 09:04:18.770422 master-0 kubenswrapper[28766]: I0318 09:04:18.770327 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/06cbd48a-1f1d-4734-8d57-e1b6824879b6-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-5dc6c74576-dsq5f\" (UID: \"06cbd48a-1f1d-4734-8d57-e1b6824879b6\") " pod="openshift-monitoring/openshift-state-metrics-5dc6c74576-dsq5f" Mar 18 09:04:18.773819 master-0 kubenswrapper[28766]: I0318 09:04:18.773775 28766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-tls" Mar 18 09:04:18.779409 master-0 kubenswrapper[28766]: I0318 09:04:18.779369 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/4146a62d-e37b-4295-90ca-b23f5e3d1112-node-exporter-tls\") pod \"node-exporter-75szk\" (UID: \"4146a62d-e37b-4295-90ca-b23f5e3d1112\") " pod="openshift-monitoring/node-exporter-75szk" Mar 18 09:04:18.793905 master-0 kubenswrapper[28766]: I0318 09:04:18.793869 28766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-dockercfg-2wdmv" Mar 18 09:04:18.813967 master-0 kubenswrapper[28766]: I0318 09:04:18.813924 28766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-kube-rbac-proxy-config" Mar 18 09:04:18.819618 master-0 kubenswrapper[28766]: I0318 09:04:18.819570 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/4146a62d-e37b-4295-90ca-b23f5e3d1112-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-75szk\" (UID: 
\"4146a62d-e37b-4295-90ca-b23f5e3d1112\") " pod="openshift-monitoring/node-exporter-75szk" Mar 18 09:04:18.834940 master-0 kubenswrapper[28766]: I0318 09:04:18.834878 28766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-tls" Mar 18 09:04:18.839387 master-0 kubenswrapper[28766]: I0318 09:04:18.839334 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/5320a1da-262a-4b1b-93b4-1df9d4c26eec-secret-metrics-server-tls\") pod \"metrics-server-59f88c66c8-z4c2f\" (UID: \"5320a1da-262a-4b1b-93b4-1df9d4c26eec\") " pod="openshift-monitoring/metrics-server-59f88c66c8-z4c2f" Mar 18 09:04:18.855068 master-0 kubenswrapper[28766]: I0318 09:04:18.855017 28766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-client-certs" Mar 18 09:04:18.859507 master-0 kubenswrapper[28766]: I0318 09:04:18.859345 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/5320a1da-262a-4b1b-93b4-1df9d4c26eec-secret-metrics-client-certs\") pod \"metrics-server-59f88c66c8-z4c2f\" (UID: \"5320a1da-262a-4b1b-93b4-1df9d4c26eec\") " pod="openshift-monitoring/metrics-server-59f88c66c8-z4c2f" Mar 18 09:04:18.874731 master-0 kubenswrapper[28766]: I0318 09:04:18.874669 28766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-dockercfg-gpcfv" Mar 18 09:04:18.894489 master-0 kubenswrapper[28766]: I0318 09:04:18.894425 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-server-audit-profiles" Mar 18 09:04:18.899252 master-0 kubenswrapper[28766]: I0318 09:04:18.899208 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-server-audit-profiles\" (UniqueName: 
\"kubernetes.io/configmap/5320a1da-262a-4b1b-93b4-1df9d4c26eec-metrics-server-audit-profiles\") pod \"metrics-server-59f88c66c8-z4c2f\" (UID: \"5320a1da-262a-4b1b-93b4-1df9d4c26eec\") " pod="openshift-monitoring/metrics-server-59f88c66c8-z4c2f" Mar 18 09:04:18.914454 master-0 kubenswrapper[28766]: I0318 09:04:18.914407 28766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-as91djiheslg2" Mar 18 09:04:18.919917 master-0 kubenswrapper[28766]: I0318 09:04:18.919875 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5320a1da-262a-4b1b-93b4-1df9d4c26eec-client-ca-bundle\") pod \"metrics-server-59f88c66c8-z4c2f\" (UID: \"5320a1da-262a-4b1b-93b4-1df9d4c26eec\") " pod="openshift-monitoring/metrics-server-59f88c66c8-z4c2f" Mar 18 09:04:18.933048 master-0 kubenswrapper[28766]: E0318 09:04:18.933000 28766 configmap.go:193] Couldn't get configMap openshift-monitoring/kube-state-metrics-custom-resource-state-configmap: failed to sync configmap cache: timed out waiting for the condition Mar 18 09:04:18.933638 master-0 kubenswrapper[28766]: E0318 09:04:18.933115 28766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/91a6fa86-8c58-43bc-a2d4-2b20901269f7-kube-state-metrics-custom-resource-state-configmap podName:91a6fa86-8c58-43bc-a2d4-2b20901269f7 nodeName:}" failed. No retries permitted until 2026-03-18 09:04:19.933073534 +0000 UTC m=+12.947332200 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-state-metrics-custom-resource-state-configmap" (UniqueName: "kubernetes.io/configmap/91a6fa86-8c58-43bc-a2d4-2b20901269f7-kube-state-metrics-custom-resource-state-configmap") pod "kube-state-metrics-7bbc969446-dblgh" (UID: "91a6fa86-8c58-43bc-a2d4-2b20901269f7") : failed to sync configmap cache: timed out waiting for the condition Mar 18 09:04:18.934254 master-0 kubenswrapper[28766]: I0318 09:04:18.934139 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kubelet-serving-ca-bundle" Mar 18 09:04:18.939912 master-0 kubenswrapper[28766]: I0318 09:04:18.939842 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5320a1da-262a-4b1b-93b4-1df9d4c26eec-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-59f88c66c8-z4c2f\" (UID: \"5320a1da-262a-4b1b-93b4-1df9d4c26eec\") " pod="openshift-monitoring/metrics-server-59f88c66c8-z4c2f" Mar 18 09:04:18.954216 master-0 kubenswrapper[28766]: I0318 09:04:18.954155 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Mar 18 09:04:18.974914 master-0 kubenswrapper[28766]: I0318 09:04:18.974834 28766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-k5mpr" Mar 18 09:04:18.995428 master-0 kubenswrapper[28766]: I0318 09:04:18.995363 28766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Mar 18 09:04:19.002510 master-0 kubenswrapper[28766]: I0318 09:04:19.002424 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d0272f7c-bedc-44cf-9790-88e10e6dda03-cert\") pod \"ingress-canary-mpw9b\" (UID: \"d0272f7c-bedc-44cf-9790-88e10e6dda03\") " pod="openshift-ingress-canary/ingress-canary-mpw9b" Mar 
18 09:04:19.014371 master-0 kubenswrapper[28766]: I0318 09:04:19.014285 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Mar 18 09:04:19.034723 master-0 kubenswrapper[28766]: I0318 09:04:19.034568 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-state-metrics-custom-resource-state-configmap" Mar 18 09:04:19.039361 master-0 kubenswrapper[28766]: E0318 09:04:19.039304 28766 secret.go:189] Couldn't get secret openshift-monitoring/telemeter-client-tls: failed to sync secret cache: timed out waiting for the condition Mar 18 09:04:19.039665 master-0 kubenswrapper[28766]: E0318 09:04:19.039608 28766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e5ae1886-f90c-49f4-bf08-055b55dd785a-telemeter-client-tls podName:e5ae1886-f90c-49f4-bf08-055b55dd785a nodeName:}" failed. No retries permitted until 2026-03-18 09:04:20.039579895 +0000 UTC m=+13.053838571 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "telemeter-client-tls" (UniqueName: "kubernetes.io/secret/e5ae1886-f90c-49f4-bf08-055b55dd785a-telemeter-client-tls") pod "telemeter-client-5d4d5995f-s5dw8" (UID: "e5ae1886-f90c-49f4-bf08-055b55dd785a") : failed to sync secret cache: timed out waiting for the condition Mar 18 09:04:19.039665 master-0 kubenswrapper[28766]: E0318 09:04:19.039645 28766 secret.go:189] Couldn't get secret openshift-monitoring/telemeter-client: failed to sync secret cache: timed out waiting for the condition Mar 18 09:04:19.039751 master-0 kubenswrapper[28766]: E0318 09:04:19.039676 28766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e5ae1886-f90c-49f4-bf08-055b55dd785a-secret-telemeter-client podName:e5ae1886-f90c-49f4-bf08-055b55dd785a nodeName:}" failed. No retries permitted until 2026-03-18 09:04:20.039667198 +0000 UTC m=+13.053925884 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "secret-telemeter-client" (UniqueName: "kubernetes.io/secret/e5ae1886-f90c-49f4-bf08-055b55dd785a-secret-telemeter-client") pod "telemeter-client-5d4d5995f-s5dw8" (UID: "e5ae1886-f90c-49f4-bf08-055b55dd785a") : failed to sync secret cache: timed out waiting for the condition Mar 18 09:04:19.039751 master-0 kubenswrapper[28766]: E0318 09:04:19.039697 28766 secret.go:189] Couldn't get secret openshift-monitoring/federate-client-certs: failed to sync secret cache: timed out waiting for the condition Mar 18 09:04:19.039751 master-0 kubenswrapper[28766]: E0318 09:04:19.039728 28766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e5ae1886-f90c-49f4-bf08-055b55dd785a-federate-client-tls podName:e5ae1886-f90c-49f4-bf08-055b55dd785a nodeName:}" failed. No retries permitted until 2026-03-18 09:04:20.039720949 +0000 UTC m=+13.053979625 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "federate-client-tls" (UniqueName: "kubernetes.io/secret/e5ae1886-f90c-49f4-bf08-055b55dd785a-federate-client-tls") pod "telemeter-client-5d4d5995f-s5dw8" (UID: "e5ae1886-f90c-49f4-bf08-055b55dd785a") : failed to sync secret cache: timed out waiting for the condition Mar 18 09:04:19.039845 master-0 kubenswrapper[28766]: E0318 09:04:19.039759 28766 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: failed to sync secret cache: timed out waiting for the condition Mar 18 09:04:19.039845 master-0 kubenswrapper[28766]: E0318 09:04:19.039789 28766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e0bb044f-5a4e-4981-8084-91348ce1a56a-webhook-certs podName:e0bb044f-5a4e-4981-8084-91348ce1a56a nodeName:}" failed. No retries permitted until 2026-03-18 09:04:20.039780841 +0000 UTC m=+13.054039527 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/e0bb044f-5a4e-4981-8084-91348ce1a56a-webhook-certs") pod "multus-admission-controller-58c9f8fc64-zgrts" (UID: "e0bb044f-5a4e-4981-8084-91348ce1a56a") : failed to sync secret cache: timed out waiting for the condition Mar 18 09:04:19.039845 master-0 kubenswrapper[28766]: E0318 09:04:19.039786 28766 configmap.go:193] Couldn't get configMap openshift-monitoring/telemeter-client-serving-certs-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Mar 18 09:04:19.039845 master-0 kubenswrapper[28766]: E0318 09:04:19.039818 28766 secret.go:189] Couldn't get secret openshift-monitoring/telemeter-client-kube-rbac-proxy-config: failed to sync secret cache: timed out waiting for the condition Mar 18 09:04:19.039845 master-0 kubenswrapper[28766]: E0318 09:04:19.039877 28766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e5ae1886-f90c-49f4-bf08-055b55dd785a-secret-telemeter-client-kube-rbac-proxy-config podName:e5ae1886-f90c-49f4-bf08-055b55dd785a nodeName:}" failed. No retries permitted until 2026-03-18 09:04:20.039843742 +0000 UTC m=+13.054102418 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "secret-telemeter-client-kube-rbac-proxy-config" (UniqueName: "kubernetes.io/secret/e5ae1886-f90c-49f4-bf08-055b55dd785a-secret-telemeter-client-kube-rbac-proxy-config") pod "telemeter-client-5d4d5995f-s5dw8" (UID: "e5ae1886-f90c-49f4-bf08-055b55dd785a") : failed to sync secret cache: timed out waiting for the condition Mar 18 09:04:19.040331 master-0 kubenswrapper[28766]: E0318 09:04:19.039939 28766 configmap.go:193] Couldn't get configMap openshift-monitoring/telemeter-trusted-ca-bundle-8i12ta5c71j38: failed to sync configmap cache: timed out waiting for the condition Mar 18 09:04:19.040331 master-0 kubenswrapper[28766]: E0318 09:04:19.039974 28766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e5ae1886-f90c-49f4-bf08-055b55dd785a-serving-certs-ca-bundle podName:e5ae1886-f90c-49f4-bf08-055b55dd785a nodeName:}" failed. No retries permitted until 2026-03-18 09:04:20.039937165 +0000 UTC m=+13.054195871 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-certs-ca-bundle" (UniqueName: "kubernetes.io/configmap/e5ae1886-f90c-49f4-bf08-055b55dd785a-serving-certs-ca-bundle") pod "telemeter-client-5d4d5995f-s5dw8" (UID: "e5ae1886-f90c-49f4-bf08-055b55dd785a") : failed to sync configmap cache: timed out waiting for the condition Mar 18 09:04:19.040331 master-0 kubenswrapper[28766]: E0318 09:04:19.040023 28766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e5ae1886-f90c-49f4-bf08-055b55dd785a-telemeter-trusted-ca-bundle podName:e5ae1886-f90c-49f4-bf08-055b55dd785a nodeName:}" failed. No retries permitted until 2026-03-18 09:04:20.040009627 +0000 UTC m=+13.054268303 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "telemeter-trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/e5ae1886-f90c-49f4-bf08-055b55dd785a-telemeter-trusted-ca-bundle") pod "telemeter-client-5d4d5995f-s5dw8" (UID: "e5ae1886-f90c-49f4-bf08-055b55dd785a") : failed to sync configmap cache: timed out waiting for the condition Mar 18 09:04:19.054455 master-0 kubenswrapper[28766]: I0318 09:04:19.054424 28766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Mar 18 09:04:19.074743 master-0 kubenswrapper[28766]: I0318 09:04:19.074707 28766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-svhdx" Mar 18 09:04:19.093829 master-0 kubenswrapper[28766]: I0318 09:04:19.093794 28766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client-tls" Mar 18 09:04:19.121605 master-0 kubenswrapper[28766]: I0318 09:04:19.121554 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemeter-trusted-ca-bundle-8i12ta5c71j38" Mar 18 09:04:19.133813 master-0 kubenswrapper[28766]: I0318 09:04:19.133775 28766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"federate-client-certs" Mar 18 09:04:19.154570 master-0 kubenswrapper[28766]: I0318 09:04:19.154529 28766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client-kube-rbac-proxy-config" Mar 18 09:04:19.174039 master-0 kubenswrapper[28766]: I0318 09:04:19.173972 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemeter-client-serving-certs-ca-bundle" Mar 18 09:04:19.194402 master-0 kubenswrapper[28766]: I0318 09:04:19.194353 28766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client-dockercfg-xs8t8" Mar 18 09:04:19.216484 master-0 kubenswrapper[28766]: 
I0318 09:04:19.214915 28766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client" Mar 18 09:04:19.246053 master-0 kubenswrapper[28766]: I0318 09:04:19.246001 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p9qkd\" (UniqueName: \"kubernetes.io/projected/ccf74af5-d4fd-4ed3-9784-42397ea798c5-kube-api-access-p9qkd\") pod \"cluster-cloud-controller-manager-operator-7dff898856-9xtls\" (UID: \"ccf74af5-d4fd-4ed3-9784-42397ea798c5\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-9xtls" Mar 18 09:04:19.268748 master-0 kubenswrapper[28766]: I0318 09:04:19.268693 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jlwg9\" (UniqueName: \"kubernetes.io/projected/f9fa104a-4979-4023-8d7e-a965f11bc7db-kube-api-access-jlwg9\") pod \"multus-additional-cni-plugins-xpzrz\" (UID: \"f9fa104a-4979-4023-8d7e-a965f11bc7db\") " pod="openshift-multus/multus-additional-cni-plugins-xpzrz" Mar 18 09:04:19.297411 master-0 kubenswrapper[28766]: I0318 09:04:19.297297 28766 request.go:700] Waited for 2.950028211s due to client-side throttling, not priority and fairness, request: POST:https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-dns-operator/serviceaccounts/dns-operator/token Mar 18 09:04:19.300437 master-0 kubenswrapper[28766]: I0318 09:04:19.300376 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rpxfc\" (UniqueName: \"kubernetes.io/projected/91a6fa86-8c58-43bc-a2d4-2b20901269f7-kube-api-access-rpxfc\") pod \"kube-state-metrics-7bbc969446-dblgh\" (UID: \"91a6fa86-8c58-43bc-a2d4-2b20901269f7\") " pod="openshift-monitoring/kube-state-metrics-7bbc969446-dblgh" Mar 18 09:04:19.311775 master-0 kubenswrapper[28766]: I0318 09:04:19.311716 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bfzdk\" (UniqueName: 
\"kubernetes.io/projected/e025d334-20e7-491f-8027-194251398747-kube-api-access-bfzdk\") pod \"dns-operator-9c5679d8f-b9pn7\" (UID: \"e025d334-20e7-491f-8027-194251398747\") " pod="openshift-dns-operator/dns-operator-9c5679d8f-b9pn7" Mar 18 09:04:19.326084 master-0 kubenswrapper[28766]: I0318 09:04:19.326030 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zj9rk\" (UniqueName: \"kubernetes.io/projected/97730ec2-e6f1-4f8c-b85c-3c10623d06ce-kube-api-access-zj9rk\") pod \"cluster-baremetal-operator-6f69995874-cf6qn\" (UID: \"97730ec2-e6f1-4f8c-b85c-3c10623d06ce\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-cf6qn" Mar 18 09:04:19.346093 master-0 kubenswrapper[28766]: I0318 09:04:19.346029 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s8prf\" (UniqueName: \"kubernetes.io/projected/fcf89a76-7a94-46d3-853e-68e986563764-kube-api-access-s8prf\") pod \"openshift-apiserver-operator-d65958b8-w4t7x\" (UID: \"fcf89a76-7a94-46d3-853e-68e986563764\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-w4t7x" Mar 18 09:04:19.366322 master-0 kubenswrapper[28766]: I0318 09:04:19.366277 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tsc6v\" (UniqueName: \"kubernetes.io/projected/f650e6f0-fb74-4083-a7a9-fa4df513108f-kube-api-access-tsc6v\") pod \"network-check-source-b4bf74f6-7z5jl\" (UID: \"f650e6f0-fb74-4083-a7a9-fa4df513108f\") " pod="openshift-network-diagnostics/network-check-source-b4bf74f6-7z5jl" Mar 18 09:04:19.397229 master-0 kubenswrapper[28766]: I0318 09:04:19.397177 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dfjmx\" (UniqueName: \"kubernetes.io/projected/772bc250-2e57-4ce0-883c-d44281fcb0be-kube-api-access-dfjmx\") pod \"openshift-controller-manager-operator-8c94f4649-r758j\" (UID: \"772bc250-2e57-4ce0-883c-d44281fcb0be\") " 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-r758j" Mar 18 09:04:19.405832 master-0 kubenswrapper[28766]: I0318 09:04:19.405794 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fbsgx\" (UniqueName: \"kubernetes.io/projected/33a5c021-23c3-4a97-b5f3-77fd6dcba1ab-kube-api-access-fbsgx\") pod \"operator-controller-controller-manager-57777556ff-chjqr\" (UID: \"33a5c021-23c3-4a97-b5f3-77fd6dcba1ab\") " pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-chjqr" Mar 18 09:04:19.426898 master-0 kubenswrapper[28766]: I0318 09:04:19.426772 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qwsfl\" (UniqueName: \"kubernetes.io/projected/04e23989-853e-4b49-ba0f-1961d64ae3a3-kube-api-access-qwsfl\") pod \"route-controller-manager-75749f878-qxnvp\" (UID: \"04e23989-853e-4b49-ba0f-1961d64ae3a3\") " pod="openshift-route-controller-manager/route-controller-manager-75749f878-qxnvp" Mar 18 09:04:19.446424 master-0 kubenswrapper[28766]: I0318 09:04:19.446324 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gjq4w\" (UniqueName: \"kubernetes.io/projected/1794b726-5c0d-4a72-8ddd-418a2cbd8ded-kube-api-access-gjq4w\") pod \"packageserver-5f48d895dc-ttr9f\" (UID: \"1794b726-5c0d-4a72-8ddd-418a2cbd8ded\") " pod="openshift-operator-lifecycle-manager/packageserver-5f48d895dc-ttr9f" Mar 18 09:04:19.465826 master-0 kubenswrapper[28766]: I0318 09:04:19.465766 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8lsw9\" (UniqueName: \"kubernetes.io/projected/bd8ee9ae-4765-46bc-84d7-b5857fc3fb4a-kube-api-access-8lsw9\") pod \"cluster-node-tuning-operator-598fbc5f8f-tj9b9\" (UID: \"bd8ee9ae-4765-46bc-84d7-b5857fc3fb4a\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-tj9b9" Mar 18 09:04:19.488115 master-0 
kubenswrapper[28766]: I0318 09:04:19.488057 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x9w7l\" (UniqueName: \"kubernetes.io/projected/16d633c5-e0aa-4fb6-83e0-a2e976334406-kube-api-access-x9w7l\") pod \"network-node-identity-n5vqx\" (UID: \"16d633c5-e0aa-4fb6-83e0-a2e976334406\") " pod="openshift-network-node-identity/network-node-identity-n5vqx" Mar 18 09:04:19.518340 master-0 kubenswrapper[28766]: I0318 09:04:19.518293 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pjrfz\" (UniqueName: \"kubernetes.io/projected/a7dab805-612b-404c-ab97-8cee927169db-kube-api-access-pjrfz\") pod \"machine-config-daemon-qsj46\" (UID: \"a7dab805-612b-404c-ab97-8cee927169db\") " pod="openshift-machine-config-operator/machine-config-daemon-qsj46" Mar 18 09:04:19.540716 master-0 kubenswrapper[28766]: I0318 09:04:19.540647 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-njbjp\" (UniqueName: \"kubernetes.io/projected/fa8f1797-0219-49fe-82b5-7416cc481c3a-kube-api-access-njbjp\") pod \"service-ca-79bc6b8d76-5jj7d\" (UID: \"fa8f1797-0219-49fe-82b5-7416cc481c3a\") " pod="openshift-service-ca/service-ca-79bc6b8d76-5jj7d" Mar 18 09:04:19.547371 master-0 kubenswrapper[28766]: I0318 09:04:19.547290 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mlp7w\" (UniqueName: \"kubernetes.io/projected/59d50dd5-6793-4f96-a769-31e086ecc7e4-kube-api-access-mlp7w\") pod \"package-server-manager-7b95f86987-q8ff6\" (UID: \"59d50dd5-6793-4f96-a769-31e086ecc7e4\") " pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-q8ff6" Mar 18 09:04:19.573260 master-0 kubenswrapper[28766]: I0318 09:04:19.573109 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n959l\" (UniqueName: \"kubernetes.io/projected/573d3a02-e395-4816-963a-cd614ef53f75-kube-api-access-n959l\") pod 
\"openshift-config-operator-95bf4f4d-7kfrh\" (UID: \"573d3a02-e395-4816-963a-cd614ef53f75\") " pod="openshift-config-operator/openshift-config-operator-95bf4f4d-7kfrh" Mar 18 09:04:19.594870 master-0 kubenswrapper[28766]: I0318 09:04:19.594807 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b689k\" (UniqueName: \"kubernetes.io/projected/e64ea71a-1e89-409a-9607-4d3cea093643-kube-api-access-b689k\") pod \"cloud-credential-operator-744f9dbf77-v8ft8\" (UID: \"e64ea71a-1e89-409a-9607-4d3cea093643\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-744f9dbf77-v8ft8" Mar 18 09:04:19.610238 master-0 kubenswrapper[28766]: I0318 09:04:19.610193 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zkfql\" (UniqueName: \"kubernetes.io/projected/ad4cf9b2-4e66-4921-a30c-7b659bff06ab-kube-api-access-zkfql\") pod \"router-default-7dcf5569b5-8sbgd\" (UID: \"ad4cf9b2-4e66-4921-a30c-7b659bff06ab\") " pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" Mar 18 09:04:19.632107 master-0 kubenswrapper[28766]: I0318 09:04:19.632056 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8w58l\" (UniqueName: \"kubernetes.io/projected/939efa41-8f40-4f91-bee4-0425aead9760-kube-api-access-8w58l\") pod \"etcd-operator-8544cbcf9c-f4jvq\" (UID: \"939efa41-8f40-4f91-bee4-0425aead9760\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-f4jvq" Mar 18 09:04:19.650872 master-0 kubenswrapper[28766]: I0318 09:04:19.650812 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vfjgn\" (UniqueName: \"kubernetes.io/projected/e2ade7e6-cecd-4e98-8f85-ea8219303d75-kube-api-access-vfjgn\") pod \"cluster-olm-operator-67dcd4998-zqxv2\" (UID: \"e2ade7e6-cecd-4e98-8f85-ea8219303d75\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-zqxv2" Mar 18 09:04:19.671697 master-0 kubenswrapper[28766]: I0318 
09:04:19.671652 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5ngk7\" (UniqueName: \"kubernetes.io/projected/07a4fd92-0fd1-4688-b2db-de615d75971e-kube-api-access-5ngk7\") pod \"network-operator-7bd846bfc4-5r5r4\" (UID: \"07a4fd92-0fd1-4688-b2db-de615d75971e\") " pod="openshift-network-operator/network-operator-7bd846bfc4-5r5r4"
Mar 18 09:04:19.691020 master-0 kubenswrapper[28766]: I0318 09:04:19.690955 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xchll\" (UniqueName: \"kubernetes.io/projected/29ba6765-61c9-4f78-8f44-570418000c5c-kube-api-access-xchll\") pod \"csi-snapshot-controller-64854d9cff-khm5n\" (UID: \"29ba6765-61c9-4f78-8f44-570418000c5c\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-64854d9cff-khm5n"
Mar 18 09:04:19.708176 master-0 kubenswrapper[28766]: I0318 09:04:19.708125 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x5q4t\" (UniqueName: \"kubernetes.io/projected/d71aa1b9-6eb5-4331-b959-8930e10817b4-kube-api-access-x5q4t\") pod \"prometheus-operator-6c8df6d4b-8kgdq\" (UID: \"d71aa1b9-6eb5-4331-b959-8930e10817b4\") " pod="openshift-monitoring/prometheus-operator-6c8df6d4b-8kgdq"
Mar 18 09:04:19.732534 master-0 kubenswrapper[28766]: I0318 09:04:19.732460 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rgs9m\" (UniqueName: \"kubernetes.io/projected/3e96b35f-c57a-4e01-82f7-894ea16ac5b8-kube-api-access-rgs9m\") pod \"machine-config-server-2jsz9\" (UID: \"3e96b35f-c57a-4e01-82f7-894ea16ac5b8\") " pod="openshift-machine-config-operator/machine-config-server-2jsz9"
Mar 18 09:04:19.751481 master-0 kubenswrapper[28766]: I0318 09:04:19.751428 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dm6nf\" (UniqueName: \"kubernetes.io/projected/52e32e2d-33ab-4351-ae8a-80acd6077d70-kube-api-access-dm6nf\") pod \"redhat-operators-pk9z9\" (UID: \"52e32e2d-33ab-4351-ae8a-80acd6077d70\") " pod="openshift-marketplace/redhat-operators-pk9z9"
Mar 18 09:04:19.771262 master-0 kubenswrapper[28766]: I0318 09:04:19.771201 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dxvk7\" (UniqueName: \"kubernetes.io/projected/b0280499-8277-46f0-bd8c-058a47a99e19-kube-api-access-dxvk7\") pod \"service-ca-operator-b865698dc-g2lc8\" (UID: \"b0280499-8277-46f0-bd8c-058a47a99e19\") " pod="openshift-service-ca-operator/service-ca-operator-b865698dc-g2lc8"
Mar 18 09:04:19.796885 master-0 kubenswrapper[28766]: I0318 09:04:19.796811 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6bzxp\" (UniqueName: \"kubernetes.io/projected/f826efe0-60a1-4465-b8d0-d4069ed507a1-kube-api-access-6bzxp\") pod \"tuned-zzqc6\" (UID: \"f826efe0-60a1-4465-b8d0-d4069ed507a1\") " pod="openshift-cluster-node-tuning-operator/tuned-zzqc6"
Mar 18 09:04:19.813932 master-0 kubenswrapper[28766]: I0318 09:04:19.813822 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8a6ab2be-d018-4fd5-bfbb-6b88aec28663-kube-api-access\") pod \"openshift-kube-scheduler-operator-dddff6458-9p4bb\" (UID: \"8a6ab2be-d018-4fd5-bfbb-6b88aec28663\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-9p4bb"
Mar 18 09:04:19.832375 master-0 kubenswrapper[28766]: I0318 09:04:19.832263 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-47p9x\" (UniqueName: \"kubernetes.io/projected/7962fb40-1170-4c00-b1bf-92966aeae807-kube-api-access-47p9x\") pod \"cluster-image-registry-operator-5549dc66cb-vxsth\" (UID: \"7962fb40-1170-4c00-b1bf-92966aeae807\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-vxsth"
Mar 18 09:04:19.849565 master-0 kubenswrapper[28766]: I0318 09:04:19.849495 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ftdvp\" (UniqueName: \"kubernetes.io/projected/866c259c-7661-4a80-873b-6fd625218665-kube-api-access-ftdvp\") pod \"iptables-alerter-9mkgd\" (UID: \"866c259c-7661-4a80-873b-6fd625218665\") " pod="openshift-network-operator/iptables-alerter-9mkgd"
Mar 18 09:04:19.857027 master-0 kubenswrapper[28766]: I0318 09:04:19.856960 28766 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 09:04:19.857027 master-0 kubenswrapper[28766]: [-]has-synced failed: reason withheld
Mar 18 09:04:19.857027 master-0 kubenswrapper[28766]: [+]process-running ok
Mar 18 09:04:19.857027 master-0 kubenswrapper[28766]: healthz check failed
Mar 18 09:04:19.857292 master-0 kubenswrapper[28766]: I0318 09:04:19.857061 28766 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 09:04:19.858052 master-0 kubenswrapper[28766]: I0318 09:04:19.858011 28766 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Liveness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]controller ok
Mar 18 09:04:19.858052 master-0 kubenswrapper[28766]: [-]backend-http failed: reason withheld
Mar 18 09:04:19.858052 master-0 kubenswrapper[28766]: healthz check failed
Mar 18 09:04:19.858052 master-0 kubenswrapper[28766]: I0318 09:04:19.858050 28766 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 09:04:19.878245 master-0 kubenswrapper[28766]: I0318 09:04:19.878186 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jnspk\" (UniqueName: \"kubernetes.io/projected/40f3b7a4-107c-4f1d-a3ab-b5d2309c373b-kube-api-access-jnspk\") pod \"machine-config-operator-84d549f6d5-4hj54\" (UID: \"40f3b7a4-107c-4f1d-a3ab-b5d2309c373b\") " pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-4hj54"
Mar 18 09:04:19.893567 master-0 kubenswrapper[28766]: I0318 09:04:19.893514 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-glt6c\" (UniqueName: \"kubernetes.io/projected/edc7f629-4288-443b-aa8e-78bc6a09c848-kube-api-access-glt6c\") pod \"ovnkube-control-plane-57f769d897-bwqt7\" (UID: \"edc7f629-4288-443b-aa8e-78bc6a09c848\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-bwqt7"
Mar 18 09:04:19.910837 master-0 kubenswrapper[28766]: I0318 09:04:19.910780 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8d89af2f-47f5-4ee5-a790-e162c2dee3ce-kube-api-access\") pod \"cluster-version-operator-7d58488df-8btcx\" (UID: \"8d89af2f-47f5-4ee5-a790-e162c2dee3ce\") " pod="openshift-cluster-version/cluster-version-operator-7d58488df-8btcx"
Mar 18 09:04:19.934767 master-0 kubenswrapper[28766]: I0318 09:04:19.934546 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hpl2c\" (UniqueName: \"kubernetes.io/projected/fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4-kube-api-access-hpl2c\") pod \"multus-bpf5c\" (UID: \"fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4\") " pod="openshift-multus/multus-bpf5c"
Mar 18 09:04:19.956447 master-0 kubenswrapper[28766]: I0318 09:04:19.953466 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5qrqx\" (UniqueName: \"kubernetes.io/projected/495e0cff-fca8-4dad-9247-2fc0e7ce86fc-kube-api-access-5qrqx\") pod \"machine-approver-5c6485487f-87vpl\" (UID: \"495e0cff-fca8-4dad-9247-2fc0e7ce86fc\") " pod="openshift-cluster-machine-approver/machine-approver-5c6485487f-87vpl"
Mar 18 09:04:19.968996 master-0 kubenswrapper[28766]: I0318 09:04:19.968935 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vtz82\" (UniqueName: \"kubernetes.io/projected/18921497-d8ed-42d8-bf3c-a027566ebe85-kube-api-access-vtz82\") pod \"cluster-samples-operator-85f7577d78-swcvh\" (UID: \"18921497-d8ed-42d8-bf3c-a027566ebe85\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-85f7577d78-swcvh"
Mar 18 09:04:19.992096 master-0 kubenswrapper[28766]: I0318 09:04:19.992023 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2msp8\" (UniqueName: \"kubernetes.io/projected/34f2ec5e-cb68-415b-a9f2-5b7f10fa9bbe-kube-api-access-2msp8\") pod \"marketplace-operator-89ccd998f-bcwsv\" (UID: \"34f2ec5e-cb68-415b-a9f2-5b7f10fa9bbe\") " pod="openshift-marketplace/marketplace-operator-89ccd998f-bcwsv"
Mar 18 09:04:19.999660 master-0 kubenswrapper[28766]: I0318 09:04:19.999604 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/91a6fa86-8c58-43bc-a2d4-2b20901269f7-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-7bbc969446-dblgh\" (UID: \"91a6fa86-8c58-43bc-a2d4-2b20901269f7\") " pod="openshift-monitoring/kube-state-metrics-7bbc969446-dblgh"
Mar 18 09:04:20.000410 master-0 kubenswrapper[28766]: I0318 09:04:20.000347 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/91a6fa86-8c58-43bc-a2d4-2b20901269f7-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-7bbc969446-dblgh\" (UID: \"91a6fa86-8c58-43bc-a2d4-2b20901269f7\") " pod="openshift-monitoring/kube-state-metrics-7bbc969446-dblgh"
Mar 18 09:04:20.015724 master-0 kubenswrapper[28766]: I0318 09:04:20.015655 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/260c8aa5-a288-4ee8-b671-f97e90a2f39c-kube-api-access\") pod \"kube-controller-manager-operator-ff989d6cc-fxn82\" (UID: \"260c8aa5-a288-4ee8-b671-f97e90a2f39c\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-fxn82"
Mar 18 09:04:20.036763 master-0 kubenswrapper[28766]: I0318 09:04:20.033487 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-djq7n\" (UniqueName: \"kubernetes.io/projected/f65344cd-8571-4a78-927f-eec46ec1af51-kube-api-access-djq7n\") pod \"redhat-marketplace-jg58c\" (UID: \"f65344cd-8571-4a78-927f-eec46ec1af51\") " pod="openshift-marketplace/redhat-marketplace-jg58c"
Mar 18 09:04:20.057656 master-0 kubenswrapper[28766]: I0318 09:04:20.057474 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tp77s\" (UniqueName: \"kubernetes.io/projected/b35ab145-16a7-4ef1-86e8-0afb6ff469fd-kube-api-access-tp77s\") pod \"dns-default-ck7b5\" (UID: \"b35ab145-16a7-4ef1-86e8-0afb6ff469fd\") " pod="openshift-dns/dns-default-ck7b5"
Mar 18 09:04:20.071544 master-0 kubenswrapper[28766]: I0318 09:04:20.071487 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4lv7n\" (UniqueName: \"kubernetes.io/projected/92542f7c-182b-45a8-bbf3-00e99ba7acee-kube-api-access-4lv7n\") pod \"community-operators-78szh\" (UID: \"92542f7c-182b-45a8-bbf3-00e99ba7acee\") " pod="openshift-marketplace/community-operators-78szh"
Mar 18 09:04:20.086650 master-0 kubenswrapper[28766]: I0318 09:04:20.086533 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bpj79\" (UniqueName: \"kubernetes.io/projected/b5f9f50b-e7b4-4b81-864b-349303f21447-kube-api-access-bpj79\") pod \"apiserver-7bb69b5c5c-djsr9\" (UID: \"b5f9f50b-e7b4-4b81-864b-349303f21447\") " pod="openshift-apiserver/apiserver-7bb69b5c5c-djsr9"
Mar 18 09:04:20.102312 master-0 kubenswrapper[28766]: I0318 09:04:20.102233 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemeter-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e5ae1886-f90c-49f4-bf08-055b55dd785a-telemeter-trusted-ca-bundle\") pod \"telemeter-client-5d4d5995f-s5dw8\" (UID: \"e5ae1886-f90c-49f4-bf08-055b55dd785a\") " pod="openshift-monitoring/telemeter-client-5d4d5995f-s5dw8"
Mar 18 09:04:20.102555 master-0 kubenswrapper[28766]: I0318 09:04:20.102388 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-telemeter-client\" (UniqueName: \"kubernetes.io/secret/e5ae1886-f90c-49f4-bf08-055b55dd785a-secret-telemeter-client\") pod \"telemeter-client-5d4d5995f-s5dw8\" (UID: \"e5ae1886-f90c-49f4-bf08-055b55dd785a\") " pod="openshift-monitoring/telemeter-client-5d4d5995f-s5dw8"
Mar 18 09:04:20.102555 master-0 kubenswrapper[28766]: I0318 09:04:20.102439 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemeter-client-tls\" (UniqueName: \"kubernetes.io/secret/e5ae1886-f90c-49f4-bf08-055b55dd785a-telemeter-client-tls\") pod \"telemeter-client-5d4d5995f-s5dw8\" (UID: \"e5ae1886-f90c-49f4-bf08-055b55dd785a\") " pod="openshift-monitoring/telemeter-client-5d4d5995f-s5dw8"
Mar 18 09:04:20.102555 master-0 kubenswrapper[28766]: I0318 09:04:20.102482 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e5ae1886-f90c-49f4-bf08-055b55dd785a-serving-certs-ca-bundle\") pod \"telemeter-client-5d4d5995f-s5dw8\" (UID: \"e5ae1886-f90c-49f4-bf08-055b55dd785a\") " pod="openshift-monitoring/telemeter-client-5d4d5995f-s5dw8"
Mar 18 09:04:20.102555 master-0 kubenswrapper[28766]: I0318 09:04:20.102516 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"federate-client-tls\" (UniqueName: \"kubernetes.io/secret/e5ae1886-f90c-49f4-bf08-055b55dd785a-federate-client-tls\") pod \"telemeter-client-5d4d5995f-s5dw8\" (UID: \"e5ae1886-f90c-49f4-bf08-055b55dd785a\") " pod="openshift-monitoring/telemeter-client-5d4d5995f-s5dw8"
Mar 18 09:04:20.102823 master-0 kubenswrapper[28766]: I0318 09:04:20.102582 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/e0bb044f-5a4e-4981-8084-91348ce1a56a-webhook-certs\") pod \"multus-admission-controller-58c9f8fc64-zgrts\" (UID: \"e0bb044f-5a4e-4981-8084-91348ce1a56a\") " pod="openshift-multus/multus-admission-controller-58c9f8fc64-zgrts"
Mar 18 09:04:20.102823 master-0 kubenswrapper[28766]: I0318 09:04:20.102747 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-telemeter-client-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/e5ae1886-f90c-49f4-bf08-055b55dd785a-secret-telemeter-client-kube-rbac-proxy-config\") pod \"telemeter-client-5d4d5995f-s5dw8\" (UID: \"e5ae1886-f90c-49f4-bf08-055b55dd785a\") " pod="openshift-monitoring/telemeter-client-5d4d5995f-s5dw8"
Mar 18 09:04:20.103116 master-0 kubenswrapper[28766]: I0318 09:04:20.103080 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-telemeter-client-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/e5ae1886-f90c-49f4-bf08-055b55dd785a-secret-telemeter-client-kube-rbac-proxy-config\") pod \"telemeter-client-5d4d5995f-s5dw8\" (UID: \"e5ae1886-f90c-49f4-bf08-055b55dd785a\") " pod="openshift-monitoring/telemeter-client-5d4d5995f-s5dw8"
Mar 18 09:04:20.103401 master-0 kubenswrapper[28766]: I0318 09:04:20.103369 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemeter-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e5ae1886-f90c-49f4-bf08-055b55dd785a-telemeter-trusted-ca-bundle\") pod \"telemeter-client-5d4d5995f-s5dw8\" (UID: \"e5ae1886-f90c-49f4-bf08-055b55dd785a\") " pod="openshift-monitoring/telemeter-client-5d4d5995f-s5dw8"
Mar 18 09:04:20.103612 master-0 kubenswrapper[28766]: I0318 09:04:20.103582 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-telemeter-client\" (UniqueName: \"kubernetes.io/secret/e5ae1886-f90c-49f4-bf08-055b55dd785a-secret-telemeter-client\") pod \"telemeter-client-5d4d5995f-s5dw8\" (UID: \"e5ae1886-f90c-49f4-bf08-055b55dd785a\") " pod="openshift-monitoring/telemeter-client-5d4d5995f-s5dw8"
Mar 18 09:04:20.103817 master-0 kubenswrapper[28766]: I0318 09:04:20.103796 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemeter-client-tls\" (UniqueName: \"kubernetes.io/secret/e5ae1886-f90c-49f4-bf08-055b55dd785a-telemeter-client-tls\") pod \"telemeter-client-5d4d5995f-s5dw8\" (UID: \"e5ae1886-f90c-49f4-bf08-055b55dd785a\") " pod="openshift-monitoring/telemeter-client-5d4d5995f-s5dw8"
Mar 18 09:04:20.104053 master-0 kubenswrapper[28766]: I0318 09:04:20.104032 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/e0bb044f-5a4e-4981-8084-91348ce1a56a-webhook-certs\") pod \"multus-admission-controller-58c9f8fc64-zgrts\" (UID: \"e0bb044f-5a4e-4981-8084-91348ce1a56a\") " pod="openshift-multus/multus-admission-controller-58c9f8fc64-zgrts"
Mar 18 09:04:20.104121 master-0 kubenswrapper[28766]: I0318 09:04:20.104064 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"federate-client-tls\" (UniqueName: \"kubernetes.io/secret/e5ae1886-f90c-49f4-bf08-055b55dd785a-federate-client-tls\") pod \"telemeter-client-5d4d5995f-s5dw8\" (UID: \"e5ae1886-f90c-49f4-bf08-055b55dd785a\") " pod="openshift-monitoring/telemeter-client-5d4d5995f-s5dw8"
Mar 18 09:04:20.104231 master-0 kubenswrapper[28766]: I0318 09:04:20.104210 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e5ae1886-f90c-49f4-bf08-055b55dd785a-serving-certs-ca-bundle\") pod \"telemeter-client-5d4d5995f-s5dw8\" (UID: \"e5ae1886-f90c-49f4-bf08-055b55dd785a\") " pod="openshift-monitoring/telemeter-client-5d4d5995f-s5dw8"
Mar 18 09:04:20.109003 master-0 kubenswrapper[28766]: I0318 09:04:20.108961 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mvlvd\" (UniqueName: \"kubernetes.io/projected/fc5a9875-d97e-4371-a15d-a1f43b85abce-kube-api-access-mvlvd\") pod \"cluster-storage-operator-7d87854d6-srhr6\" (UID: \"fc5a9875-d97e-4371-a15d-a1f43b85abce\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-7d87854d6-srhr6"
Mar 18 09:04:20.128750 master-0 kubenswrapper[28766]: I0318 09:04:20.128674 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dwrdc\" (UniqueName: \"kubernetes.io/projected/e7b72267-fc08-41ed-a92b-9fca7372aba6-kube-api-access-dwrdc\") pod \"cluster-monitoring-operator-58845fbb57-nc7hf\" (UID: \"e7b72267-fc08-41ed-a92b-9fca7372aba6\") " pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-nc7hf"
Mar 18 09:04:20.152347 master-0 kubenswrapper[28766]: I0318 09:04:20.152266 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gqlhh\" (UniqueName: \"kubernetes.io/projected/68465463-5f2a-4e74-9c34-2706a185f7ea-kube-api-access-gqlhh\") pod \"node-resolver-zwl77\" (UID: \"68465463-5f2a-4e74-9c34-2706a185f7ea\") " pod="openshift-dns/node-resolver-zwl77"
Mar 18 09:04:20.172221 master-0 kubenswrapper[28766]: I0318 09:04:20.171699 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dqldd\" (UniqueName: \"kubernetes.io/projected/2700f537-8f31-4380-a527-3e697a8122cc-kube-api-access-dqldd\") pod \"apiserver-556c8fbcff-5shs8\" (UID: \"2700f537-8f31-4380-a527-3e697a8122cc\") " pod="openshift-oauth-apiserver/apiserver-556c8fbcff-5shs8"
Mar 18 09:04:20.192840 master-0 kubenswrapper[28766]: I0318 09:04:20.192790 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5982111d-f4c6-4335-9b40-3142758fc2bc-kube-api-access\") pod \"kube-apiserver-operator-8b68b9d9b-jshg7\" (UID: \"5982111d-f4c6-4335-9b40-3142758fc2bc\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-jshg7"
Mar 18 09:04:20.215571 master-0 kubenswrapper[28766]: I0318 09:04:20.214742 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cj9fr\" (UniqueName: \"kubernetes.io/projected/2207df9e-f21e-4c30-98d5-248ae99c245e-kube-api-access-cj9fr\") pod \"ovnkube-node-cxws9\" (UID: \"2207df9e-f21e-4c30-98d5-248ae99c245e\") " pod="openshift-ovn-kubernetes/ovnkube-node-cxws9"
Mar 18 09:04:20.229712 master-0 kubenswrapper[28766]: I0318 09:04:20.229659 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hfzdp\" (UniqueName: \"kubernetes.io/projected/a268d595-18c2-43a2-8ed5-eb64c76c490f-kube-api-access-hfzdp\") pod \"certified-operators-vng9w\" (UID: \"a268d595-18c2-43a2-8ed5-eb64c76c490f\") " pod="openshift-marketplace/certified-operators-vng9w"
Mar 18 09:04:20.248348 master-0 kubenswrapper[28766]: I0318 09:04:20.248301 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x6zq8\" (UniqueName: \"kubernetes.io/projected/d858bfd6-e69b-4c93-a6d7-95cc0fc3ca29-kube-api-access-x6zq8\") pod \"network-metrics-daemon-6x85n\" (UID: \"d858bfd6-e69b-4c93-a6d7-95cc0fc3ca29\") " pod="openshift-multus/network-metrics-daemon-6x85n"
Mar 18 09:04:20.267447 master-0 kubenswrapper[28766]: I0318 09:04:20.267394 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wxxcn\" (UniqueName: \"kubernetes.io/projected/6fb1f871-9c24-48a1-a15a-a636b5bb687d-kube-api-access-wxxcn\") pod \"csi-snapshot-controller-operator-5f5d689c6b-j8kgj\" (UID: \"6fb1f871-9c24-48a1-a15a-a636b5bb687d\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-5f5d689c6b-j8kgj"
Mar 18 09:04:20.291741 master-0 kubenswrapper[28766]: I0318 09:04:20.291687 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4vfrs\" (UniqueName: \"kubernetes.io/projected/ffc5379c-651f-490c-90f4-1285b9093596-kube-api-access-4vfrs\") pod \"cluster-autoscaler-operator-866dc4744-lxj7x\" (UID: \"ffc5379c-651f-490c-90f4-1285b9093596\") " pod="openshift-machine-api/cluster-autoscaler-operator-866dc4744-lxj7x"
Mar 18 09:04:20.307754 master-0 kubenswrapper[28766]: I0318 09:04:20.307690 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pz26d\" (UniqueName: \"kubernetes.io/projected/b065df33-7911-456e-b3a2-1f8c8d53e053-kube-api-access-pz26d\") pod \"catalog-operator-68f85b4d6c-swdsh\" (UID: \"b065df33-7911-456e-b3a2-1f8c8d53e053\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-swdsh"
Mar 18 09:04:20.313116 master-0 kubenswrapper[28766]: I0318 09:04:20.313079 28766 request.go:700] Waited for 3.951286985s due to client-side throttling, not priority and fairness, request: POST:https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-network-diagnostics/serviceaccounts/default/token
Mar 18 09:04:20.326840 master-0 kubenswrapper[28766]: I0318 09:04:20.326767 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l7lrl\" (UniqueName: \"kubernetes.io/projected/fc289a83-9a2e-404b-b148-605639362703-kube-api-access-l7lrl\") pod \"network-check-target-8b7l7\" (UID: \"fc289a83-9a2e-404b-b148-605639362703\") " pod="openshift-network-diagnostics/network-check-target-8b7l7"
Mar 18 09:04:20.346286 master-0 kubenswrapper[28766]: I0318 09:04:20.346198 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bw5tw\" (UniqueName: \"kubernetes.io/projected/b9768e50-c883-47b0-b319-851fa53ac19a-kube-api-access-bw5tw\") pod \"machine-api-operator-6fbb6cf6f9-z6nw9\" (UID: \"b9768e50-c883-47b0-b319-851fa53ac19a\") " pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-z6nw9"
Mar 18 09:04:20.370081 master-0 kubenswrapper[28766]: I0318 09:04:20.370030 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tk9jq\" (UniqueName: \"kubernetes.io/projected/94e9e4fc-bfaa-4ba5-ad6d-fe76b91932e9-kube-api-access-tk9jq\") pod \"ingress-operator-66b84d69b-7h94d\" (UID: \"94e9e4fc-bfaa-4ba5-ad6d-fe76b91932e9\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-7h94d"
Mar 18 09:04:20.386013 master-0 kubenswrapper[28766]: I0318 09:04:20.385962 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7962fb40-1170-4c00-b1bf-92966aeae807-bound-sa-token\") pod \"cluster-image-registry-operator-5549dc66cb-vxsth\" (UID: \"7962fb40-1170-4c00-b1bf-92966aeae807\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-vxsth"
Mar 18 09:04:20.411599 master-0 kubenswrapper[28766]: I0318 09:04:20.411527 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-czm78\" (UniqueName: \"kubernetes.io/projected/d6fe8ee6-737e-438a-8d9d-1ec712f6bacf-kube-api-access-czm78\") pod \"control-plane-machine-set-operator-6f97756bc8-z9n9c\" (UID: \"d6fe8ee6-737e-438a-8d9d-1ec712f6bacf\") " pod="openshift-machine-api/control-plane-machine-set-operator-6f97756bc8-z9n9c"
Mar 18 09:04:20.432414 master-0 kubenswrapper[28766]: I0318 09:04:20.432348 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hbsfs\" (UniqueName: \"kubernetes.io/projected/336e741d-ac9a-4b94-9fbb-c9010e37c2d0-kube-api-access-hbsfs\") pod \"machine-config-controller-b4f87c5b9-nm47n\" (UID: \"336e741d-ac9a-4b94-9fbb-c9010e37c2d0\") " pod="openshift-machine-config-operator/machine-config-controller-b4f87c5b9-nm47n"
Mar 18 09:04:20.450304 master-0 kubenswrapper[28766]: I0318 09:04:20.450230 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-svdhs\" (UniqueName: \"kubernetes.io/projected/ec11012b-536a-422f-afc4-d2d0fd4b67fb-kube-api-access-svdhs\") pod \"kube-storage-version-migrator-operator-6bb5bfb6fd-s7szg\" (UID: \"ec11012b-536a-422f-afc4-d2d0fd4b67fb\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-s7szg"
Mar 18 09:04:20.468183 master-0 kubenswrapper[28766]: I0318 09:04:20.468096 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c52pj\" (UniqueName: \"kubernetes.io/projected/43fbd379-dd1e-4287-bd76-fd3ec51cde43-kube-api-access-c52pj\") pod \"catalogd-controller-manager-6864dc98f7-phjp8\" (UID: \"43fbd379-dd1e-4287-bd76-fd3ec51cde43\") " pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-phjp8"
Mar 18 09:04:20.492892 master-0 kubenswrapper[28766]: I0318 09:04:20.490157 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d2bwv\" (UniqueName: \"kubernetes.io/projected/8ebaeb8d-8fbd-4638-9516-fc4e90ba2fa8-kube-api-access-d2bwv\") pod \"migrator-8487694857-ld5l8\" (UID: \"8ebaeb8d-8fbd-4638-9516-fc4e90ba2fa8\") " pod="openshift-kube-storage-version-migrator/migrator-8487694857-ld5l8"
Mar 18 09:04:20.512891 master-0 kubenswrapper[28766]: I0318 09:04:20.510877 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2m5wf\" (UniqueName: \"kubernetes.io/projected/4cc14de2-59e8-49e1-9eeb-f87c5e9d8a75-kube-api-access-2m5wf\") pod \"controller-manager-6448dc88d8-cnd9q\" (UID: \"4cc14de2-59e8-49e1-9eeb-f87c5e9d8a75\") " pod="openshift-controller-manager/controller-manager-6448dc88d8-cnd9q"
Mar 18 09:04:20.575706 master-0 kubenswrapper[28766]: I0318 09:04:20.575422 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lw27k\" (UniqueName: \"kubernetes.io/projected/c110b293-2c6b-496b-b015-23aada98cb4b-kube-api-access-lw27k\") pod \"authentication-operator-5885bfd7f4-5g8tz\" (UID: \"c110b293-2c6b-496b-b015-23aada98cb4b\") " pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-5g8tz"
Mar 18 09:04:20.590065 master-0 kubenswrapper[28766]: I0318 09:04:20.588248 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8zhfh\" (UniqueName: \"kubernetes.io/projected/31a92270-efed-44fe-871e-90333235e85f-kube-api-access-8zhfh\") pod \"insights-operator-68bf6ff9d6-kv7n5\" (UID: \"31a92270-efed-44fe-871e-90333235e85f\") " pod="openshift-insights/insights-operator-68bf6ff9d6-kv7n5"
Mar 18 09:04:20.590913 master-0 kubenswrapper[28766]: I0318 09:04:20.590865 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4hn9w\" (UniqueName: \"kubernetes.io/projected/3d9fe248-ba87-47e3-911a-1b2b112b5683-kube-api-access-4hn9w\") pod \"olm-operator-5c9796789-sl5kr\" (UID: \"3d9fe248-ba87-47e3-911a-1b2b112b5683\") " pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-sl5kr"
Mar 18 09:04:20.591410 master-0 kubenswrapper[28766]: I0318 09:04:20.591373 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/94e9e4fc-bfaa-4ba5-ad6d-fe76b91932e9-bound-sa-token\") pod \"ingress-operator-66b84d69b-7h94d\" (UID: \"94e9e4fc-bfaa-4ba5-ad6d-fe76b91932e9\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-7h94d"
Mar 18 09:04:20.614147 master-0 kubenswrapper[28766]: I0318 09:04:20.613996 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ltlf6\" (UniqueName: \"kubernetes.io/projected/06cbd48a-1f1d-4734-8d57-e1b6824879b6-kube-api-access-ltlf6\") pod \"openshift-state-metrics-5dc6c74576-dsq5f\" (UID: \"06cbd48a-1f1d-4734-8d57-e1b6824879b6\") " pod="openshift-monitoring/openshift-state-metrics-5dc6c74576-dsq5f"
Mar 18 09:04:20.638892 master-0 kubenswrapper[28766]: I0318 09:04:20.638795 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9q8l2\" (UniqueName: \"kubernetes.io/projected/5320a1da-262a-4b1b-93b4-1df9d4c26eec-kube-api-access-9q8l2\") pod \"metrics-server-59f88c66c8-z4c2f\" (UID: \"5320a1da-262a-4b1b-93b4-1df9d4c26eec\") " pod="openshift-monitoring/metrics-server-59f88c66c8-z4c2f"
Mar 18 09:04:20.645749 master-0 kubenswrapper[28766]: E0318 09:04:20.645712 28766 projected.go:288] Couldn't get configMap openshift-kube-apiserver/kube-root-ca.crt: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered
Mar 18 09:04:20.645749 master-0 kubenswrapper[28766]: E0318 09:04:20.645745 28766 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver/installer-3-master-0: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered
Mar 18 09:04:20.645890 master-0 kubenswrapper[28766]: E0318 09:04:20.645808 28766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e0d127be-2d13-449b-915b-2d49052baf02-kube-api-access podName:e0d127be-2d13-449b-915b-2d49052baf02 nodeName:}" failed. No retries permitted until 2026-03-18 09:04:21.14578798 +0000 UTC m=+14.160046656 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/e0d127be-2d13-449b-915b-2d49052baf02-kube-api-access") pod "installer-3-master-0" (UID: "e0d127be-2d13-449b-915b-2d49052baf02") : object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered
Mar 18 09:04:20.666319 master-0 kubenswrapper[28766]: I0318 09:04:20.666269 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4fql4\" (UniqueName: \"kubernetes.io/projected/e5ae1886-f90c-49f4-bf08-055b55dd785a-kube-api-access-4fql4\") pod \"telemeter-client-5d4d5995f-s5dw8\" (UID: \"e5ae1886-f90c-49f4-bf08-055b55dd785a\") " pod="openshift-monitoring/telemeter-client-5d4d5995f-s5dw8"
Mar 18 09:04:20.686584 master-0 kubenswrapper[28766]: I0318 09:04:20.686452 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4r7hx\" (UniqueName: \"kubernetes.io/projected/4146a62d-e37b-4295-90ca-b23f5e3d1112-kube-api-access-4r7hx\") pod \"node-exporter-75szk\" (UID: \"4146a62d-e37b-4295-90ca-b23f5e3d1112\") " pod="openshift-monitoring/node-exporter-75szk"
Mar 18 09:04:20.707721 master-0 kubenswrapper[28766]: I0318 09:04:20.707633 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ks4jl\" (UniqueName: \"kubernetes.io/projected/e0bb044f-5a4e-4981-8084-91348ce1a56a-kube-api-access-ks4jl\") pod \"multus-admission-controller-58c9f8fc64-zgrts\" (UID: \"e0bb044f-5a4e-4981-8084-91348ce1a56a\") " pod="openshift-multus/multus-admission-controller-58c9f8fc64-zgrts"
Mar 18 09:04:20.710900 master-0 kubenswrapper[28766]: I0318 09:04:20.710826 28766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e0d127be-2d13-449b-915b-2d49052baf02-kube-api-access\") pod \"e0d127be-2d13-449b-915b-2d49052baf02\" (UID: \"e0d127be-2d13-449b-915b-2d49052baf02\") "
Mar 18 09:04:20.714129 master-0 kubenswrapper[28766]: I0318 09:04:20.714064 28766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e0d127be-2d13-449b-915b-2d49052baf02-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e0d127be-2d13-449b-915b-2d49052baf02" (UID: "e0d127be-2d13-449b-915b-2d49052baf02"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 18 09:04:20.727305 master-0 kubenswrapper[28766]: I0318 09:04:20.727246 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ttnk9\" (UniqueName: \"kubernetes.io/projected/d0272f7c-bedc-44cf-9790-88e10e6dda03-kube-api-access-ttnk9\") pod \"ingress-canary-mpw9b\" (UID: \"d0272f7c-bedc-44cf-9790-88e10e6dda03\") " pod="openshift-ingress-canary/ingress-canary-mpw9b"
Mar 18 09:04:20.750249 master-0 kubenswrapper[28766]: I0318 09:04:20.748722 28766 scope.go:117] "RemoveContainer" containerID="9e39226f66d3647b6d3e60dfa41a65af602b2c0ac717809011f105e2b66ccbc2"
Mar 18 09:04:20.769186 master-0 kubenswrapper[28766]: I0318 09:04:20.769046 28766 kubelet_pods.go:1320] "Clean up containers for orphaned pod we had not seen before" podUID="49fac1b46a11e49501805e891baae4a9" killPodOptions=""
Mar 18 09:04:20.769676 master-0 kubenswrapper[28766]: E0318 09:04:20.769635 28766 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.537s"
Mar 18 09:04:20.769676 master-0 kubenswrapper[28766]: I0318 09:04:20.769670 28766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-master-0"
Mar 18 09:04:20.769850 master-0 kubenswrapper[28766]: I0318 09:04:20.769696 28766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 18 09:04:20.769850 master-0 kubenswrapper[28766]: I0318 09:04:20.769812 28766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-master-0"
Mar 18 09:04:20.770049 master-0 kubenswrapper[28766]: I0318 09:04:20.769875 28766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 18 09:04:20.770049 master-0 kubenswrapper[28766]: I0318 09:04:20.769914 28766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd"
Mar 18 09:04:20.770049 master-0 kubenswrapper[28766]: I0318 09:04:20.769927 28766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-master-0"]
Mar 18 09:04:20.807602 master-0 kubenswrapper[28766]: I0318 09:04:20.806329 28766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49fac1b46a11e49501805e891baae4a9" path="/var/lib/kubelet/pods/49fac1b46a11e49501805e891baae4a9/volumes"
Mar 18 09:04:20.807602 master-0 kubenswrapper[28766]: I0318 09:04:20.806813 28766 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" podUID=""
Mar 18 09:04:20.816066 master-0 kubenswrapper[28766]: I0318 09:04:20.812722 28766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e0d127be-2d13-449b-915b-2d49052baf02-kube-api-access\") on node \"master-0\" DevicePath \"\""
Mar 18 09:04:20.857117 master-0 kubenswrapper[28766]: I0318 09:04:20.857071 28766 kubelet_node_status.go:115] "Node was previously registered" node="master-0"
Mar 18 09:04:20.857240 master-0 kubenswrapper[28766]: I0318 09:04:20.857207 28766 kubelet_node_status.go:79] "Successfully registered node" node="master-0"
Mar 18 09:04:20.858942 master-0 kubenswrapper[28766]: I0318 09:04:20.858677 28766 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 09:04:20.858942 master-0 kubenswrapper[28766]: [-]has-synced failed: reason withheld
Mar 18 09:04:20.858942 master-0 kubenswrapper[28766]: [+]process-running ok
Mar 18 09:04:20.858942 master-0 kubenswrapper[28766]: healthz check failed
Mar 18 09:04:20.858942 master-0 kubenswrapper[28766]: I0318 09:04:20.858748 28766 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 09:04:20.887315 master-0 kubenswrapper[28766]: I0318 09:04:20.876176 28766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"]
Mar 18 09:04:20.887315 master-0 kubenswrapper[28766]: I0318 09:04:20.876218 28766 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" mirrorPodUID="a0d3f5cc-10b4-4bfe-8f71-c5053b35a5ba"
Mar 18 09:04:20.887315 master-0 kubenswrapper[28766]: I0318 09:04:20.876321 28766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 18 09:04:20.887315 master-0 kubenswrapper[28766]: I0318 09:04:20.876347 28766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-3-master-0" event={"ID":"e0d127be-2d13-449b-915b-2d49052baf02","Type":"ContainerDied","Data":"548400f1bcdf7de3d454a40cdac983932202fdf4d758178348c7545ba7209bcb"}
Mar 18 09:04:20.887315 master-0 kubenswrapper[28766]: I0318 09:04:20.876376 28766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="548400f1bcdf7de3d454a40cdac983932202fdf4d758178348c7545ba7209bcb"
Mar 18 09:04:20.887315 master-0 kubenswrapper[28766]: I0318 09:04:20.876402 28766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"]
Mar 18 09:04:20.887315 master-0 kubenswrapper[28766]: I0318 09:04:20.876418 28766 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" mirrorPodUID="a0d3f5cc-10b4-4bfe-8f71-c5053b35a5ba"
Mar 18 09:04:20.887315 master-0 kubenswrapper[28766]: I0318 09:04:20.876553 28766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-pk9z9"
Mar 18 09:04:20.887315 master-0 kubenswrapper[28766]: I0318 09:04:20.876639 28766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-7bb69b5c5c-djsr9"
Mar 18 09:04:20.887315 master-0 kubenswrapper[28766]: I0318 09:04:20.876689 28766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 18 09:04:20.887315 master-0 kubenswrapper[28766]: I0318 09:04:20.876729 28766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-chjqr"
Mar 18 09:04:20.887315 master-0 kubenswrapper[28766]: I0318 09:04:20.877119 28766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-chjqr"
Mar 18 09:04:20.887315 master-0 kubenswrapper[28766]: I0318 09:04:20.877278 28766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-jg58c"
Mar 18 09:04:20.887315 master-0 kubenswrapper[28766]: I0318 09:04:20.877378 28766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-75749f878-qxnvp"
Mar 18 09:04:20.887315 master-0 kubenswrapper[28766]: I0318 09:04:20.877921 28766 kubelet.go:2542]
"SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-5f48d895dc-ttr9f" Mar 18 09:04:20.887315 master-0 kubenswrapper[28766]: I0318 09:04:20.878005 28766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-78szh" Mar 18 09:04:20.887315 master-0 kubenswrapper[28766]: I0318 09:04:20.878033 28766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-75749f878-qxnvp" Mar 18 09:04:20.887315 master-0 kubenswrapper[28766]: I0318 09:04:20.878083 28766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-556c8fbcff-5shs8" Mar 18 09:04:20.887315 master-0 kubenswrapper[28766]: I0318 09:04:20.878142 28766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-vng9w" Mar 18 09:04:20.887315 master-0 kubenswrapper[28766]: I0318 09:04:20.878160 28766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-5f48d895dc-ttr9f" Mar 18 09:04:20.887315 master-0 kubenswrapper[28766]: I0318 09:04:20.878187 28766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-q8ff6" Mar 18 09:04:20.887315 master-0 kubenswrapper[28766]: I0318 09:04:20.878232 28766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-7kfrh" Mar 18 09:04:20.887315 master-0 kubenswrapper[28766]: I0318 09:04:20.878254 28766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-q8ff6" Mar 18 09:04:20.887315 master-0 kubenswrapper[28766]: I0318 09:04:20.878271 28766 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-7kfrh" Mar 18 09:04:20.887315 master-0 kubenswrapper[28766]: I0318 09:04:20.878281 28766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" Mar 18 09:04:20.887315 master-0 kubenswrapper[28766]: I0318 09:04:20.878292 28766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-pk9z9" Mar 18 09:04:20.887315 master-0 kubenswrapper[28766]: I0318 09:04:20.878311 28766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-pk9z9" Mar 18 09:04:20.887315 master-0 kubenswrapper[28766]: I0318 09:04:20.878331 28766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-89ccd998f-bcwsv" Mar 18 09:04:20.887315 master-0 kubenswrapper[28766]: I0318 09:04:20.878346 28766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-89ccd998f-bcwsv" Mar 18 09:04:20.887315 master-0 kubenswrapper[28766]: I0318 09:04:20.878354 28766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-7bb69b5c5c-djsr9" Mar 18 09:04:20.887315 master-0 kubenswrapper[28766]: I0318 09:04:20.878373 28766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-7bb69b5c5c-djsr9" Mar 18 09:04:20.887315 master-0 kubenswrapper[28766]: I0318 09:04:20.878393 28766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-ck7b5" Mar 18 09:04:20.887315 master-0 kubenswrapper[28766]: I0318 09:04:20.878403 28766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-jg58c" Mar 18 09:04:20.887315 master-0 kubenswrapper[28766]: I0318 
09:04:20.878423 28766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-78szh" Mar 18 09:04:20.887315 master-0 kubenswrapper[28766]: I0318 09:04:20.878443 28766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-swdsh" Mar 18 09:04:20.887315 master-0 kubenswrapper[28766]: I0318 09:04:20.878471 28766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-8b7l7" Mar 18 09:04:20.887315 master-0 kubenswrapper[28766]: I0318 09:04:20.878509 28766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-cxws9" Mar 18 09:04:20.887315 master-0 kubenswrapper[28766]: I0318 09:04:20.878540 28766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-cxws9" Mar 18 09:04:20.887315 master-0 kubenswrapper[28766]: I0318 09:04:20.878558 28766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-556c8fbcff-5shs8" Mar 18 09:04:20.887315 master-0 kubenswrapper[28766]: I0318 09:04:20.878576 28766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-vng9w" Mar 18 09:04:20.887315 master-0 kubenswrapper[28766]: I0318 09:04:20.878617 28766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-sl5kr" Mar 18 09:04:20.887315 master-0 kubenswrapper[28766]: I0318 09:04:20.878681 28766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-phjp8" Mar 18 09:04:20.887315 master-0 kubenswrapper[28766]: I0318 09:04:20.878761 28766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-controller-manager/controller-manager-6448dc88d8-cnd9q" Mar 18 09:04:20.887315 master-0 kubenswrapper[28766]: I0318 09:04:20.879321 28766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-ck7b5" Mar 18 09:04:20.887315 master-0 kubenswrapper[28766]: I0318 09:04:20.879421 28766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 09:04:20.887315 master-0 kubenswrapper[28766]: I0318 09:04:20.879905 28766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-master-0" Mar 18 09:04:20.887315 master-0 kubenswrapper[28766]: I0318 09:04:20.885252 28766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-556c8fbcff-5shs8" Mar 18 09:04:20.887315 master-0 kubenswrapper[28766]: I0318 09:04:20.885694 28766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-6448dc88d8-cnd9q" Mar 18 09:04:20.893080 master-0 kubenswrapper[28766]: I0318 09:04:20.887827 28766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-sl5kr" Mar 18 09:04:20.893080 master-0 kubenswrapper[28766]: I0318 09:04:20.888182 28766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-swdsh" Mar 18 09:04:20.893080 master-0 kubenswrapper[28766]: I0318 09:04:20.888885 28766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-8b7l7" Mar 18 09:04:20.893080 master-0 kubenswrapper[28766]: I0318 09:04:20.888965 28766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-phjp8" Mar 18 
09:04:20.912167 master-0 kubenswrapper[28766]: I0318 09:04:20.912113 28766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-cxws9" Mar 18 09:04:20.933271 master-0 kubenswrapper[28766]: I0318 09:04:20.933218 28766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-78szh" Mar 18 09:04:20.935022 master-0 kubenswrapper[28766]: I0318 09:04:20.934983 28766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-jg58c" Mar 18 09:04:20.935663 master-0 kubenswrapper[28766]: I0318 09:04:20.935633 28766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-vng9w" Mar 18 09:04:20.936586 master-0 kubenswrapper[28766]: I0318 09:04:20.936550 28766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-cxws9" Mar 18 09:04:21.071068 master-0 kubenswrapper[28766]: I0318 09:04:21.070989 28766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-monitoring/metrics-server-59f88c66c8-z4c2f" Mar 18 09:04:21.132017 master-0 kubenswrapper[28766]: I0318 09:04:21.131952 28766 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 18 09:04:21.643129 master-0 kubenswrapper[28766]: I0318 09:04:21.643061 28766 scope.go:117] "RemoveContainer" containerID="b0564925d47f5840821e3c795a9cfcae45b42d4975ada3f3aedc3639ab59cfb5" Mar 18 09:04:21.645338 master-0 kubenswrapper[28766]: I0318 09:04:21.645293 28766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_b45ea2ef1cf2bc9d1d994d6538ae0a64/kube-apiserver-check-endpoints/0.log" Mar 18 09:04:21.648139 master-0 kubenswrapper[28766]: I0318 09:04:21.648069 28766 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"b45ea2ef1cf2bc9d1d994d6538ae0a64","Type":"ContainerStarted","Data":"fd04c0ae7c08b8198597e5502af97eb5a8cb5c68baa45502becc03ff771f706b"} Mar 18 09:04:21.648493 master-0 kubenswrapper[28766]: I0318 09:04:21.648432 28766 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 18 09:04:21.667120 master-0 kubenswrapper[28766]: I0318 09:04:21.667071 28766 scope.go:117] "RemoveContainer" containerID="5ec3e7108eee8c08ca66f6f618d1955dea098f10f4832f7e925bd7f46bce001f" Mar 18 09:04:21.692919 master-0 kubenswrapper[28766]: I0318 09:04:21.692521 28766 scope.go:117] "RemoveContainer" containerID="f2d4d2d49e0c856fff93c30b0d719c8529754ea148952a7ef6bb3db593f16a16" Mar 18 09:04:21.859196 master-0 kubenswrapper[28766]: I0318 09:04:21.859129 28766 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 09:04:21.859196 master-0 kubenswrapper[28766]: [-]has-synced failed: reason withheld Mar 18 09:04:21.859196 master-0 kubenswrapper[28766]: [+]process-running ok Mar 18 09:04:21.859196 master-0 kubenswrapper[28766]: healthz check failed Mar 18 09:04:21.859578 master-0 kubenswrapper[28766]: I0318 09:04:21.859224 28766 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 09:04:22.660809 master-0 kubenswrapper[28766]: I0318 09:04:22.660745 28766 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 18 09:04:22.661733 master-0 kubenswrapper[28766]: I0318 09:04:22.661672 28766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 18 09:04:22.859685 master-0 kubenswrapper[28766]: I0318 09:04:22.859601 28766 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 09:04:22.859685 master-0 kubenswrapper[28766]: [-]has-synced failed: reason withheld Mar 18 09:04:22.859685 master-0 kubenswrapper[28766]: [+]process-running ok Mar 18 09:04:22.859685 master-0 kubenswrapper[28766]: healthz check failed Mar 18 09:04:22.860141 master-0 kubenswrapper[28766]: I0318 09:04:22.859696 28766 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 09:04:23.633052 master-0 kubenswrapper[28766]: I0318 09:04:23.632938 28766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-master-0" podStartSLOduration=3.63291399 podStartE2EDuration="3.63291399s" podCreationTimestamp="2026-03-18 09:04:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 09:04:23.630727692 +0000 UTC m=+16.644986368" watchObservedRunningTime="2026-03-18 09:04:23.63291399 +0000 UTC m=+16.647172666" Mar 18 09:04:23.862252 master-0 kubenswrapper[28766]: I0318 09:04:23.862176 28766 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 09:04:23.862252 master-0 kubenswrapper[28766]: [-]has-synced failed: reason withheld Mar 18 09:04:23.862252 master-0 
kubenswrapper[28766]: [+]process-running ok Mar 18 09:04:23.862252 master-0 kubenswrapper[28766]: healthz check failed Mar 18 09:04:23.863267 master-0 kubenswrapper[28766]: I0318 09:04:23.862263 28766 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 09:04:24.859258 master-0 kubenswrapper[28766]: I0318 09:04:24.859176 28766 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 09:04:24.859258 master-0 kubenswrapper[28766]: [-]has-synced failed: reason withheld Mar 18 09:04:24.859258 master-0 kubenswrapper[28766]: [+]process-running ok Mar 18 09:04:24.859258 master-0 kubenswrapper[28766]: healthz check failed Mar 18 09:04:24.859580 master-0 kubenswrapper[28766]: I0318 09:04:24.859292 28766 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 09:04:25.150454 master-0 kubenswrapper[28766]: I0318 09:04:25.150344 28766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-7bb69b5c5c-djsr9" Mar 18 09:04:25.464492 master-0 kubenswrapper[28766]: I0318 09:04:25.464346 28766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-556c8fbcff-5shs8" Mar 18 09:04:25.858959 master-0 kubenswrapper[28766]: I0318 09:04:25.858890 28766 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP 
probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 09:04:25.858959 master-0 kubenswrapper[28766]: [-]has-synced failed: reason withheld Mar 18 09:04:25.858959 master-0 kubenswrapper[28766]: [+]process-running ok Mar 18 09:04:25.858959 master-0 kubenswrapper[28766]: healthz check failed Mar 18 09:04:25.859296 master-0 kubenswrapper[28766]: I0318 09:04:25.859005 28766 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 09:04:26.862727 master-0 kubenswrapper[28766]: I0318 09:04:26.862627 28766 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 09:04:26.862727 master-0 kubenswrapper[28766]: [-]has-synced failed: reason withheld Mar 18 09:04:26.862727 master-0 kubenswrapper[28766]: [+]process-running ok Mar 18 09:04:26.862727 master-0 kubenswrapper[28766]: healthz check failed Mar 18 09:04:26.863483 master-0 kubenswrapper[28766]: I0318 09:04:26.862762 28766 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 09:04:27.858387 master-0 kubenswrapper[28766]: I0318 09:04:27.858319 28766 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 09:04:27.858387 master-0 kubenswrapper[28766]: [-]has-synced failed: reason withheld Mar 18 09:04:27.858387 
master-0 kubenswrapper[28766]: [+]process-running ok Mar 18 09:04:27.858387 master-0 kubenswrapper[28766]: healthz check failed Mar 18 09:04:27.858748 master-0 kubenswrapper[28766]: I0318 09:04:27.858400 28766 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 09:04:28.535683 master-0 kubenswrapper[28766]: I0318 09:04:28.535584 28766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/metrics-server-59f88c66c8-z4c2f" Mar 18 09:04:28.858014 master-0 kubenswrapper[28766]: I0318 09:04:28.857835 28766 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 09:04:28.858014 master-0 kubenswrapper[28766]: [-]has-synced failed: reason withheld Mar 18 09:04:28.858014 master-0 kubenswrapper[28766]: [+]process-running ok Mar 18 09:04:28.858014 master-0 kubenswrapper[28766]: healthz check failed Mar 18 09:04:28.858014 master-0 kubenswrapper[28766]: I0318 09:04:28.857939 28766 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 09:04:28.994884 master-0 kubenswrapper[28766]: I0318 09:04:28.994641 28766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-76b6568d85-hmnwh"] Mar 18 09:04:28.995188 master-0 kubenswrapper[28766]: E0318 09:04:28.994974 28766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3068e569-5a4e-4fc3-88f4-5684d093c8e6" containerName="installer" Mar 18 09:04:28.995188 master-0 
kubenswrapper[28766]: I0318 09:04:28.994992 28766 state_mem.go:107] "Deleted CPUSet assignment" podUID="3068e569-5a4e-4fc3-88f4-5684d093c8e6" containerName="installer" Mar 18 09:04:28.995188 master-0 kubenswrapper[28766]: E0318 09:04:28.995011 28766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c229b92d307e46237f6273edcc98d387" containerName="cluster-policy-controller" Mar 18 09:04:28.995188 master-0 kubenswrapper[28766]: I0318 09:04:28.995022 28766 state_mem.go:107] "Deleted CPUSet assignment" podUID="c229b92d307e46237f6273edcc98d387" containerName="cluster-policy-controller" Mar 18 09:04:28.995188 master-0 kubenswrapper[28766]: E0318 09:04:28.995040 28766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c229b92d307e46237f6273edcc98d387" containerName="kube-controller-manager-recovery-controller" Mar 18 09:04:28.995188 master-0 kubenswrapper[28766]: I0318 09:04:28.995052 28766 state_mem.go:107] "Deleted CPUSet assignment" podUID="c229b92d307e46237f6273edcc98d387" containerName="kube-controller-manager-recovery-controller" Mar 18 09:04:28.995188 master-0 kubenswrapper[28766]: E0318 09:04:28.995064 28766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49fac1b46a11e49501805e891baae4a9" containerName="setup" Mar 18 09:04:28.995188 master-0 kubenswrapper[28766]: I0318 09:04:28.995073 28766 state_mem.go:107] "Deleted CPUSet assignment" podUID="49fac1b46a11e49501805e891baae4a9" containerName="setup" Mar 18 09:04:28.995188 master-0 kubenswrapper[28766]: E0318 09:04:28.995086 28766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c229b92d307e46237f6273edcc98d387" containerName="kube-controller-manager" Mar 18 09:04:28.995188 master-0 kubenswrapper[28766]: I0318 09:04:28.995095 28766 state_mem.go:107] "Deleted CPUSet assignment" podUID="c229b92d307e46237f6273edcc98d387" containerName="kube-controller-manager" Mar 18 09:04:28.995188 master-0 kubenswrapper[28766]: E0318 09:04:28.995107 28766 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0fcf6eb0-f4dd-41dd-86ee-9bcb9996546d" containerName="pruner" Mar 18 09:04:28.995188 master-0 kubenswrapper[28766]: I0318 09:04:28.995115 28766 state_mem.go:107] "Deleted CPUSet assignment" podUID="0fcf6eb0-f4dd-41dd-86ee-9bcb9996546d" containerName="pruner" Mar 18 09:04:28.995188 master-0 kubenswrapper[28766]: E0318 09:04:28.995127 28766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49fac1b46a11e49501805e891baae4a9" containerName="kube-apiserver-insecure-readyz" Mar 18 09:04:28.995188 master-0 kubenswrapper[28766]: I0318 09:04:28.995135 28766 state_mem.go:107] "Deleted CPUSet assignment" podUID="49fac1b46a11e49501805e891baae4a9" containerName="kube-apiserver-insecure-readyz" Mar 18 09:04:28.995188 master-0 kubenswrapper[28766]: E0318 09:04:28.995150 28766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bfb95119-ed96-428c-8a9b-7e29f48b5d4b" containerName="installer" Mar 18 09:04:28.995188 master-0 kubenswrapper[28766]: I0318 09:04:28.995159 28766 state_mem.go:107] "Deleted CPUSet assignment" podUID="bfb95119-ed96-428c-8a9b-7e29f48b5d4b" containerName="installer" Mar 18 09:04:28.995188 master-0 kubenswrapper[28766]: E0318 09:04:28.995170 28766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1edfa49b-d0e7-4324-aace-b115b41ddae0" containerName="installer" Mar 18 09:04:28.995188 master-0 kubenswrapper[28766]: I0318 09:04:28.995178 28766 state_mem.go:107] "Deleted CPUSet assignment" podUID="1edfa49b-d0e7-4324-aace-b115b41ddae0" containerName="installer" Mar 18 09:04:28.995188 master-0 kubenswrapper[28766]: E0318 09:04:28.995190 28766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c83737980b9ee109184b1d78e942cf36" containerName="kube-scheduler" Mar 18 09:04:28.995188 master-0 kubenswrapper[28766]: I0318 09:04:28.995200 28766 state_mem.go:107] "Deleted CPUSet assignment" podUID="c83737980b9ee109184b1d78e942cf36" 
containerName="kube-scheduler" Mar 18 09:04:28.995188 master-0 kubenswrapper[28766]: E0318 09:04:28.995215 28766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97215428-2d5d-460f-947c-f2a490bc428d" containerName="assisted-installer-controller" Mar 18 09:04:28.995798 master-0 kubenswrapper[28766]: I0318 09:04:28.995225 28766 state_mem.go:107] "Deleted CPUSet assignment" podUID="97215428-2d5d-460f-947c-f2a490bc428d" containerName="assisted-installer-controller" Mar 18 09:04:28.995798 master-0 kubenswrapper[28766]: E0318 09:04:28.995244 28766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0d127be-2d13-449b-915b-2d49052baf02" containerName="installer" Mar 18 09:04:28.995798 master-0 kubenswrapper[28766]: I0318 09:04:28.995253 28766 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0d127be-2d13-449b-915b-2d49052baf02" containerName="installer" Mar 18 09:04:28.995798 master-0 kubenswrapper[28766]: E0318 09:04:28.995265 28766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="005a0b4c-8e2d-4483-98e9-55badf7099c5" containerName="installer" Mar 18 09:04:28.995798 master-0 kubenswrapper[28766]: I0318 09:04:28.995274 28766 state_mem.go:107] "Deleted CPUSet assignment" podUID="005a0b4c-8e2d-4483-98e9-55badf7099c5" containerName="installer" Mar 18 09:04:28.995798 master-0 kubenswrapper[28766]: E0318 09:04:28.995287 28766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ecff6b2-dbd4-4366-873b-2170d0b76c0f" containerName="installer" Mar 18 09:04:28.995798 master-0 kubenswrapper[28766]: I0318 09:04:28.995298 28766 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ecff6b2-dbd4-4366-873b-2170d0b76c0f" containerName="installer" Mar 18 09:04:28.995798 master-0 kubenswrapper[28766]: E0318 09:04:28.995309 28766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c229b92d307e46237f6273edcc98d387" containerName="kube-controller-manager-cert-syncer" Mar 18 09:04:28.995798 master-0 
kubenswrapper[28766]: I0318 09:04:28.995318 28766 state_mem.go:107] "Deleted CPUSet assignment" podUID="c229b92d307e46237f6273edcc98d387" containerName="kube-controller-manager-cert-syncer" Mar 18 09:04:28.995798 master-0 kubenswrapper[28766]: E0318 09:04:28.995335 28766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62a1fcda-ce2f-4d14-ab37-10a21e30fc30" containerName="installer" Mar 18 09:04:28.995798 master-0 kubenswrapper[28766]: I0318 09:04:28.995343 28766 state_mem.go:107] "Deleted CPUSet assignment" podUID="62a1fcda-ce2f-4d14-ab37-10a21e30fc30" containerName="installer" Mar 18 09:04:28.995798 master-0 kubenswrapper[28766]: E0318 09:04:28.995357 28766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49fac1b46a11e49501805e891baae4a9" containerName="kube-apiserver" Mar 18 09:04:28.995798 master-0 kubenswrapper[28766]: I0318 09:04:28.995366 28766 state_mem.go:107] "Deleted CPUSet assignment" podUID="49fac1b46a11e49501805e891baae4a9" containerName="kube-apiserver" Mar 18 09:04:28.995798 master-0 kubenswrapper[28766]: E0318 09:04:28.995380 28766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6fb9336-3f19-4220-93ee-a5a61e26340b" containerName="installer" Mar 18 09:04:28.995798 master-0 kubenswrapper[28766]: I0318 09:04:28.995388 28766 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6fb9336-3f19-4220-93ee-a5a61e26340b" containerName="installer" Mar 18 09:04:28.995798 master-0 kubenswrapper[28766]: E0318 09:04:28.995405 28766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="28d2bb97-ff93-4772-96fd-318fa62e3a87" containerName="installer" Mar 18 09:04:28.995798 master-0 kubenswrapper[28766]: I0318 09:04:28.995414 28766 state_mem.go:107] "Deleted CPUSet assignment" podUID="28d2bb97-ff93-4772-96fd-318fa62e3a87" containerName="installer" Mar 18 09:04:28.995798 master-0 kubenswrapper[28766]: I0318 09:04:28.995574 28766 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="1edfa49b-d0e7-4324-aace-b115b41ddae0" containerName="installer" Mar 18 09:04:28.995798 master-0 kubenswrapper[28766]: I0318 09:04:28.995605 28766 memory_manager.go:354] "RemoveStaleState removing state" podUID="97215428-2d5d-460f-947c-f2a490bc428d" containerName="assisted-installer-controller" Mar 18 09:04:28.995798 master-0 kubenswrapper[28766]: I0318 09:04:28.995619 28766 memory_manager.go:354] "RemoveStaleState removing state" podUID="28d2bb97-ff93-4772-96fd-318fa62e3a87" containerName="installer" Mar 18 09:04:28.995798 master-0 kubenswrapper[28766]: I0318 09:04:28.995630 28766 memory_manager.go:354] "RemoveStaleState removing state" podUID="3068e569-5a4e-4fc3-88f4-5684d093c8e6" containerName="installer" Mar 18 09:04:28.995798 master-0 kubenswrapper[28766]: I0318 09:04:28.995647 28766 memory_manager.go:354] "RemoveStaleState removing state" podUID="c83737980b9ee109184b1d78e942cf36" containerName="kube-scheduler" Mar 18 09:04:28.995798 master-0 kubenswrapper[28766]: I0318 09:04:28.995665 28766 memory_manager.go:354] "RemoveStaleState removing state" podUID="1ecff6b2-dbd4-4366-873b-2170d0b76c0f" containerName="installer" Mar 18 09:04:28.995798 master-0 kubenswrapper[28766]: I0318 09:04:28.995683 28766 memory_manager.go:354] "RemoveStaleState removing state" podUID="49fac1b46a11e49501805e891baae4a9" containerName="kube-apiserver-insecure-readyz" Mar 18 09:04:28.995798 master-0 kubenswrapper[28766]: I0318 09:04:28.995699 28766 memory_manager.go:354] "RemoveStaleState removing state" podUID="c229b92d307e46237f6273edcc98d387" containerName="kube-controller-manager" Mar 18 09:04:28.995798 master-0 kubenswrapper[28766]: I0318 09:04:28.995715 28766 memory_manager.go:354] "RemoveStaleState removing state" podUID="c229b92d307e46237f6273edcc98d387" containerName="kube-controller-manager-cert-syncer" Mar 18 09:04:28.995798 master-0 kubenswrapper[28766]: I0318 09:04:28.995733 28766 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="49fac1b46a11e49501805e891baae4a9" containerName="setup" Mar 18 09:04:28.995798 master-0 kubenswrapper[28766]: I0318 09:04:28.995746 28766 memory_manager.go:354] "RemoveStaleState removing state" podUID="c229b92d307e46237f6273edcc98d387" containerName="kube-controller-manager-recovery-controller" Mar 18 09:04:28.995798 master-0 kubenswrapper[28766]: I0318 09:04:28.995761 28766 memory_manager.go:354] "RemoveStaleState removing state" podUID="62a1fcda-ce2f-4d14-ab37-10a21e30fc30" containerName="installer" Mar 18 09:04:28.995798 master-0 kubenswrapper[28766]: I0318 09:04:28.995779 28766 memory_manager.go:354] "RemoveStaleState removing state" podUID="49fac1b46a11e49501805e891baae4a9" containerName="kube-apiserver" Mar 18 09:04:28.995798 master-0 kubenswrapper[28766]: I0318 09:04:28.995794 28766 memory_manager.go:354] "RemoveStaleState removing state" podUID="c6fb9336-3f19-4220-93ee-a5a61e26340b" containerName="installer" Mar 18 09:04:28.995798 master-0 kubenswrapper[28766]: I0318 09:04:28.995809 28766 memory_manager.go:354] "RemoveStaleState removing state" podUID="005a0b4c-8e2d-4483-98e9-55badf7099c5" containerName="installer" Mar 18 09:04:28.995798 master-0 kubenswrapper[28766]: I0318 09:04:28.995825 28766 memory_manager.go:354] "RemoveStaleState removing state" podUID="e0d127be-2d13-449b-915b-2d49052baf02" containerName="installer" Mar 18 09:04:28.997077 master-0 kubenswrapper[28766]: I0318 09:04:28.995844 28766 memory_manager.go:354] "RemoveStaleState removing state" podUID="bfb95119-ed96-428c-8a9b-7e29f48b5d4b" containerName="installer" Mar 18 09:04:28.997077 master-0 kubenswrapper[28766]: I0318 09:04:28.995878 28766 memory_manager.go:354] "RemoveStaleState removing state" podUID="0fcf6eb0-f4dd-41dd-86ee-9bcb9996546d" containerName="pruner" Mar 18 09:04:28.997077 master-0 kubenswrapper[28766]: I0318 09:04:28.995889 28766 memory_manager.go:354] "RemoveStaleState removing state" podUID="c229b92d307e46237f6273edcc98d387" 
containerName="cluster-policy-controller" Mar 18 09:04:28.997077 master-0 kubenswrapper[28766]: I0318 09:04:28.996436 28766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-76b6568d85-hmnwh" Mar 18 09:04:28.999317 master-0 kubenswrapper[28766]: I0318 09:04:28.999276 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Mar 18 09:04:29.002186 master-0 kubenswrapper[28766]: I0318 09:04:28.999435 28766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Mar 18 09:04:29.002186 master-0 kubenswrapper[28766]: I0318 09:04:28.999522 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Mar 18 09:04:29.002186 master-0 kubenswrapper[28766]: I0318 09:04:28.999821 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Mar 18 09:04:29.008739 master-0 kubenswrapper[28766]: I0318 09:04:29.008541 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Mar 18 09:04:29.010200 master-0 kubenswrapper[28766]: I0318 09:04:29.010025 28766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-76b6568d85-hmnwh"] Mar 18 09:04:29.175847 master-0 kubenswrapper[28766]: I0318 09:04:29.175607 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c7d313bd-ea1e-4ebf-a6a9-4e17ae4e4075-trusted-ca\") pod \"console-operator-76b6568d85-hmnwh\" (UID: \"c7d313bd-ea1e-4ebf-a6a9-4e17ae4e4075\") " pod="openshift-console-operator/console-operator-76b6568d85-hmnwh" Mar 18 09:04:29.176132 master-0 kubenswrapper[28766]: I0318 09:04:29.175973 28766 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c7d313bd-ea1e-4ebf-a6a9-4e17ae4e4075-serving-cert\") pod \"console-operator-76b6568d85-hmnwh\" (UID: \"c7d313bd-ea1e-4ebf-a6a9-4e17ae4e4075\") " pod="openshift-console-operator/console-operator-76b6568d85-hmnwh" Mar 18 09:04:29.176132 master-0 kubenswrapper[28766]: I0318 09:04:29.176076 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sxwwn\" (UniqueName: \"kubernetes.io/projected/c7d313bd-ea1e-4ebf-a6a9-4e17ae4e4075-kube-api-access-sxwwn\") pod \"console-operator-76b6568d85-hmnwh\" (UID: \"c7d313bd-ea1e-4ebf-a6a9-4e17ae4e4075\") " pod="openshift-console-operator/console-operator-76b6568d85-hmnwh" Mar 18 09:04:29.176389 master-0 kubenswrapper[28766]: I0318 09:04:29.176353 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c7d313bd-ea1e-4ebf-a6a9-4e17ae4e4075-config\") pod \"console-operator-76b6568d85-hmnwh\" (UID: \"c7d313bd-ea1e-4ebf-a6a9-4e17ae4e4075\") " pod="openshift-console-operator/console-operator-76b6568d85-hmnwh" Mar 18 09:04:29.278422 master-0 kubenswrapper[28766]: I0318 09:04:29.278293 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c7d313bd-ea1e-4ebf-a6a9-4e17ae4e4075-config\") pod \"console-operator-76b6568d85-hmnwh\" (UID: \"c7d313bd-ea1e-4ebf-a6a9-4e17ae4e4075\") " pod="openshift-console-operator/console-operator-76b6568d85-hmnwh" Mar 18 09:04:29.278422 master-0 kubenswrapper[28766]: I0318 09:04:29.278433 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c7d313bd-ea1e-4ebf-a6a9-4e17ae4e4075-trusted-ca\") pod \"console-operator-76b6568d85-hmnwh\" (UID: \"c7d313bd-ea1e-4ebf-a6a9-4e17ae4e4075\") " 
pod="openshift-console-operator/console-operator-76b6568d85-hmnwh" Mar 18 09:04:29.278996 master-0 kubenswrapper[28766]: I0318 09:04:29.278481 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c7d313bd-ea1e-4ebf-a6a9-4e17ae4e4075-serving-cert\") pod \"console-operator-76b6568d85-hmnwh\" (UID: \"c7d313bd-ea1e-4ebf-a6a9-4e17ae4e4075\") " pod="openshift-console-operator/console-operator-76b6568d85-hmnwh" Mar 18 09:04:29.278996 master-0 kubenswrapper[28766]: I0318 09:04:29.278514 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sxwwn\" (UniqueName: \"kubernetes.io/projected/c7d313bd-ea1e-4ebf-a6a9-4e17ae4e4075-kube-api-access-sxwwn\") pod \"console-operator-76b6568d85-hmnwh\" (UID: \"c7d313bd-ea1e-4ebf-a6a9-4e17ae4e4075\") " pod="openshift-console-operator/console-operator-76b6568d85-hmnwh" Mar 18 09:04:29.279743 master-0 kubenswrapper[28766]: I0318 09:04:29.279688 28766 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. 
Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Mar 18 09:04:29.281023 master-0 kubenswrapper[28766]: I0318 09:04:29.280927 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c7d313bd-ea1e-4ebf-a6a9-4e17ae4e4075-config\") pod \"console-operator-76b6568d85-hmnwh\" (UID: \"c7d313bd-ea1e-4ebf-a6a9-4e17ae4e4075\") " pod="openshift-console-operator/console-operator-76b6568d85-hmnwh" Mar 18 09:04:29.282068 master-0 kubenswrapper[28766]: I0318 09:04:29.281996 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c7d313bd-ea1e-4ebf-a6a9-4e17ae4e4075-trusted-ca\") pod \"console-operator-76b6568d85-hmnwh\" (UID: \"c7d313bd-ea1e-4ebf-a6a9-4e17ae4e4075\") " pod="openshift-console-operator/console-operator-76b6568d85-hmnwh" Mar 18 09:04:29.282735 master-0 kubenswrapper[28766]: I0318 09:04:29.282642 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c7d313bd-ea1e-4ebf-a6a9-4e17ae4e4075-serving-cert\") pod \"console-operator-76b6568d85-hmnwh\" (UID: \"c7d313bd-ea1e-4ebf-a6a9-4e17ae4e4075\") " pod="openshift-console-operator/console-operator-76b6568d85-hmnwh" Mar 18 09:04:29.312946 master-0 kubenswrapper[28766]: I0318 09:04:29.312312 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sxwwn\" (UniqueName: \"kubernetes.io/projected/c7d313bd-ea1e-4ebf-a6a9-4e17ae4e4075-kube-api-access-sxwwn\") pod \"console-operator-76b6568d85-hmnwh\" (UID: \"c7d313bd-ea1e-4ebf-a6a9-4e17ae4e4075\") " pod="openshift-console-operator/console-operator-76b6568d85-hmnwh" Mar 18 09:04:29.324575 master-0 kubenswrapper[28766]: I0318 09:04:29.324466 28766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-76b6568d85-hmnwh" Mar 18 09:04:29.860020 master-0 kubenswrapper[28766]: I0318 09:04:29.859710 28766 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 09:04:29.860020 master-0 kubenswrapper[28766]: [-]has-synced failed: reason withheld Mar 18 09:04:29.860020 master-0 kubenswrapper[28766]: [+]process-running ok Mar 18 09:04:29.860020 master-0 kubenswrapper[28766]: healthz check failed Mar 18 09:04:29.860020 master-0 kubenswrapper[28766]: I0318 09:04:29.859835 28766 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 09:04:29.886589 master-0 kubenswrapper[28766]: I0318 09:04:29.886491 28766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-76b6568d85-hmnwh"] Mar 18 09:04:29.906168 master-0 kubenswrapper[28766]: W0318 09:04:29.906080 28766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc7d313bd_ea1e_4ebf_a6a9_4e17ae4e4075.slice/crio-17cb62a32d151927dc0d5c7df639309555ffab9e6f186d2c0b6a3e4ffa008d05 WatchSource:0}: Error finding container 17cb62a32d151927dc0d5c7df639309555ffab9e6f186d2c0b6a3e4ffa008d05: Status 404 returned error can't find the container with id 17cb62a32d151927dc0d5c7df639309555ffab9e6f186d2c0b6a3e4ffa008d05 Mar 18 09:04:29.908717 master-0 kubenswrapper[28766]: I0318 09:04:29.908667 28766 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 18 09:04:29.937384 master-0 kubenswrapper[28766]: I0318 09:04:29.937299 
28766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-pk9z9" Mar 18 09:04:30.236383 master-0 kubenswrapper[28766]: I0318 09:04:30.236310 28766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-78szh" Mar 18 09:04:30.255541 master-0 kubenswrapper[28766]: I0318 09:04:30.255443 28766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-jg58c" Mar 18 09:04:30.546173 master-0 kubenswrapper[28766]: I0318 09:04:30.546011 28766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-vng9w" Mar 18 09:04:30.729041 master-0 kubenswrapper[28766]: I0318 09:04:30.728977 28766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-76b6568d85-hmnwh" event={"ID":"c7d313bd-ea1e-4ebf-a6a9-4e17ae4e4075","Type":"ContainerStarted","Data":"17cb62a32d151927dc0d5c7df639309555ffab9e6f186d2c0b6a3e4ffa008d05"} Mar 18 09:04:30.859719 master-0 kubenswrapper[28766]: I0318 09:04:30.859539 28766 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 09:04:30.859719 master-0 kubenswrapper[28766]: [-]has-synced failed: reason withheld Mar 18 09:04:30.859719 master-0 kubenswrapper[28766]: [+]process-running ok Mar 18 09:04:30.859719 master-0 kubenswrapper[28766]: healthz check failed Mar 18 09:04:30.859719 master-0 kubenswrapper[28766]: I0318 09:04:30.859687 28766 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 
09:04:31.860495 master-0 kubenswrapper[28766]: I0318 09:04:31.860365 28766 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 09:04:31.860495 master-0 kubenswrapper[28766]: [-]has-synced failed: reason withheld Mar 18 09:04:31.860495 master-0 kubenswrapper[28766]: [+]process-running ok Mar 18 09:04:31.860495 master-0 kubenswrapper[28766]: healthz check failed Mar 18 09:04:31.860495 master-0 kubenswrapper[28766]: I0318 09:04:31.860493 28766 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 09:04:32.859219 master-0 kubenswrapper[28766]: I0318 09:04:32.859128 28766 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 09:04:32.859219 master-0 kubenswrapper[28766]: [-]has-synced failed: reason withheld Mar 18 09:04:32.859219 master-0 kubenswrapper[28766]: [+]process-running ok Mar 18 09:04:32.859219 master-0 kubenswrapper[28766]: healthz check failed Mar 18 09:04:32.859620 master-0 kubenswrapper[28766]: I0318 09:04:32.859248 28766 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 09:04:33.758278 master-0 kubenswrapper[28766]: I0318 09:04:33.758246 28766 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-console-operator_console-operator-76b6568d85-hmnwh_c7d313bd-ea1e-4ebf-a6a9-4e17ae4e4075/console-operator/0.log" Mar 18 09:04:33.759008 master-0 kubenswrapper[28766]: I0318 09:04:33.758987 28766 generic.go:334] "Generic (PLEG): container finished" podID="c7d313bd-ea1e-4ebf-a6a9-4e17ae4e4075" containerID="ec62bdbdd825b60f76e676fd55d948ffeb3f7ae2d2416bcb797b3c9e737594ab" exitCode=255 Mar 18 09:04:33.759198 master-0 kubenswrapper[28766]: I0318 09:04:33.759133 28766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-76b6568d85-hmnwh" event={"ID":"c7d313bd-ea1e-4ebf-a6a9-4e17ae4e4075","Type":"ContainerDied","Data":"ec62bdbdd825b60f76e676fd55d948ffeb3f7ae2d2416bcb797b3c9e737594ab"} Mar 18 09:04:33.759616 master-0 kubenswrapper[28766]: I0318 09:04:33.759585 28766 scope.go:117] "RemoveContainer" containerID="ec62bdbdd825b60f76e676fd55d948ffeb3f7ae2d2416bcb797b3c9e737594ab" Mar 18 09:04:33.860022 master-0 kubenswrapper[28766]: I0318 09:04:33.859953 28766 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 09:04:33.860022 master-0 kubenswrapper[28766]: [-]has-synced failed: reason withheld Mar 18 09:04:33.860022 master-0 kubenswrapper[28766]: [+]process-running ok Mar 18 09:04:33.860022 master-0 kubenswrapper[28766]: healthz check failed Mar 18 09:04:33.860479 master-0 kubenswrapper[28766]: I0318 09:04:33.860038 28766 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 09:04:34.772546 master-0 kubenswrapper[28766]: I0318 09:04:34.772489 28766 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-console-operator_console-operator-76b6568d85-hmnwh_c7d313bd-ea1e-4ebf-a6a9-4e17ae4e4075/console-operator/1.log" Mar 18 09:04:34.773553 master-0 kubenswrapper[28766]: I0318 09:04:34.773342 28766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console-operator_console-operator-76b6568d85-hmnwh_c7d313bd-ea1e-4ebf-a6a9-4e17ae4e4075/console-operator/0.log" Mar 18 09:04:34.773553 master-0 kubenswrapper[28766]: I0318 09:04:34.773393 28766 generic.go:334] "Generic (PLEG): container finished" podID="c7d313bd-ea1e-4ebf-a6a9-4e17ae4e4075" containerID="336d013c9841ac504e593a5fd8d356f16422167dab08d0fec9503ddf83d85897" exitCode=255 Mar 18 09:04:34.773553 master-0 kubenswrapper[28766]: I0318 09:04:34.773429 28766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-76b6568d85-hmnwh" event={"ID":"c7d313bd-ea1e-4ebf-a6a9-4e17ae4e4075","Type":"ContainerDied","Data":"336d013c9841ac504e593a5fd8d356f16422167dab08d0fec9503ddf83d85897"} Mar 18 09:04:34.773553 master-0 kubenswrapper[28766]: I0318 09:04:34.773474 28766 scope.go:117] "RemoveContainer" containerID="ec62bdbdd825b60f76e676fd55d948ffeb3f7ae2d2416bcb797b3c9e737594ab" Mar 18 09:04:34.774508 master-0 kubenswrapper[28766]: I0318 09:04:34.774039 28766 scope.go:117] "RemoveContainer" containerID="336d013c9841ac504e593a5fd8d356f16422167dab08d0fec9503ddf83d85897" Mar 18 09:04:34.774508 master-0 kubenswrapper[28766]: E0318 09:04:34.774308 28766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"console-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=console-operator pod=console-operator-76b6568d85-hmnwh_openshift-console-operator(c7d313bd-ea1e-4ebf-a6a9-4e17ae4e4075)\"" pod="openshift-console-operator/console-operator-76b6568d85-hmnwh" podUID="c7d313bd-ea1e-4ebf-a6a9-4e17ae4e4075" Mar 18 09:04:34.879887 master-0 kubenswrapper[28766]: I0318 09:04:34.876193 28766 
patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 09:04:34.879887 master-0 kubenswrapper[28766]: [-]has-synced failed: reason withheld Mar 18 09:04:34.879887 master-0 kubenswrapper[28766]: [+]process-running ok Mar 18 09:04:34.879887 master-0 kubenswrapper[28766]: healthz check failed Mar 18 09:04:34.879887 master-0 kubenswrapper[28766]: I0318 09:04:34.876314 28766 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 09:04:35.788288 master-0 kubenswrapper[28766]: I0318 09:04:35.788194 28766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console-operator_console-operator-76b6568d85-hmnwh_c7d313bd-ea1e-4ebf-a6a9-4e17ae4e4075/console-operator/1.log" Mar 18 09:04:35.789465 master-0 kubenswrapper[28766]: I0318 09:04:35.788797 28766 scope.go:117] "RemoveContainer" containerID="336d013c9841ac504e593a5fd8d356f16422167dab08d0fec9503ddf83d85897" Mar 18 09:04:35.789465 master-0 kubenswrapper[28766]: E0318 09:04:35.789080 28766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"console-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=console-operator pod=console-operator-76b6568d85-hmnwh_openshift-console-operator(c7d313bd-ea1e-4ebf-a6a9-4e17ae4e4075)\"" pod="openshift-console-operator/console-operator-76b6568d85-hmnwh" podUID="c7d313bd-ea1e-4ebf-a6a9-4e17ae4e4075" Mar 18 09:04:35.859010 master-0 kubenswrapper[28766]: I0318 09:04:35.858894 28766 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure 
output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 09:04:35.859010 master-0 kubenswrapper[28766]: [-]has-synced failed: reason withheld Mar 18 09:04:35.859010 master-0 kubenswrapper[28766]: [+]process-running ok Mar 18 09:04:35.859010 master-0 kubenswrapper[28766]: healthz check failed Mar 18 09:04:35.859629 master-0 kubenswrapper[28766]: I0318 09:04:35.859040 28766 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 09:04:36.858750 master-0 kubenswrapper[28766]: I0318 09:04:36.858674 28766 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-8sbgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 09:04:36.858750 master-0 kubenswrapper[28766]: [-]has-synced failed: reason withheld Mar 18 09:04:36.858750 master-0 kubenswrapper[28766]: [+]process-running ok Mar 18 09:04:36.858750 master-0 kubenswrapper[28766]: healthz check failed Mar 18 09:04:36.859499 master-0 kubenswrapper[28766]: I0318 09:04:36.858787 28766 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" podUID="ad4cf9b2-4e66-4921-a30c-7b659bff06ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 09:04:37.449173 master-0 kubenswrapper[28766]: I0318 09:04:37.449112 28766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 18 09:04:37.641626 master-0 kubenswrapper[28766]: I0318 09:04:37.641567 28766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-cxws9" Mar 18 09:04:37.641940 
master-0 kubenswrapper[28766]: I0318 09:04:37.641834 28766 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 18 09:04:37.665718 master-0 kubenswrapper[28766]: I0318 09:04:37.665675 28766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-cxws9" Mar 18 09:04:37.859775 master-0 kubenswrapper[28766]: I0318 09:04:37.859672 28766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" Mar 18 09:04:37.863835 master-0 kubenswrapper[28766]: I0318 09:04:37.863773 28766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-7dcf5569b5-8sbgd" Mar 18 09:04:39.325753 master-0 kubenswrapper[28766]: I0318 09:04:39.325666 28766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-76b6568d85-hmnwh" Mar 18 09:04:39.325753 master-0 kubenswrapper[28766]: I0318 09:04:39.325767 28766 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-console-operator/console-operator-76b6568d85-hmnwh" Mar 18 09:04:39.326780 master-0 kubenswrapper[28766]: I0318 09:04:39.326731 28766 scope.go:117] "RemoveContainer" containerID="336d013c9841ac504e593a5fd8d356f16422167dab08d0fec9503ddf83d85897" Mar 18 09:04:39.327197 master-0 kubenswrapper[28766]: E0318 09:04:39.327140 28766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"console-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=console-operator pod=console-operator-76b6568d85-hmnwh_openshift-console-operator(c7d313bd-ea1e-4ebf-a6a9-4e17ae4e4075)\"" pod="openshift-console-operator/console-operator-76b6568d85-hmnwh" podUID="c7d313bd-ea1e-4ebf-a6a9-4e17ae4e4075" Mar 18 09:04:39.493581 master-0 kubenswrapper[28766]: I0318 09:04:39.493516 28766 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-authentication/oauth-openshift-6dd57c659d-b5n72"] Mar 18 09:04:39.494835 master-0 kubenswrapper[28766]: I0318 09:04:39.494600 28766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-6dd57c659d-b5n72" Mar 18 09:04:39.499387 master-0 kubenswrapper[28766]: I0318 09:04:39.499338 28766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Mar 18 09:04:39.499387 master-0 kubenswrapper[28766]: I0318 09:04:39.499357 28766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Mar 18 09:04:39.499638 master-0 kubenswrapper[28766]: I0318 09:04:39.499411 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Mar 18 09:04:39.500042 master-0 kubenswrapper[28766]: I0318 09:04:39.500000 28766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-6n58x" Mar 18 09:04:39.500283 master-0 kubenswrapper[28766]: I0318 09:04:39.500228 28766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Mar 18 09:04:39.500452 master-0 kubenswrapper[28766]: I0318 09:04:39.500413 28766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Mar 18 09:04:39.500547 master-0 kubenswrapper[28766]: I0318 09:04:39.500524 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Mar 18 09:04:39.502407 master-0 kubenswrapper[28766]: I0318 09:04:39.502365 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Mar 18 09:04:39.502554 master-0 kubenswrapper[28766]: I0318 09:04:39.502412 28766 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Mar 18 09:04:39.502554 master-0 kubenswrapper[28766]: I0318 09:04:39.502422 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Mar 18 09:04:39.502554 master-0 kubenswrapper[28766]: I0318 09:04:39.502491 28766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Mar 18 09:04:39.502554 master-0 kubenswrapper[28766]: I0318 09:04:39.502530 28766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Mar 18 09:04:39.511062 master-0 kubenswrapper[28766]: I0318 09:04:39.511002 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Mar 18 09:04:39.523940 master-0 kubenswrapper[28766]: I0318 09:04:39.523889 28766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-6dd57c659d-b5n72"] Mar 18 09:04:39.536957 master-0 kubenswrapper[28766]: I0318 09:04:39.535769 28766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Mar 18 09:04:39.618925 master-0 kubenswrapper[28766]: I0318 09:04:39.618721 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49162cd5-4038-4e1b-bbd2-26fbdace96aa-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-6dd57c659d-b5n72\" (UID: \"49162cd5-4038-4e1b-bbd2-26fbdace96aa\") " pod="openshift-authentication/oauth-openshift-6dd57c659d-b5n72" Mar 18 09:04:39.619186 master-0 kubenswrapper[28766]: I0318 09:04:39.618968 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49162cd5-4038-4e1b-bbd2-26fbdace96aa-v4-0-config-user-template-login\") pod \"oauth-openshift-6dd57c659d-b5n72\" (UID: \"49162cd5-4038-4e1b-bbd2-26fbdace96aa\") " pod="openshift-authentication/oauth-openshift-6dd57c659d-b5n72" Mar 18 09:04:39.619186 master-0 kubenswrapper[28766]: I0318 09:04:39.619070 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49162cd5-4038-4e1b-bbd2-26fbdace96aa-v4-0-config-system-cliconfig\") pod \"oauth-openshift-6dd57c659d-b5n72\" (UID: \"49162cd5-4038-4e1b-bbd2-26fbdace96aa\") " pod="openshift-authentication/oauth-openshift-6dd57c659d-b5n72" Mar 18 09:04:39.619339 master-0 kubenswrapper[28766]: I0318 09:04:39.619177 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49162cd5-4038-4e1b-bbd2-26fbdace96aa-v4-0-config-system-router-certs\") pod \"oauth-openshift-6dd57c659d-b5n72\" (UID: \"49162cd5-4038-4e1b-bbd2-26fbdace96aa\") " pod="openshift-authentication/oauth-openshift-6dd57c659d-b5n72" Mar 18 09:04:39.619339 master-0 kubenswrapper[28766]: I0318 09:04:39.619254 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49162cd5-4038-4e1b-bbd2-26fbdace96aa-v4-0-config-system-serving-cert\") pod \"oauth-openshift-6dd57c659d-b5n72\" (UID: \"49162cd5-4038-4e1b-bbd2-26fbdace96aa\") " pod="openshift-authentication/oauth-openshift-6dd57c659d-b5n72" Mar 18 09:04:39.619472 master-0 kubenswrapper[28766]: I0318 09:04:39.619338 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w44lb\" (UniqueName: 
\"kubernetes.io/projected/49162cd5-4038-4e1b-bbd2-26fbdace96aa-kube-api-access-w44lb\") pod \"oauth-openshift-6dd57c659d-b5n72\" (UID: \"49162cd5-4038-4e1b-bbd2-26fbdace96aa\") " pod="openshift-authentication/oauth-openshift-6dd57c659d-b5n72" Mar 18 09:04:39.619472 master-0 kubenswrapper[28766]: I0318 09:04:39.619423 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49162cd5-4038-4e1b-bbd2-26fbdace96aa-v4-0-config-user-template-error\") pod \"oauth-openshift-6dd57c659d-b5n72\" (UID: \"49162cd5-4038-4e1b-bbd2-26fbdace96aa\") " pod="openshift-authentication/oauth-openshift-6dd57c659d-b5n72" Mar 18 09:04:39.619618 master-0 kubenswrapper[28766]: I0318 09:04:39.619499 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49162cd5-4038-4e1b-bbd2-26fbdace96aa-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-6dd57c659d-b5n72\" (UID: \"49162cd5-4038-4e1b-bbd2-26fbdace96aa\") " pod="openshift-authentication/oauth-openshift-6dd57c659d-b5n72" Mar 18 09:04:39.619618 master-0 kubenswrapper[28766]: I0318 09:04:39.619572 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49162cd5-4038-4e1b-bbd2-26fbdace96aa-v4-0-config-system-session\") pod \"oauth-openshift-6dd57c659d-b5n72\" (UID: \"49162cd5-4038-4e1b-bbd2-26fbdace96aa\") " pod="openshift-authentication/oauth-openshift-6dd57c659d-b5n72" Mar 18 09:04:39.619755 master-0 kubenswrapper[28766]: I0318 09:04:39.619627 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49162cd5-4038-4e1b-bbd2-26fbdace96aa-audit-policies\") pod 
\"oauth-openshift-6dd57c659d-b5n72\" (UID: \"49162cd5-4038-4e1b-bbd2-26fbdace96aa\") " pod="openshift-authentication/oauth-openshift-6dd57c659d-b5n72" Mar 18 09:04:39.619755 master-0 kubenswrapper[28766]: I0318 09:04:39.619686 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49162cd5-4038-4e1b-bbd2-26fbdace96aa-v4-0-config-system-service-ca\") pod \"oauth-openshift-6dd57c659d-b5n72\" (UID: \"49162cd5-4038-4e1b-bbd2-26fbdace96aa\") " pod="openshift-authentication/oauth-openshift-6dd57c659d-b5n72" Mar 18 09:04:39.619920 master-0 kubenswrapper[28766]: I0318 09:04:39.619793 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49162cd5-4038-4e1b-bbd2-26fbdace96aa-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-6dd57c659d-b5n72\" (UID: \"49162cd5-4038-4e1b-bbd2-26fbdace96aa\") " pod="openshift-authentication/oauth-openshift-6dd57c659d-b5n72" Mar 18 09:04:39.619920 master-0 kubenswrapper[28766]: I0318 09:04:39.619893 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/49162cd5-4038-4e1b-bbd2-26fbdace96aa-audit-dir\") pod \"oauth-openshift-6dd57c659d-b5n72\" (UID: \"49162cd5-4038-4e1b-bbd2-26fbdace96aa\") " pod="openshift-authentication/oauth-openshift-6dd57c659d-b5n72" Mar 18 09:04:39.721080 master-0 kubenswrapper[28766]: I0318 09:04:39.721018 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w44lb\" (UniqueName: \"kubernetes.io/projected/49162cd5-4038-4e1b-bbd2-26fbdace96aa-kube-api-access-w44lb\") pod \"oauth-openshift-6dd57c659d-b5n72\" (UID: \"49162cd5-4038-4e1b-bbd2-26fbdace96aa\") " pod="openshift-authentication/oauth-openshift-6dd57c659d-b5n72" Mar 18 
09:04:39.721333 master-0 kubenswrapper[28766]: I0318 09:04:39.721094 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49162cd5-4038-4e1b-bbd2-26fbdace96aa-v4-0-config-user-template-error\") pod \"oauth-openshift-6dd57c659d-b5n72\" (UID: \"49162cd5-4038-4e1b-bbd2-26fbdace96aa\") " pod="openshift-authentication/oauth-openshift-6dd57c659d-b5n72" Mar 18 09:04:39.721333 master-0 kubenswrapper[28766]: I0318 09:04:39.721127 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49162cd5-4038-4e1b-bbd2-26fbdace96aa-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-6dd57c659d-b5n72\" (UID: \"49162cd5-4038-4e1b-bbd2-26fbdace96aa\") " pod="openshift-authentication/oauth-openshift-6dd57c659d-b5n72" Mar 18 09:04:39.721333 master-0 kubenswrapper[28766]: I0318 09:04:39.721162 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49162cd5-4038-4e1b-bbd2-26fbdace96aa-v4-0-config-system-session\") pod \"oauth-openshift-6dd57c659d-b5n72\" (UID: \"49162cd5-4038-4e1b-bbd2-26fbdace96aa\") " pod="openshift-authentication/oauth-openshift-6dd57c659d-b5n72" Mar 18 09:04:39.721333 master-0 kubenswrapper[28766]: I0318 09:04:39.721198 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49162cd5-4038-4e1b-bbd2-26fbdace96aa-audit-policies\") pod \"oauth-openshift-6dd57c659d-b5n72\" (UID: \"49162cd5-4038-4e1b-bbd2-26fbdace96aa\") " pod="openshift-authentication/oauth-openshift-6dd57c659d-b5n72" Mar 18 09:04:39.721333 master-0 kubenswrapper[28766]: I0318 09:04:39.721218 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" 
(UniqueName: \"kubernetes.io/configmap/49162cd5-4038-4e1b-bbd2-26fbdace96aa-v4-0-config-system-service-ca\") pod \"oauth-openshift-6dd57c659d-b5n72\" (UID: \"49162cd5-4038-4e1b-bbd2-26fbdace96aa\") " pod="openshift-authentication/oauth-openshift-6dd57c659d-b5n72" Mar 18 09:04:39.721637 master-0 kubenswrapper[28766]: I0318 09:04:39.721610 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49162cd5-4038-4e1b-bbd2-26fbdace96aa-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-6dd57c659d-b5n72\" (UID: \"49162cd5-4038-4e1b-bbd2-26fbdace96aa\") " pod="openshift-authentication/oauth-openshift-6dd57c659d-b5n72" Mar 18 09:04:39.721765 master-0 kubenswrapper[28766]: I0318 09:04:39.721740 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/49162cd5-4038-4e1b-bbd2-26fbdace96aa-audit-dir\") pod \"oauth-openshift-6dd57c659d-b5n72\" (UID: \"49162cd5-4038-4e1b-bbd2-26fbdace96aa\") " pod="openshift-authentication/oauth-openshift-6dd57c659d-b5n72" Mar 18 09:04:39.721897 master-0 kubenswrapper[28766]: I0318 09:04:39.721863 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/49162cd5-4038-4e1b-bbd2-26fbdace96aa-audit-dir\") pod \"oauth-openshift-6dd57c659d-b5n72\" (UID: \"49162cd5-4038-4e1b-bbd2-26fbdace96aa\") " pod="openshift-authentication/oauth-openshift-6dd57c659d-b5n72" Mar 18 09:04:39.722021 master-0 kubenswrapper[28766]: I0318 09:04:39.721985 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49162cd5-4038-4e1b-bbd2-26fbdace96aa-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-6dd57c659d-b5n72\" (UID: \"49162cd5-4038-4e1b-bbd2-26fbdace96aa\") " 
pod="openshift-authentication/oauth-openshift-6dd57c659d-b5n72" Mar 18 09:04:39.722312 master-0 kubenswrapper[28766]: I0318 09:04:39.722287 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49162cd5-4038-4e1b-bbd2-26fbdace96aa-v4-0-config-system-service-ca\") pod \"oauth-openshift-6dd57c659d-b5n72\" (UID: \"49162cd5-4038-4e1b-bbd2-26fbdace96aa\") " pod="openshift-authentication/oauth-openshift-6dd57c659d-b5n72" Mar 18 09:04:39.722396 master-0 kubenswrapper[28766]: I0318 09:04:39.722298 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49162cd5-4038-4e1b-bbd2-26fbdace96aa-v4-0-config-user-template-login\") pod \"oauth-openshift-6dd57c659d-b5n72\" (UID: \"49162cd5-4038-4e1b-bbd2-26fbdace96aa\") " pod="openshift-authentication/oauth-openshift-6dd57c659d-b5n72" Mar 18 09:04:39.722508 master-0 kubenswrapper[28766]: I0318 09:04:39.722490 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49162cd5-4038-4e1b-bbd2-26fbdace96aa-v4-0-config-system-cliconfig\") pod \"oauth-openshift-6dd57c659d-b5n72\" (UID: \"49162cd5-4038-4e1b-bbd2-26fbdace96aa\") " pod="openshift-authentication/oauth-openshift-6dd57c659d-b5n72" Mar 18 09:04:39.722653 master-0 kubenswrapper[28766]: I0318 09:04:39.722633 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49162cd5-4038-4e1b-bbd2-26fbdace96aa-v4-0-config-system-router-certs\") pod \"oauth-openshift-6dd57c659d-b5n72\" (UID: \"49162cd5-4038-4e1b-bbd2-26fbdace96aa\") " pod="openshift-authentication/oauth-openshift-6dd57c659d-b5n72" Mar 18 09:04:39.722786 master-0 kubenswrapper[28766]: I0318 09:04:39.722766 28766 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49162cd5-4038-4e1b-bbd2-26fbdace96aa-v4-0-config-system-serving-cert\") pod \"oauth-openshift-6dd57c659d-b5n72\" (UID: \"49162cd5-4038-4e1b-bbd2-26fbdace96aa\") " pod="openshift-authentication/oauth-openshift-6dd57c659d-b5n72" Mar 18 09:04:39.723110 master-0 kubenswrapper[28766]: I0318 09:04:39.723047 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49162cd5-4038-4e1b-bbd2-26fbdace96aa-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-6dd57c659d-b5n72\" (UID: \"49162cd5-4038-4e1b-bbd2-26fbdace96aa\") " pod="openshift-authentication/oauth-openshift-6dd57c659d-b5n72" Mar 18 09:04:39.723186 master-0 kubenswrapper[28766]: E0318 09:04:39.722686 28766 configmap.go:193] Couldn't get configMap openshift-authentication/v4-0-config-system-cliconfig: configmap "v4-0-config-system-cliconfig" not found Mar 18 09:04:39.723186 master-0 kubenswrapper[28766]: I0318 09:04:39.723137 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49162cd5-4038-4e1b-bbd2-26fbdace96aa-audit-policies\") pod \"oauth-openshift-6dd57c659d-b5n72\" (UID: \"49162cd5-4038-4e1b-bbd2-26fbdace96aa\") " pod="openshift-authentication/oauth-openshift-6dd57c659d-b5n72" Mar 18 09:04:39.723274 master-0 kubenswrapper[28766]: E0318 09:04:39.723203 28766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/49162cd5-4038-4e1b-bbd2-26fbdace96aa-v4-0-config-system-cliconfig podName:49162cd5-4038-4e1b-bbd2-26fbdace96aa nodeName:}" failed. No retries permitted until 2026-03-18 09:04:40.223175761 +0000 UTC m=+33.237434527 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "v4-0-config-system-cliconfig" (UniqueName: "kubernetes.io/configmap/49162cd5-4038-4e1b-bbd2-26fbdace96aa-v4-0-config-system-cliconfig") pod "oauth-openshift-6dd57c659d-b5n72" (UID: "49162cd5-4038-4e1b-bbd2-26fbdace96aa") : configmap "v4-0-config-system-cliconfig" not found Mar 18 09:04:39.724399 master-0 kubenswrapper[28766]: I0318 09:04:39.724368 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49162cd5-4038-4e1b-bbd2-26fbdace96aa-v4-0-config-user-template-error\") pod \"oauth-openshift-6dd57c659d-b5n72\" (UID: \"49162cd5-4038-4e1b-bbd2-26fbdace96aa\") " pod="openshift-authentication/oauth-openshift-6dd57c659d-b5n72" Mar 18 09:04:39.724802 master-0 kubenswrapper[28766]: I0318 09:04:39.724759 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49162cd5-4038-4e1b-bbd2-26fbdace96aa-v4-0-config-system-session\") pod \"oauth-openshift-6dd57c659d-b5n72\" (UID: \"49162cd5-4038-4e1b-bbd2-26fbdace96aa\") " pod="openshift-authentication/oauth-openshift-6dd57c659d-b5n72" Mar 18 09:04:39.724896 master-0 kubenswrapper[28766]: I0318 09:04:39.724869 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49162cd5-4038-4e1b-bbd2-26fbdace96aa-v4-0-config-user-template-login\") pod \"oauth-openshift-6dd57c659d-b5n72\" (UID: \"49162cd5-4038-4e1b-bbd2-26fbdace96aa\") " pod="openshift-authentication/oauth-openshift-6dd57c659d-b5n72" Mar 18 09:04:39.725282 master-0 kubenswrapper[28766]: I0318 09:04:39.725228 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49162cd5-4038-4e1b-bbd2-26fbdace96aa-v4-0-config-user-template-provider-selection\") pod 
\"oauth-openshift-6dd57c659d-b5n72\" (UID: \"49162cd5-4038-4e1b-bbd2-26fbdace96aa\") " pod="openshift-authentication/oauth-openshift-6dd57c659d-b5n72" Mar 18 09:04:39.725973 master-0 kubenswrapper[28766]: I0318 09:04:39.725956 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49162cd5-4038-4e1b-bbd2-26fbdace96aa-v4-0-config-system-serving-cert\") pod \"oauth-openshift-6dd57c659d-b5n72\" (UID: \"49162cd5-4038-4e1b-bbd2-26fbdace96aa\") " pod="openshift-authentication/oauth-openshift-6dd57c659d-b5n72" Mar 18 09:04:39.727483 master-0 kubenswrapper[28766]: I0318 09:04:39.727443 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49162cd5-4038-4e1b-bbd2-26fbdace96aa-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-6dd57c659d-b5n72\" (UID: \"49162cd5-4038-4e1b-bbd2-26fbdace96aa\") " pod="openshift-authentication/oauth-openshift-6dd57c659d-b5n72" Mar 18 09:04:39.728325 master-0 kubenswrapper[28766]: I0318 09:04:39.728291 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49162cd5-4038-4e1b-bbd2-26fbdace96aa-v4-0-config-system-router-certs\") pod \"oauth-openshift-6dd57c659d-b5n72\" (UID: \"49162cd5-4038-4e1b-bbd2-26fbdace96aa\") " pod="openshift-authentication/oauth-openshift-6dd57c659d-b5n72" Mar 18 09:04:39.743898 master-0 kubenswrapper[28766]: I0318 09:04:39.743840 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w44lb\" (UniqueName: \"kubernetes.io/projected/49162cd5-4038-4e1b-bbd2-26fbdace96aa-kube-api-access-w44lb\") pod \"oauth-openshift-6dd57c659d-b5n72\" (UID: \"49162cd5-4038-4e1b-bbd2-26fbdace96aa\") " pod="openshift-authentication/oauth-openshift-6dd57c659d-b5n72" Mar 18 09:04:40.233121 master-0 
kubenswrapper[28766]: I0318 09:04:40.230939 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49162cd5-4038-4e1b-bbd2-26fbdace96aa-v4-0-config-system-cliconfig\") pod \"oauth-openshift-6dd57c659d-b5n72\" (UID: \"49162cd5-4038-4e1b-bbd2-26fbdace96aa\") " pod="openshift-authentication/oauth-openshift-6dd57c659d-b5n72" Mar 18 09:04:40.233121 master-0 kubenswrapper[28766]: E0318 09:04:40.231276 28766 configmap.go:193] Couldn't get configMap openshift-authentication/v4-0-config-system-cliconfig: configmap "v4-0-config-system-cliconfig" not found Mar 18 09:04:40.233121 master-0 kubenswrapper[28766]: E0318 09:04:40.231354 28766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/49162cd5-4038-4e1b-bbd2-26fbdace96aa-v4-0-config-system-cliconfig podName:49162cd5-4038-4e1b-bbd2-26fbdace96aa nodeName:}" failed. No retries permitted until 2026-03-18 09:04:41.231329508 +0000 UTC m=+34.245588194 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "v4-0-config-system-cliconfig" (UniqueName: "kubernetes.io/configmap/49162cd5-4038-4e1b-bbd2-26fbdace96aa-v4-0-config-system-cliconfig") pod "oauth-openshift-6dd57c659d-b5n72" (UID: "49162cd5-4038-4e1b-bbd2-26fbdace96aa") : configmap "v4-0-config-system-cliconfig" not found Mar 18 09:04:41.078453 master-0 kubenswrapper[28766]: I0318 09:04:41.078354 28766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-monitoring/metrics-server-59f88c66c8-z4c2f" Mar 18 09:04:41.084065 master-0 kubenswrapper[28766]: I0318 09:04:41.083519 28766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/metrics-server-59f88c66c8-z4c2f" Mar 18 09:04:41.247964 master-0 kubenswrapper[28766]: I0318 09:04:41.247898 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49162cd5-4038-4e1b-bbd2-26fbdace96aa-v4-0-config-system-cliconfig\") pod \"oauth-openshift-6dd57c659d-b5n72\" (UID: \"49162cd5-4038-4e1b-bbd2-26fbdace96aa\") " pod="openshift-authentication/oauth-openshift-6dd57c659d-b5n72" Mar 18 09:04:41.248227 master-0 kubenswrapper[28766]: E0318 09:04:41.248021 28766 configmap.go:193] Couldn't get configMap openshift-authentication/v4-0-config-system-cliconfig: configmap "v4-0-config-system-cliconfig" not found Mar 18 09:04:41.248227 master-0 kubenswrapper[28766]: E0318 09:04:41.248084 28766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/49162cd5-4038-4e1b-bbd2-26fbdace96aa-v4-0-config-system-cliconfig podName:49162cd5-4038-4e1b-bbd2-26fbdace96aa nodeName:}" failed. No retries permitted until 2026-03-18 09:04:43.248068254 +0000 UTC m=+36.262326920 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "v4-0-config-system-cliconfig" (UniqueName: "kubernetes.io/configmap/49162cd5-4038-4e1b-bbd2-26fbdace96aa-v4-0-config-system-cliconfig") pod "oauth-openshift-6dd57c659d-b5n72" (UID: "49162cd5-4038-4e1b-bbd2-26fbdace96aa") : configmap "v4-0-config-system-cliconfig" not found Mar 18 09:04:42.796630 master-0 kubenswrapper[28766]: I0318 09:04:42.796549 28766 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"] Mar 18 09:04:42.797526 master-0 kubenswrapper[28766]: I0318 09:04:42.796875 28766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" podUID="8e7a82869988463543d3d8dd1f0b5fe3" containerName="startup-monitor" containerID="cri-o://3fca4409620121a7f43cfb37414e381868422175702286563fa7900f579aad87" gracePeriod=5 Mar 18 09:04:43.280045 master-0 kubenswrapper[28766]: I0318 09:04:43.279995 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49162cd5-4038-4e1b-bbd2-26fbdace96aa-v4-0-config-system-cliconfig\") pod \"oauth-openshift-6dd57c659d-b5n72\" (UID: \"49162cd5-4038-4e1b-bbd2-26fbdace96aa\") " pod="openshift-authentication/oauth-openshift-6dd57c659d-b5n72" Mar 18 09:04:43.280668 master-0 kubenswrapper[28766]: I0318 09:04:43.280639 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49162cd5-4038-4e1b-bbd2-26fbdace96aa-v4-0-config-system-cliconfig\") pod \"oauth-openshift-6dd57c659d-b5n72\" (UID: \"49162cd5-4038-4e1b-bbd2-26fbdace96aa\") " pod="openshift-authentication/oauth-openshift-6dd57c659d-b5n72" Mar 18 09:04:43.427530 master-0 kubenswrapper[28766]: I0318 09:04:43.427466 28766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-6dd57c659d-b5n72" Mar 18 09:04:43.743873 master-0 kubenswrapper[28766]: I0318 09:04:43.743770 28766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-6dd57c659d-b5n72"] Mar 18 09:04:43.870102 master-0 kubenswrapper[28766]: I0318 09:04:43.869968 28766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-6dd57c659d-b5n72" event={"ID":"49162cd5-4038-4e1b-bbd2-26fbdace96aa","Type":"ContainerStarted","Data":"c8332aa368fd736bbc1e61a8c1d6ac3346d4454be3104b128284b622d9a88886"} Mar 18 09:04:46.539877 master-0 kubenswrapper[28766]: I0318 09:04:46.539422 28766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/monitoring-plugin-9f7b5f8d5-t5nk8"] Mar 18 09:04:46.539877 master-0 kubenswrapper[28766]: E0318 09:04:46.539756 28766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e7a82869988463543d3d8dd1f0b5fe3" containerName="startup-monitor" Mar 18 09:04:46.539877 master-0 kubenswrapper[28766]: I0318 09:04:46.539772 28766 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e7a82869988463543d3d8dd1f0b5fe3" containerName="startup-monitor" Mar 18 09:04:46.540656 master-0 kubenswrapper[28766]: I0318 09:04:46.539949 28766 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e7a82869988463543d3d8dd1f0b5fe3" containerName="startup-monitor" Mar 18 09:04:46.540656 master-0 kubenswrapper[28766]: I0318 09:04:46.540496 28766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/monitoring-plugin-9f7b5f8d5-t5nk8" Mar 18 09:04:46.545882 master-0 kubenswrapper[28766]: I0318 09:04:46.545033 28766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"default-dockercfg-prkn7" Mar 18 09:04:46.545882 master-0 kubenswrapper[28766]: I0318 09:04:46.545050 28766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"monitoring-plugin-cert" Mar 18 09:04:46.587877 master-0 kubenswrapper[28766]: I0318 09:04:46.586824 28766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/monitoring-plugin-9f7b5f8d5-t5nk8"] Mar 18 09:04:46.637874 master-0 kubenswrapper[28766]: I0318 09:04:46.637530 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/b0fd6d5a-c72f-4c6d-ad2a-5425fb010fcb-monitoring-plugin-cert\") pod \"monitoring-plugin-9f7b5f8d5-t5nk8\" (UID: \"b0fd6d5a-c72f-4c6d-ad2a-5425fb010fcb\") " pod="openshift-monitoring/monitoring-plugin-9f7b5f8d5-t5nk8" Mar 18 09:04:46.738880 master-0 kubenswrapper[28766]: I0318 09:04:46.738617 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/b0fd6d5a-c72f-4c6d-ad2a-5425fb010fcb-monitoring-plugin-cert\") pod \"monitoring-plugin-9f7b5f8d5-t5nk8\" (UID: \"b0fd6d5a-c72f-4c6d-ad2a-5425fb010fcb\") " pod="openshift-monitoring/monitoring-plugin-9f7b5f8d5-t5nk8" Mar 18 09:04:46.759879 master-0 kubenswrapper[28766]: I0318 09:04:46.759092 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/b0fd6d5a-c72f-4c6d-ad2a-5425fb010fcb-monitoring-plugin-cert\") pod \"monitoring-plugin-9f7b5f8d5-t5nk8\" (UID: \"b0fd6d5a-c72f-4c6d-ad2a-5425fb010fcb\") " pod="openshift-monitoring/monitoring-plugin-9f7b5f8d5-t5nk8" Mar 18 
09:04:46.873611 master-0 kubenswrapper[28766]: I0318 09:04:46.873480 28766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/monitoring-plugin-9f7b5f8d5-t5nk8" Mar 18 09:04:47.384324 master-0 kubenswrapper[28766]: I0318 09:04:47.384261 28766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/monitoring-plugin-9f7b5f8d5-t5nk8"] Mar 18 09:04:47.393098 master-0 kubenswrapper[28766]: W0318 09:04:47.393035 28766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb0fd6d5a_c72f_4c6d_ad2a_5425fb010fcb.slice/crio-774fb1558407952442419bbcfd97f215aadcbb38e81445ede8bd284ce89576a8 WatchSource:0}: Error finding container 774fb1558407952442419bbcfd97f215aadcbb38e81445ede8bd284ce89576a8: Status 404 returned error can't find the container with id 774fb1558407952442419bbcfd97f215aadcbb38e81445ede8bd284ce89576a8 Mar 18 09:04:47.899482 master-0 kubenswrapper[28766]: I0318 09:04:47.899384 28766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/monitoring-plugin-9f7b5f8d5-t5nk8" event={"ID":"b0fd6d5a-c72f-4c6d-ad2a-5425fb010fcb","Type":"ContainerStarted","Data":"774fb1558407952442419bbcfd97f215aadcbb38e81445ede8bd284ce89576a8"} Mar 18 09:04:47.901668 master-0 kubenswrapper[28766]: I0318 09:04:47.901567 28766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-master-0_8e7a82869988463543d3d8dd1f0b5fe3/startup-monitor/0.log" Mar 18 09:04:47.901668 master-0 kubenswrapper[28766]: I0318 09:04:47.901632 28766 generic.go:334] "Generic (PLEG): container finished" podID="8e7a82869988463543d3d8dd1f0b5fe3" containerID="3fca4409620121a7f43cfb37414e381868422175702286563fa7900f579aad87" exitCode=137 Mar 18 09:04:47.907244 master-0 kubenswrapper[28766]: I0318 09:04:47.906758 28766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-authentication/oauth-openshift-6dd57c659d-b5n72" event={"ID":"49162cd5-4038-4e1b-bbd2-26fbdace96aa","Type":"ContainerStarted","Data":"642f6822ad953d936ce5231469d83c2c8abd87f3dda405f474692b8e182b9839"} Mar 18 09:04:47.907685 master-0 kubenswrapper[28766]: I0318 09:04:47.907600 28766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-6dd57c659d-b5n72" Mar 18 09:04:47.924720 master-0 kubenswrapper[28766]: I0318 09:04:47.921546 28766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-6dd57c659d-b5n72" Mar 18 09:04:47.940316 master-0 kubenswrapper[28766]: I0318 09:04:47.939869 28766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-6dd57c659d-b5n72" podStartSLOduration=5.801123707 podStartE2EDuration="8.939826903s" podCreationTimestamp="2026-03-18 09:04:39 +0000 UTC" firstStartedPulling="2026-03-18 09:04:43.78618471 +0000 UTC m=+36.800443376" lastFinishedPulling="2026-03-18 09:04:46.924887906 +0000 UTC m=+39.939146572" observedRunningTime="2026-03-18 09:04:47.937804728 +0000 UTC m=+40.952063404" watchObservedRunningTime="2026-03-18 09:04:47.939826903 +0000 UTC m=+40.954085569" Mar 18 09:04:48.368845 master-0 kubenswrapper[28766]: I0318 09:04:48.368810 28766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-master-0_8e7a82869988463543d3d8dd1f0b5fe3/startup-monitor/0.log" Mar 18 09:04:48.369145 master-0 kubenswrapper[28766]: I0318 09:04:48.368902 28766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 09:04:48.466933 master-0 kubenswrapper[28766]: I0318 09:04:48.466863 28766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/8e7a82869988463543d3d8dd1f0b5fe3-var-lock\") pod \"8e7a82869988463543d3d8dd1f0b5fe3\" (UID: \"8e7a82869988463543d3d8dd1f0b5fe3\") " Mar 18 09:04:48.466933 master-0 kubenswrapper[28766]: I0318 09:04:48.466928 28766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/8e7a82869988463543d3d8dd1f0b5fe3-manifests\") pod \"8e7a82869988463543d3d8dd1f0b5fe3\" (UID: \"8e7a82869988463543d3d8dd1f0b5fe3\") " Mar 18 09:04:48.467202 master-0 kubenswrapper[28766]: I0318 09:04:48.467079 28766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/8e7a82869988463543d3d8dd1f0b5fe3-pod-resource-dir\") pod \"8e7a82869988463543d3d8dd1f0b5fe3\" (UID: \"8e7a82869988463543d3d8dd1f0b5fe3\") " Mar 18 09:04:48.467202 master-0 kubenswrapper[28766]: I0318 09:04:48.467133 28766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/8e7a82869988463543d3d8dd1f0b5fe3-var-log\") pod \"8e7a82869988463543d3d8dd1f0b5fe3\" (UID: \"8e7a82869988463543d3d8dd1f0b5fe3\") " Mar 18 09:04:48.467202 master-0 kubenswrapper[28766]: I0318 09:04:48.467156 28766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/8e7a82869988463543d3d8dd1f0b5fe3-resource-dir\") pod \"8e7a82869988463543d3d8dd1f0b5fe3\" (UID: \"8e7a82869988463543d3d8dd1f0b5fe3\") " Mar 18 09:04:48.467293 master-0 kubenswrapper[28766]: I0318 09:04:48.467221 28766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/host-path/8e7a82869988463543d3d8dd1f0b5fe3-manifests" (OuterVolumeSpecName: "manifests") pod "8e7a82869988463543d3d8dd1f0b5fe3" (UID: "8e7a82869988463543d3d8dd1f0b5fe3"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 09:04:48.467324 master-0 kubenswrapper[28766]: I0318 09:04:48.467287 28766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8e7a82869988463543d3d8dd1f0b5fe3-var-log" (OuterVolumeSpecName: "var-log") pod "8e7a82869988463543d3d8dd1f0b5fe3" (UID: "8e7a82869988463543d3d8dd1f0b5fe3"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 09:04:48.467387 master-0 kubenswrapper[28766]: I0318 09:04:48.467355 28766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8e7a82869988463543d3d8dd1f0b5fe3-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "8e7a82869988463543d3d8dd1f0b5fe3" (UID: "8e7a82869988463543d3d8dd1f0b5fe3"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 09:04:48.468005 master-0 kubenswrapper[28766]: I0318 09:04:48.467954 28766 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/8e7a82869988463543d3d8dd1f0b5fe3-manifests\") on node \"master-0\" DevicePath \"\"" Mar 18 09:04:48.468005 master-0 kubenswrapper[28766]: I0318 09:04:48.467990 28766 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/8e7a82869988463543d3d8dd1f0b5fe3-var-log\") on node \"master-0\" DevicePath \"\"" Mar 18 09:04:48.468005 master-0 kubenswrapper[28766]: I0318 09:04:48.468001 28766 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/8e7a82869988463543d3d8dd1f0b5fe3-resource-dir\") on node \"master-0\" DevicePath \"\"" Mar 18 09:04:48.468168 master-0 kubenswrapper[28766]: I0318 09:04:48.468049 28766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8e7a82869988463543d3d8dd1f0b5fe3-var-lock" (OuterVolumeSpecName: "var-lock") pod "8e7a82869988463543d3d8dd1f0b5fe3" (UID: "8e7a82869988463543d3d8dd1f0b5fe3"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 09:04:48.473967 master-0 kubenswrapper[28766]: I0318 09:04:48.473913 28766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8e7a82869988463543d3d8dd1f0b5fe3-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "8e7a82869988463543d3d8dd1f0b5fe3" (UID: "8e7a82869988463543d3d8dd1f0b5fe3"). InnerVolumeSpecName "pod-resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 09:04:48.569305 master-0 kubenswrapper[28766]: I0318 09:04:48.569198 28766 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/8e7a82869988463543d3d8dd1f0b5fe3-pod-resource-dir\") on node \"master-0\" DevicePath \"\"" Mar 18 09:04:48.569305 master-0 kubenswrapper[28766]: I0318 09:04:48.569273 28766 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/8e7a82869988463543d3d8dd1f0b5fe3-var-lock\") on node \"master-0\" DevicePath \"\"" Mar 18 09:04:48.915541 master-0 kubenswrapper[28766]: I0318 09:04:48.915500 28766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-master-0_8e7a82869988463543d3d8dd1f0b5fe3/startup-monitor/0.log" Mar 18 09:04:48.916130 master-0 kubenswrapper[28766]: I0318 09:04:48.915634 28766 scope.go:117] "RemoveContainer" containerID="3fca4409620121a7f43cfb37414e381868422175702286563fa7900f579aad87" Mar 18 09:04:48.916130 master-0 kubenswrapper[28766]: I0318 09:04:48.915683 28766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 09:04:49.246714 master-0 kubenswrapper[28766]: I0318 09:04:49.246647 28766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8e7a82869988463543d3d8dd1f0b5fe3" path="/var/lib/kubelet/pods/8e7a82869988463543d3d8dd1f0b5fe3/volumes" Mar 18 09:04:50.836977 master-0 kubenswrapper[28766]: I0318 09:04:50.836880 28766 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","pode0d127be-2d13-449b-915b-2d49052baf02"] err="unable to destroy cgroup paths for cgroup [kubepods pode0d127be-2d13-449b-915b-2d49052baf02] : Timed out while waiting for systemd to remove kubepods-pode0d127be_2d13_449b_915b_2d49052baf02.slice" Mar 18 09:04:51.947463 master-0 kubenswrapper[28766]: I0318 09:04:51.947397 28766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/monitoring-plugin-9f7b5f8d5-t5nk8" event={"ID":"b0fd6d5a-c72f-4c6d-ad2a-5425fb010fcb","Type":"ContainerStarted","Data":"6257999643e7ff9aaf91ebbda81c9453ea1703008d192fce355e3aec3472413e"} Mar 18 09:04:51.948386 master-0 kubenswrapper[28766]: I0318 09:04:51.948096 28766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/monitoring-plugin-9f7b5f8d5-t5nk8" Mar 18 09:04:51.954695 master-0 kubenswrapper[28766]: I0318 09:04:51.954665 28766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/monitoring-plugin-9f7b5f8d5-t5nk8" Mar 18 09:04:51.976331 master-0 kubenswrapper[28766]: I0318 09:04:51.976248 28766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/monitoring-plugin-9f7b5f8d5-t5nk8" podStartSLOduration=2.469460531 podStartE2EDuration="5.976195939s" podCreationTimestamp="2026-03-18 09:04:46 +0000 UTC" firstStartedPulling="2026-03-18 09:04:47.39528867 +0000 UTC m=+40.409547356" lastFinishedPulling="2026-03-18 09:04:50.902024098 +0000 
UTC m=+43.916282764" observedRunningTime="2026-03-18 09:04:51.971598665 +0000 UTC m=+44.985857351" watchObservedRunningTime="2026-03-18 09:04:51.976195939 +0000 UTC m=+44.990454605" Mar 18 09:04:52.518715 master-0 kubenswrapper[28766]: I0318 09:04:52.518655 28766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 18 09:04:54.233263 master-0 kubenswrapper[28766]: I0318 09:04:54.233211 28766 scope.go:117] "RemoveContainer" containerID="336d013c9841ac504e593a5fd8d356f16422167dab08d0fec9503ddf83d85897" Mar 18 09:04:54.970780 master-0 kubenswrapper[28766]: I0318 09:04:54.970747 28766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console-operator_console-operator-76b6568d85-hmnwh_c7d313bd-ea1e-4ebf-a6a9-4e17ae4e4075/console-operator/1.log" Mar 18 09:04:54.971148 master-0 kubenswrapper[28766]: I0318 09:04:54.971120 28766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-76b6568d85-hmnwh" event={"ID":"c7d313bd-ea1e-4ebf-a6a9-4e17ae4e4075","Type":"ContainerStarted","Data":"6fe62c49e9ec8ae7ed27faaa5d81b91d6a1af2ca77e0104acc28ab708886b9d7"} Mar 18 09:04:54.972583 master-0 kubenswrapper[28766]: I0318 09:04:54.972554 28766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-76b6568d85-hmnwh" Mar 18 09:04:55.014406 master-0 kubenswrapper[28766]: I0318 09:04:55.014299 28766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-76b6568d85-hmnwh" podStartSLOduration=24.138505091 podStartE2EDuration="27.01427885s" podCreationTimestamp="2026-03-18 09:04:28 +0000 UTC" firstStartedPulling="2026-03-18 09:04:29.908601994 +0000 UTC m=+22.922860700" lastFinishedPulling="2026-03-18 09:04:32.784375793 +0000 UTC m=+25.798634459" observedRunningTime="2026-03-18 09:04:55.008282008 +0000 UTC m=+48.022540684" 
watchObservedRunningTime="2026-03-18 09:04:55.01427885 +0000 UTC m=+48.028537526" Mar 18 09:04:55.150766 master-0 kubenswrapper[28766]: I0318 09:04:55.150702 28766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-76b6568d85-hmnwh" Mar 18 09:04:55.349738 master-0 kubenswrapper[28766]: I0318 09:04:55.349653 28766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-66b8ffb895-mjnxk"] Mar 18 09:04:55.350798 master-0 kubenswrapper[28766]: I0318 09:04:55.350766 28766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-66b8ffb895-mjnxk" Mar 18 09:04:55.354656 master-0 kubenswrapper[28766]: I0318 09:04:55.354630 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Mar 18 09:04:55.354990 master-0 kubenswrapper[28766]: I0318 09:04:55.354977 28766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-qpbvp" Mar 18 09:04:55.355248 master-0 kubenswrapper[28766]: I0318 09:04:55.355235 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Mar 18 09:04:55.373528 master-0 kubenswrapper[28766]: I0318 09:04:55.373475 28766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-66b8ffb895-mjnxk"] Mar 18 09:04:55.507385 master-0 kubenswrapper[28766]: I0318 09:04:55.507329 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l8vpb\" (UniqueName: \"kubernetes.io/projected/0aeda1f0-6438-4d96-becd-e0cd833e99d5-kube-api-access-l8vpb\") pod \"downloads-66b8ffb895-mjnxk\" (UID: \"0aeda1f0-6438-4d96-becd-e0cd833e99d5\") " pod="openshift-console/downloads-66b8ffb895-mjnxk" Mar 18 09:04:55.608971 master-0 kubenswrapper[28766]: I0318 09:04:55.608806 28766 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-l8vpb\" (UniqueName: \"kubernetes.io/projected/0aeda1f0-6438-4d96-becd-e0cd833e99d5-kube-api-access-l8vpb\") pod \"downloads-66b8ffb895-mjnxk\" (UID: \"0aeda1f0-6438-4d96-becd-e0cd833e99d5\") " pod="openshift-console/downloads-66b8ffb895-mjnxk" Mar 18 09:04:55.626578 master-0 kubenswrapper[28766]: I0318 09:04:55.626527 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l8vpb\" (UniqueName: \"kubernetes.io/projected/0aeda1f0-6438-4d96-becd-e0cd833e99d5-kube-api-access-l8vpb\") pod \"downloads-66b8ffb895-mjnxk\" (UID: \"0aeda1f0-6438-4d96-becd-e0cd833e99d5\") " pod="openshift-console/downloads-66b8ffb895-mjnxk" Mar 18 09:04:55.673185 master-0 kubenswrapper[28766]: I0318 09:04:55.673124 28766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-66b8ffb895-mjnxk" Mar 18 09:04:56.125046 master-0 kubenswrapper[28766]: I0318 09:04:56.124951 28766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-66b8ffb895-mjnxk"] Mar 18 09:04:56.136143 master-0 kubenswrapper[28766]: W0318 09:04:56.136100 28766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0aeda1f0_6438_4d96_becd_e0cd833e99d5.slice/crio-80f99e28905a991ce032adf8f1429c5d0362faf13669fe6062848a489419ca88 WatchSource:0}: Error finding container 80f99e28905a991ce032adf8f1429c5d0362faf13669fe6062848a489419ca88: Status 404 returned error can't find the container with id 80f99e28905a991ce032adf8f1429c5d0362faf13669fe6062848a489419ca88 Mar 18 09:04:56.993777 master-0 kubenswrapper[28766]: I0318 09:04:56.993594 28766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-66b8ffb895-mjnxk" 
event={"ID":"0aeda1f0-6438-4d96-becd-e0cd833e99d5","Type":"ContainerStarted","Data":"80f99e28905a991ce032adf8f1429c5d0362faf13669fe6062848a489419ca88"} Mar 18 09:04:57.342874 master-0 kubenswrapper[28766]: I0318 09:04:57.342686 28766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6448dc88d8-cnd9q"] Mar 18 09:04:57.343293 master-0 kubenswrapper[28766]: I0318 09:04:57.343037 28766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-6448dc88d8-cnd9q" podUID="4cc14de2-59e8-49e1-9eeb-f87c5e9d8a75" containerName="controller-manager" containerID="cri-o://06e4ded156520e1a9b65d50f0935234c2ea91c89d6f3a493daf8d002e409884c" gracePeriod=30 Mar 18 09:04:57.453915 master-0 kubenswrapper[28766]: I0318 09:04:57.451427 28766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-75749f878-qxnvp"] Mar 18 09:04:57.453915 master-0 kubenswrapper[28766]: I0318 09:04:57.451708 28766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-75749f878-qxnvp" podUID="04e23989-853e-4b49-ba0f-1961d64ae3a3" containerName="route-controller-manager" containerID="cri-o://750047c7c110d6b292474a23cfe2eb52c226ed85a95e9ef9327042e06e4908dc" gracePeriod=30 Mar 18 09:04:57.872972 master-0 kubenswrapper[28766]: I0318 09:04:57.872919 28766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6448dc88d8-cnd9q" Mar 18 09:04:57.946619 master-0 kubenswrapper[28766]: I0318 09:04:57.946541 28766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4cc14de2-59e8-49e1-9eeb-f87c5e9d8a75-serving-cert\") pod \"4cc14de2-59e8-49e1-9eeb-f87c5e9d8a75\" (UID: \"4cc14de2-59e8-49e1-9eeb-f87c5e9d8a75\") " Mar 18 09:04:57.947140 master-0 kubenswrapper[28766]: I0318 09:04:57.946692 28766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4cc14de2-59e8-49e1-9eeb-f87c5e9d8a75-proxy-ca-bundles\") pod \"4cc14de2-59e8-49e1-9eeb-f87c5e9d8a75\" (UID: \"4cc14de2-59e8-49e1-9eeb-f87c5e9d8a75\") " Mar 18 09:04:57.947140 master-0 kubenswrapper[28766]: I0318 09:04:57.946784 28766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4cc14de2-59e8-49e1-9eeb-f87c5e9d8a75-client-ca\") pod \"4cc14de2-59e8-49e1-9eeb-f87c5e9d8a75\" (UID: \"4cc14de2-59e8-49e1-9eeb-f87c5e9d8a75\") " Mar 18 09:04:57.947140 master-0 kubenswrapper[28766]: I0318 09:04:57.946877 28766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4cc14de2-59e8-49e1-9eeb-f87c5e9d8a75-config\") pod \"4cc14de2-59e8-49e1-9eeb-f87c5e9d8a75\" (UID: \"4cc14de2-59e8-49e1-9eeb-f87c5e9d8a75\") " Mar 18 09:04:57.947363 master-0 kubenswrapper[28766]: I0318 09:04:57.947164 28766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2m5wf\" (UniqueName: \"kubernetes.io/projected/4cc14de2-59e8-49e1-9eeb-f87c5e9d8a75-kube-api-access-2m5wf\") pod \"4cc14de2-59e8-49e1-9eeb-f87c5e9d8a75\" (UID: \"4cc14de2-59e8-49e1-9eeb-f87c5e9d8a75\") " Mar 18 09:04:57.947363 master-0 kubenswrapper[28766]: I0318 09:04:57.946560 
28766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-75749f878-qxnvp" Mar 18 09:04:57.947820 master-0 kubenswrapper[28766]: I0318 09:04:57.947763 28766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4cc14de2-59e8-49e1-9eeb-f87c5e9d8a75-client-ca" (OuterVolumeSpecName: "client-ca") pod "4cc14de2-59e8-49e1-9eeb-f87c5e9d8a75" (UID: "4cc14de2-59e8-49e1-9eeb-f87c5e9d8a75"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 09:04:57.947994 master-0 kubenswrapper[28766]: I0318 09:04:57.947915 28766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4cc14de2-59e8-49e1-9eeb-f87c5e9d8a75-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "4cc14de2-59e8-49e1-9eeb-f87c5e9d8a75" (UID: "4cc14de2-59e8-49e1-9eeb-f87c5e9d8a75"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 09:04:57.948518 master-0 kubenswrapper[28766]: I0318 09:04:57.948395 28766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4cc14de2-59e8-49e1-9eeb-f87c5e9d8a75-config" (OuterVolumeSpecName: "config") pod "4cc14de2-59e8-49e1-9eeb-f87c5e9d8a75" (UID: "4cc14de2-59e8-49e1-9eeb-f87c5e9d8a75"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 09:04:57.949423 master-0 kubenswrapper[28766]: I0318 09:04:57.949395 28766 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4cc14de2-59e8-49e1-9eeb-f87c5e9d8a75-client-ca\") on node \"master-0\" DevicePath \"\"" Mar 18 09:04:57.949423 master-0 kubenswrapper[28766]: I0318 09:04:57.949419 28766 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4cc14de2-59e8-49e1-9eeb-f87c5e9d8a75-config\") on node \"master-0\" DevicePath \"\"" Mar 18 09:04:57.949423 master-0 kubenswrapper[28766]: I0318 09:04:57.949429 28766 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4cc14de2-59e8-49e1-9eeb-f87c5e9d8a75-proxy-ca-bundles\") on node \"master-0\" DevicePath \"\"" Mar 18 09:04:57.950961 master-0 kubenswrapper[28766]: I0318 09:04:57.950905 28766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4cc14de2-59e8-49e1-9eeb-f87c5e9d8a75-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "4cc14de2-59e8-49e1-9eeb-f87c5e9d8a75" (UID: "4cc14de2-59e8-49e1-9eeb-f87c5e9d8a75"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 09:04:57.954509 master-0 kubenswrapper[28766]: I0318 09:04:57.954403 28766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4cc14de2-59e8-49e1-9eeb-f87c5e9d8a75-kube-api-access-2m5wf" (OuterVolumeSpecName: "kube-api-access-2m5wf") pod "4cc14de2-59e8-49e1-9eeb-f87c5e9d8a75" (UID: "4cc14de2-59e8-49e1-9eeb-f87c5e9d8a75"). InnerVolumeSpecName "kube-api-access-2m5wf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 09:04:58.010497 master-0 kubenswrapper[28766]: I0318 09:04:58.009443 28766 generic.go:334] "Generic (PLEG): container finished" podID="04e23989-853e-4b49-ba0f-1961d64ae3a3" containerID="750047c7c110d6b292474a23cfe2eb52c226ed85a95e9ef9327042e06e4908dc" exitCode=0 Mar 18 09:04:58.010497 master-0 kubenswrapper[28766]: I0318 09:04:58.009555 28766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-75749f878-qxnvp" event={"ID":"04e23989-853e-4b49-ba0f-1961d64ae3a3","Type":"ContainerDied","Data":"750047c7c110d6b292474a23cfe2eb52c226ed85a95e9ef9327042e06e4908dc"} Mar 18 09:04:58.010497 master-0 kubenswrapper[28766]: I0318 09:04:58.009600 28766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-75749f878-qxnvp" event={"ID":"04e23989-853e-4b49-ba0f-1961d64ae3a3","Type":"ContainerDied","Data":"35bb7224fe9eca618f0100241589daaf5b90ad54413934d086e067f2a229eae2"} Mar 18 09:04:58.010497 master-0 kubenswrapper[28766]: I0318 09:04:58.009625 28766 scope.go:117] "RemoveContainer" containerID="750047c7c110d6b292474a23cfe2eb52c226ed85a95e9ef9327042e06e4908dc" Mar 18 09:04:58.010497 master-0 kubenswrapper[28766]: I0318 09:04:58.009539 28766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-75749f878-qxnvp" Mar 18 09:04:58.020827 master-0 kubenswrapper[28766]: I0318 09:04:58.018435 28766 generic.go:334] "Generic (PLEG): container finished" podID="4cc14de2-59e8-49e1-9eeb-f87c5e9d8a75" containerID="06e4ded156520e1a9b65d50f0935234c2ea91c89d6f3a493daf8d002e409884c" exitCode=0 Mar 18 09:04:58.020827 master-0 kubenswrapper[28766]: I0318 09:04:58.018535 28766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6448dc88d8-cnd9q" Mar 18 09:04:58.020827 master-0 kubenswrapper[28766]: I0318 09:04:58.018535 28766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6448dc88d8-cnd9q" event={"ID":"4cc14de2-59e8-49e1-9eeb-f87c5e9d8a75","Type":"ContainerDied","Data":"06e4ded156520e1a9b65d50f0935234c2ea91c89d6f3a493daf8d002e409884c"} Mar 18 09:04:58.020827 master-0 kubenswrapper[28766]: I0318 09:04:58.018722 28766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6448dc88d8-cnd9q" event={"ID":"4cc14de2-59e8-49e1-9eeb-f87c5e9d8a75","Type":"ContainerDied","Data":"d52b6a2cf90645c7d7adbd4e26631b5105d0e2c63496bcbe09fc57752e328d79"} Mar 18 09:04:58.035150 master-0 kubenswrapper[28766]: I0318 09:04:58.035089 28766 scope.go:117] "RemoveContainer" containerID="750047c7c110d6b292474a23cfe2eb52c226ed85a95e9ef9327042e06e4908dc" Mar 18 09:04:58.035695 master-0 kubenswrapper[28766]: E0318 09:04:58.035638 28766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"750047c7c110d6b292474a23cfe2eb52c226ed85a95e9ef9327042e06e4908dc\": container with ID starting with 750047c7c110d6b292474a23cfe2eb52c226ed85a95e9ef9327042e06e4908dc not found: ID does not exist" containerID="750047c7c110d6b292474a23cfe2eb52c226ed85a95e9ef9327042e06e4908dc" Mar 18 09:04:58.035790 master-0 kubenswrapper[28766]: I0318 09:04:58.035718 28766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"750047c7c110d6b292474a23cfe2eb52c226ed85a95e9ef9327042e06e4908dc"} err="failed to get container status \"750047c7c110d6b292474a23cfe2eb52c226ed85a95e9ef9327042e06e4908dc\": rpc error: code = NotFound desc = could not find container \"750047c7c110d6b292474a23cfe2eb52c226ed85a95e9ef9327042e06e4908dc\": container with ID starting with 
750047c7c110d6b292474a23cfe2eb52c226ed85a95e9ef9327042e06e4908dc not found: ID does not exist" Mar 18 09:04:58.035865 master-0 kubenswrapper[28766]: I0318 09:04:58.035797 28766 scope.go:117] "RemoveContainer" containerID="06e4ded156520e1a9b65d50f0935234c2ea91c89d6f3a493daf8d002e409884c" Mar 18 09:04:58.051216 master-0 kubenswrapper[28766]: I0318 09:04:58.051128 28766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/04e23989-853e-4b49-ba0f-1961d64ae3a3-config\") pod \"04e23989-853e-4b49-ba0f-1961d64ae3a3\" (UID: \"04e23989-853e-4b49-ba0f-1961d64ae3a3\") " Mar 18 09:04:58.051445 master-0 kubenswrapper[28766]: I0318 09:04:58.051266 28766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/04e23989-853e-4b49-ba0f-1961d64ae3a3-client-ca\") pod \"04e23989-853e-4b49-ba0f-1961d64ae3a3\" (UID: \"04e23989-853e-4b49-ba0f-1961d64ae3a3\") " Mar 18 09:04:58.051677 master-0 kubenswrapper[28766]: I0318 09:04:58.051603 28766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qwsfl\" (UniqueName: \"kubernetes.io/projected/04e23989-853e-4b49-ba0f-1961d64ae3a3-kube-api-access-qwsfl\") pod \"04e23989-853e-4b49-ba0f-1961d64ae3a3\" (UID: \"04e23989-853e-4b49-ba0f-1961d64ae3a3\") " Mar 18 09:04:58.051820 master-0 kubenswrapper[28766]: I0318 09:04:58.051790 28766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/04e23989-853e-4b49-ba0f-1961d64ae3a3-serving-cert\") pod \"04e23989-853e-4b49-ba0f-1961d64ae3a3\" (UID: \"04e23989-853e-4b49-ba0f-1961d64ae3a3\") " Mar 18 09:04:58.052044 master-0 kubenswrapper[28766]: I0318 09:04:58.051986 28766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/04e23989-853e-4b49-ba0f-1961d64ae3a3-config" (OuterVolumeSpecName: "config") 
pod "04e23989-853e-4b49-ba0f-1961d64ae3a3" (UID: "04e23989-853e-4b49-ba0f-1961d64ae3a3"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 09:04:58.052137 master-0 kubenswrapper[28766]: I0318 09:04:58.052066 28766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/04e23989-853e-4b49-ba0f-1961d64ae3a3-client-ca" (OuterVolumeSpecName: "client-ca") pod "04e23989-853e-4b49-ba0f-1961d64ae3a3" (UID: "04e23989-853e-4b49-ba0f-1961d64ae3a3"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 09:04:58.052388 master-0 kubenswrapper[28766]: I0318 09:04:58.052355 28766 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/04e23989-853e-4b49-ba0f-1961d64ae3a3-config\") on node \"master-0\" DevicePath \"\"" Mar 18 09:04:58.052388 master-0 kubenswrapper[28766]: I0318 09:04:58.052382 28766 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/04e23989-853e-4b49-ba0f-1961d64ae3a3-client-ca\") on node \"master-0\" DevicePath \"\"" Mar 18 09:04:58.052492 master-0 kubenswrapper[28766]: I0318 09:04:58.052402 28766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2m5wf\" (UniqueName: \"kubernetes.io/projected/4cc14de2-59e8-49e1-9eeb-f87c5e9d8a75-kube-api-access-2m5wf\") on node \"master-0\" DevicePath \"\"" Mar 18 09:04:58.052492 master-0 kubenswrapper[28766]: I0318 09:04:58.052417 28766 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4cc14de2-59e8-49e1-9eeb-f87c5e9d8a75-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 18 09:04:58.066370 master-0 kubenswrapper[28766]: I0318 09:04:58.066306 28766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/04e23989-853e-4b49-ba0f-1961d64ae3a3-kube-api-access-qwsfl" 
(OuterVolumeSpecName: "kube-api-access-qwsfl") pod "04e23989-853e-4b49-ba0f-1961d64ae3a3" (UID: "04e23989-853e-4b49-ba0f-1961d64ae3a3"). InnerVolumeSpecName "kube-api-access-qwsfl". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 09:04:58.072368 master-0 kubenswrapper[28766]: I0318 09:04:58.069230 28766 scope.go:117] "RemoveContainer" containerID="c1000328fdb806ec77d49cec50c1824461d4c39b599af7554159ee64748ea882" Mar 18 09:04:58.072368 master-0 kubenswrapper[28766]: I0318 09:04:58.071638 28766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6448dc88d8-cnd9q"] Mar 18 09:04:58.073983 master-0 kubenswrapper[28766]: I0318 09:04:58.073880 28766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04e23989-853e-4b49-ba0f-1961d64ae3a3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "04e23989-853e-4b49-ba0f-1961d64ae3a3" (UID: "04e23989-853e-4b49-ba0f-1961d64ae3a3"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 09:04:58.078092 master-0 kubenswrapper[28766]: I0318 09:04:58.078047 28766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-6448dc88d8-cnd9q"] Mar 18 09:04:58.093830 master-0 kubenswrapper[28766]: I0318 09:04:58.093778 28766 scope.go:117] "RemoveContainer" containerID="06e4ded156520e1a9b65d50f0935234c2ea91c89d6f3a493daf8d002e409884c" Mar 18 09:04:58.094634 master-0 kubenswrapper[28766]: E0318 09:04:58.094597 28766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"06e4ded156520e1a9b65d50f0935234c2ea91c89d6f3a493daf8d002e409884c\": container with ID starting with 06e4ded156520e1a9b65d50f0935234c2ea91c89d6f3a493daf8d002e409884c not found: ID does not exist" containerID="06e4ded156520e1a9b65d50f0935234c2ea91c89d6f3a493daf8d002e409884c" Mar 18 09:04:58.094707 master-0 kubenswrapper[28766]: I0318 09:04:58.094644 28766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"06e4ded156520e1a9b65d50f0935234c2ea91c89d6f3a493daf8d002e409884c"} err="failed to get container status \"06e4ded156520e1a9b65d50f0935234c2ea91c89d6f3a493daf8d002e409884c\": rpc error: code = NotFound desc = could not find container \"06e4ded156520e1a9b65d50f0935234c2ea91c89d6f3a493daf8d002e409884c\": container with ID starting with 06e4ded156520e1a9b65d50f0935234c2ea91c89d6f3a493daf8d002e409884c not found: ID does not exist" Mar 18 09:04:58.094707 master-0 kubenswrapper[28766]: I0318 09:04:58.094676 28766 scope.go:117] "RemoveContainer" containerID="c1000328fdb806ec77d49cec50c1824461d4c39b599af7554159ee64748ea882" Mar 18 09:04:58.095178 master-0 kubenswrapper[28766]: E0318 09:04:58.095125 28766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"c1000328fdb806ec77d49cec50c1824461d4c39b599af7554159ee64748ea882\": container with ID starting with c1000328fdb806ec77d49cec50c1824461d4c39b599af7554159ee64748ea882 not found: ID does not exist" containerID="c1000328fdb806ec77d49cec50c1824461d4c39b599af7554159ee64748ea882" Mar 18 09:04:58.095288 master-0 kubenswrapper[28766]: I0318 09:04:58.095231 28766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c1000328fdb806ec77d49cec50c1824461d4c39b599af7554159ee64748ea882"} err="failed to get container status \"c1000328fdb806ec77d49cec50c1824461d4c39b599af7554159ee64748ea882\": rpc error: code = NotFound desc = could not find container \"c1000328fdb806ec77d49cec50c1824461d4c39b599af7554159ee64748ea882\": container with ID starting with c1000328fdb806ec77d49cec50c1824461d4c39b599af7554159ee64748ea882 not found: ID does not exist" Mar 18 09:04:58.153812 master-0 kubenswrapper[28766]: I0318 09:04:58.153757 28766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qwsfl\" (UniqueName: \"kubernetes.io/projected/04e23989-853e-4b49-ba0f-1961d64ae3a3-kube-api-access-qwsfl\") on node \"master-0\" DevicePath \"\"" Mar 18 09:04:58.153812 master-0 kubenswrapper[28766]: I0318 09:04:58.153803 28766 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/04e23989-853e-4b49-ba0f-1961d64ae3a3-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 18 09:04:58.350118 master-0 kubenswrapper[28766]: I0318 09:04:58.349995 28766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-75749f878-qxnvp"] Mar 18 09:04:58.352380 master-0 kubenswrapper[28766]: I0318 09:04:58.352326 28766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-75749f878-qxnvp"] Mar 18 09:04:59.051342 master-0 kubenswrapper[28766]: I0318 09:04:59.051266 28766 kubelet.go:2421] 
"SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-7fffd64487-stp97"] Mar 18 09:04:59.052063 master-0 kubenswrapper[28766]: E0318 09:04:59.051657 28766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4cc14de2-59e8-49e1-9eeb-f87c5e9d8a75" containerName="controller-manager" Mar 18 09:04:59.052063 master-0 kubenswrapper[28766]: I0318 09:04:59.051674 28766 state_mem.go:107] "Deleted CPUSet assignment" podUID="4cc14de2-59e8-49e1-9eeb-f87c5e9d8a75" containerName="controller-manager" Mar 18 09:04:59.052063 master-0 kubenswrapper[28766]: E0318 09:04:59.051702 28766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04e23989-853e-4b49-ba0f-1961d64ae3a3" containerName="route-controller-manager" Mar 18 09:04:59.052063 master-0 kubenswrapper[28766]: I0318 09:04:59.051708 28766 state_mem.go:107] "Deleted CPUSet assignment" podUID="04e23989-853e-4b49-ba0f-1961d64ae3a3" containerName="route-controller-manager" Mar 18 09:04:59.052063 master-0 kubenswrapper[28766]: I0318 09:04:59.051876 28766 memory_manager.go:354] "RemoveStaleState removing state" podUID="4cc14de2-59e8-49e1-9eeb-f87c5e9d8a75" containerName="controller-manager" Mar 18 09:04:59.052063 master-0 kubenswrapper[28766]: I0318 09:04:59.051905 28766 memory_manager.go:354] "RemoveStaleState removing state" podUID="04e23989-853e-4b49-ba0f-1961d64ae3a3" containerName="route-controller-manager" Mar 18 09:04:59.052379 master-0 kubenswrapper[28766]: I0318 09:04:59.052348 28766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-74c57dc89b-mbtl8"] Mar 18 09:04:59.052519 master-0 kubenswrapper[28766]: E0318 09:04:59.052489 28766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4cc14de2-59e8-49e1-9eeb-f87c5e9d8a75" containerName="controller-manager" Mar 18 09:04:59.052519 master-0 kubenswrapper[28766]: I0318 09:04:59.052508 28766 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="4cc14de2-59e8-49e1-9eeb-f87c5e9d8a75" containerName="controller-manager" Mar 18 09:04:59.052679 master-0 kubenswrapper[28766]: I0318 09:04:59.052609 28766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7fffd64487-stp97" Mar 18 09:04:59.052780 master-0 kubenswrapper[28766]: I0318 09:04:59.052628 28766 memory_manager.go:354] "RemoveStaleState removing state" podUID="4cc14de2-59e8-49e1-9eeb-f87c5e9d8a75" containerName="controller-manager" Mar 18 09:04:59.055070 master-0 kubenswrapper[28766]: I0318 09:04:59.053156 28766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-74c57dc89b-mbtl8" Mar 18 09:04:59.055070 master-0 kubenswrapper[28766]: I0318 09:04:59.054752 28766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-rtlhv" Mar 18 09:04:59.059943 master-0 kubenswrapper[28766]: I0318 09:04:59.059725 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Mar 18 09:04:59.059943 master-0 kubenswrapper[28766]: I0318 09:04:59.059883 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Mar 18 09:04:59.060189 master-0 kubenswrapper[28766]: I0318 09:04:59.060098 28766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-9s29d" Mar 18 09:04:59.060189 master-0 kubenswrapper[28766]: I0318 09:04:59.060162 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Mar 18 09:04:59.060399 master-0 kubenswrapper[28766]: I0318 09:04:59.060160 28766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Mar 18 
09:04:59.061451 master-0 kubenswrapper[28766]: I0318 09:04:59.060645 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Mar 18 09:04:59.061451 master-0 kubenswrapper[28766]: I0318 09:04:59.060771 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Mar 18 09:04:59.061451 master-0 kubenswrapper[28766]: I0318 09:04:59.060814 28766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Mar 18 09:04:59.061451 master-0 kubenswrapper[28766]: I0318 09:04:59.060970 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Mar 18 09:04:59.061451 master-0 kubenswrapper[28766]: I0318 09:04:59.061104 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Mar 18 09:04:59.061451 master-0 kubenswrapper[28766]: I0318 09:04:59.061454 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Mar 18 09:04:59.062738 master-0 kubenswrapper[28766]: I0318 09:04:59.062678 28766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-74c57dc89b-mbtl8"]
Mar 18 09:04:59.070357 master-0 kubenswrapper[28766]: I0318 09:04:59.070252 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Mar 18 09:04:59.072392 master-0 kubenswrapper[28766]: I0318 09:04:59.072365 28766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7fffd64487-stp97"]
Mar 18 09:04:59.169132 master-0 kubenswrapper[28766]: I0318 09:04:59.169066 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4cd991d2-65f9-4680-8a32-387ba6cec008-proxy-ca-bundles\") pod \"controller-manager-7fffd64487-stp97\" (UID: \"4cd991d2-65f9-4680-8a32-387ba6cec008\") " pod="openshift-controller-manager/controller-manager-7fffd64487-stp97"
Mar 18 09:04:59.169369 master-0 kubenswrapper[28766]: I0318 09:04:59.169142 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0f411db3-bcee-46de-9439-815d01550e49-client-ca\") pod \"route-controller-manager-74c57dc89b-mbtl8\" (UID: \"0f411db3-bcee-46de-9439-815d01550e49\") " pod="openshift-route-controller-manager/route-controller-manager-74c57dc89b-mbtl8"
Mar 18 09:04:59.169369 master-0 kubenswrapper[28766]: I0318 09:04:59.169208 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4cd991d2-65f9-4680-8a32-387ba6cec008-config\") pod \"controller-manager-7fffd64487-stp97\" (UID: \"4cd991d2-65f9-4680-8a32-387ba6cec008\") " pod="openshift-controller-manager/controller-manager-7fffd64487-stp97"
Mar 18 09:04:59.169369 master-0 kubenswrapper[28766]: I0318 09:04:59.169256 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8b9kb\" (UniqueName: \"kubernetes.io/projected/0f411db3-bcee-46de-9439-815d01550e49-kube-api-access-8b9kb\") pod \"route-controller-manager-74c57dc89b-mbtl8\" (UID: \"0f411db3-bcee-46de-9439-815d01550e49\") " pod="openshift-route-controller-manager/route-controller-manager-74c57dc89b-mbtl8"
Mar 18 09:04:59.169469 master-0 kubenswrapper[28766]: I0318 09:04:59.169389 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4cd991d2-65f9-4680-8a32-387ba6cec008-serving-cert\") pod \"controller-manager-7fffd64487-stp97\" (UID: \"4cd991d2-65f9-4680-8a32-387ba6cec008\") " pod="openshift-controller-manager/controller-manager-7fffd64487-stp97"
Mar 18 09:04:59.169469 master-0 kubenswrapper[28766]: I0318 09:04:59.169438 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0f411db3-bcee-46de-9439-815d01550e49-serving-cert\") pod \"route-controller-manager-74c57dc89b-mbtl8\" (UID: \"0f411db3-bcee-46de-9439-815d01550e49\") " pod="openshift-route-controller-manager/route-controller-manager-74c57dc89b-mbtl8"
Mar 18 09:04:59.169572 master-0 kubenswrapper[28766]: I0318 09:04:59.169537 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nsw4z\" (UniqueName: \"kubernetes.io/projected/4cd991d2-65f9-4680-8a32-387ba6cec008-kube-api-access-nsw4z\") pod \"controller-manager-7fffd64487-stp97\" (UID: \"4cd991d2-65f9-4680-8a32-387ba6cec008\") " pod="openshift-controller-manager/controller-manager-7fffd64487-stp97"
Mar 18 09:04:59.169615 master-0 kubenswrapper[28766]: I0318 09:04:59.169581 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4cd991d2-65f9-4680-8a32-387ba6cec008-client-ca\") pod \"controller-manager-7fffd64487-stp97\" (UID: \"4cd991d2-65f9-4680-8a32-387ba6cec008\") " pod="openshift-controller-manager/controller-manager-7fffd64487-stp97"
Mar 18 09:04:59.169655 master-0 kubenswrapper[28766]: I0318 09:04:59.169611 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0f411db3-bcee-46de-9439-815d01550e49-config\") pod \"route-controller-manager-74c57dc89b-mbtl8\" (UID: \"0f411db3-bcee-46de-9439-815d01550e49\") " pod="openshift-route-controller-manager/route-controller-manager-74c57dc89b-mbtl8"
Mar 18 09:04:59.246559 master-0
kubenswrapper[28766]: I0318 09:04:59.246496 28766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="04e23989-853e-4b49-ba0f-1961d64ae3a3" path="/var/lib/kubelet/pods/04e23989-853e-4b49-ba0f-1961d64ae3a3/volumes"
Mar 18 09:04:59.247377 master-0 kubenswrapper[28766]: I0318 09:04:59.247355 28766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4cc14de2-59e8-49e1-9eeb-f87c5e9d8a75" path="/var/lib/kubelet/pods/4cc14de2-59e8-49e1-9eeb-f87c5e9d8a75/volumes"
Mar 18 09:04:59.270842 master-0 kubenswrapper[28766]: I0318 09:04:59.270781 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4cd991d2-65f9-4680-8a32-387ba6cec008-config\") pod \"controller-manager-7fffd64487-stp97\" (UID: \"4cd991d2-65f9-4680-8a32-387ba6cec008\") " pod="openshift-controller-manager/controller-manager-7fffd64487-stp97"
Mar 18 09:04:59.270842 master-0 kubenswrapper[28766]: I0318 09:04:59.270866 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8b9kb\" (UniqueName: \"kubernetes.io/projected/0f411db3-bcee-46de-9439-815d01550e49-kube-api-access-8b9kb\") pod \"route-controller-manager-74c57dc89b-mbtl8\" (UID: \"0f411db3-bcee-46de-9439-815d01550e49\") " pod="openshift-route-controller-manager/route-controller-manager-74c57dc89b-mbtl8"
Mar 18 09:04:59.271235 master-0 kubenswrapper[28766]: I0318 09:04:59.270900 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4cd991d2-65f9-4680-8a32-387ba6cec008-serving-cert\") pod \"controller-manager-7fffd64487-stp97\" (UID: \"4cd991d2-65f9-4680-8a32-387ba6cec008\") " pod="openshift-controller-manager/controller-manager-7fffd64487-stp97"
Mar 18 09:04:59.271235 master-0 kubenswrapper[28766]: I0318 09:04:59.271208 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0f411db3-bcee-46de-9439-815d01550e49-serving-cert\") pod \"route-controller-manager-74c57dc89b-mbtl8\" (UID: \"0f411db3-bcee-46de-9439-815d01550e49\") " pod="openshift-route-controller-manager/route-controller-manager-74c57dc89b-mbtl8"
Mar 18 09:04:59.271425 master-0 kubenswrapper[28766]: I0318 09:04:59.271261 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nsw4z\" (UniqueName: \"kubernetes.io/projected/4cd991d2-65f9-4680-8a32-387ba6cec008-kube-api-access-nsw4z\") pod \"controller-manager-7fffd64487-stp97\" (UID: \"4cd991d2-65f9-4680-8a32-387ba6cec008\") " pod="openshift-controller-manager/controller-manager-7fffd64487-stp97"
Mar 18 09:04:59.271425 master-0 kubenswrapper[28766]: I0318 09:04:59.271288 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4cd991d2-65f9-4680-8a32-387ba6cec008-client-ca\") pod \"controller-manager-7fffd64487-stp97\" (UID: \"4cd991d2-65f9-4680-8a32-387ba6cec008\") " pod="openshift-controller-manager/controller-manager-7fffd64487-stp97"
Mar 18 09:04:59.271425 master-0 kubenswrapper[28766]: I0318 09:04:59.271317 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0f411db3-bcee-46de-9439-815d01550e49-config\") pod \"route-controller-manager-74c57dc89b-mbtl8\" (UID: \"0f411db3-bcee-46de-9439-815d01550e49\") " pod="openshift-route-controller-manager/route-controller-manager-74c57dc89b-mbtl8"
Mar 18 09:04:59.271425 master-0 kubenswrapper[28766]: I0318 09:04:59.271356 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4cd991d2-65f9-4680-8a32-387ba6cec008-proxy-ca-bundles\") pod \"controller-manager-7fffd64487-stp97\" (UID: \"4cd991d2-65f9-4680-8a32-387ba6cec008\") " pod="openshift-controller-manager/controller-manager-7fffd64487-stp97"
Mar 18 09:04:59.271425 master-0 kubenswrapper[28766]: I0318 09:04:59.271400 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0f411db3-bcee-46de-9439-815d01550e49-client-ca\") pod \"route-controller-manager-74c57dc89b-mbtl8\" (UID: \"0f411db3-bcee-46de-9439-815d01550e49\") " pod="openshift-route-controller-manager/route-controller-manager-74c57dc89b-mbtl8"
Mar 18 09:04:59.272773 master-0 kubenswrapper[28766]: I0318 09:04:59.272742 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4cd991d2-65f9-4680-8a32-387ba6cec008-client-ca\") pod \"controller-manager-7fffd64487-stp97\" (UID: \"4cd991d2-65f9-4680-8a32-387ba6cec008\") " pod="openshift-controller-manager/controller-manager-7fffd64487-stp97"
Mar 18 09:04:59.273034 master-0 kubenswrapper[28766]: I0318 09:04:59.272995 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4cd991d2-65f9-4680-8a32-387ba6cec008-config\") pod \"controller-manager-7fffd64487-stp97\" (UID: \"4cd991d2-65f9-4680-8a32-387ba6cec008\") " pod="openshift-controller-manager/controller-manager-7fffd64487-stp97"
Mar 18 09:04:59.273098 master-0 kubenswrapper[28766]: I0318 09:04:59.273006 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0f411db3-bcee-46de-9439-815d01550e49-config\") pod \"route-controller-manager-74c57dc89b-mbtl8\" (UID: \"0f411db3-bcee-46de-9439-815d01550e49\") " pod="openshift-route-controller-manager/route-controller-manager-74c57dc89b-mbtl8"
Mar 18 09:04:59.273164 master-0 kubenswrapper[28766]: I0318 09:04:59.273140 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0f411db3-bcee-46de-9439-815d01550e49-client-ca\") pod \"route-controller-manager-74c57dc89b-mbtl8\" (UID: \"0f411db3-bcee-46de-9439-815d01550e49\") " pod="openshift-route-controller-manager/route-controller-manager-74c57dc89b-mbtl8"
Mar 18 09:04:59.273574 master-0 kubenswrapper[28766]: I0318 09:04:59.273530 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4cd991d2-65f9-4680-8a32-387ba6cec008-proxy-ca-bundles\") pod \"controller-manager-7fffd64487-stp97\" (UID: \"4cd991d2-65f9-4680-8a32-387ba6cec008\") " pod="openshift-controller-manager/controller-manager-7fffd64487-stp97"
Mar 18 09:04:59.286345 master-0 kubenswrapper[28766]: I0318 09:04:59.279381 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0f411db3-bcee-46de-9439-815d01550e49-serving-cert\") pod \"route-controller-manager-74c57dc89b-mbtl8\" (UID: \"0f411db3-bcee-46de-9439-815d01550e49\") " pod="openshift-route-controller-manager/route-controller-manager-74c57dc89b-mbtl8"
Mar 18 09:04:59.286345 master-0 kubenswrapper[28766]: I0318 09:04:59.283666 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4cd991d2-65f9-4680-8a32-387ba6cec008-serving-cert\") pod \"controller-manager-7fffd64487-stp97\" (UID: \"4cd991d2-65f9-4680-8a32-387ba6cec008\") " pod="openshift-controller-manager/controller-manager-7fffd64487-stp97"
Mar 18 09:04:59.295190 master-0 kubenswrapper[28766]: I0318 09:04:59.293721 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nsw4z\" (UniqueName: \"kubernetes.io/projected/4cd991d2-65f9-4680-8a32-387ba6cec008-kube-api-access-nsw4z\") pod \"controller-manager-7fffd64487-stp97\" (UID: \"4cd991d2-65f9-4680-8a32-387ba6cec008\") " pod="openshift-controller-manager/controller-manager-7fffd64487-stp97"
Mar 18
09:04:59.295418 master-0 kubenswrapper[28766]: I0318 09:04:59.295287 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8b9kb\" (UniqueName: \"kubernetes.io/projected/0f411db3-bcee-46de-9439-815d01550e49-kube-api-access-8b9kb\") pod \"route-controller-manager-74c57dc89b-mbtl8\" (UID: \"0f411db3-bcee-46de-9439-815d01550e49\") " pod="openshift-route-controller-manager/route-controller-manager-74c57dc89b-mbtl8"
Mar 18 09:04:59.386581 master-0 kubenswrapper[28766]: I0318 09:04:59.386437 28766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7fffd64487-stp97"
Mar 18 09:04:59.412883 master-0 kubenswrapper[28766]: I0318 09:04:59.406238 28766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-74c57dc89b-mbtl8"
Mar 18 09:04:59.875841 master-0 kubenswrapper[28766]: I0318 09:04:59.875801 28766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7fffd64487-stp97"]
Mar 18 09:04:59.889489 master-0 kubenswrapper[28766]: W0318 09:04:59.889439 28766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4cd991d2_65f9_4680_8a32_387ba6cec008.slice/crio-bc6d8eab0adab55c7cc07fb21e2f5d9bc8a4c3253282a1c627d4238dced0d5a3 WatchSource:0}: Error finding container bc6d8eab0adab55c7cc07fb21e2f5d9bc8a4c3253282a1c627d4238dced0d5a3: Status 404 returned error can't find the container with id bc6d8eab0adab55c7cc07fb21e2f5d9bc8a4c3253282a1c627d4238dced0d5a3
Mar 18 09:04:59.962938 master-0 kubenswrapper[28766]: I0318 09:04:59.961949 28766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-74c57dc89b-mbtl8"]
Mar 18 09:05:00.046982 master-0 kubenswrapper[28766]: I0318 09:05:00.046926 28766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-74c57dc89b-mbtl8" event={"ID":"0f411db3-bcee-46de-9439-815d01550e49","Type":"ContainerStarted","Data":"980fb38fa580903520d04fe2d6fe9a941a39f22734baca1d634a72ad2f66207e"}
Mar 18 09:05:00.049623 master-0 kubenswrapper[28766]: I0318 09:05:00.049569 28766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7fffd64487-stp97" event={"ID":"4cd991d2-65f9-4680-8a32-387ba6cec008","Type":"ContainerStarted","Data":"7f6b622b9aaccd5bfd7ac01a4deee2979a62da5a7998eb957533e3f2fae0216f"}
Mar 18 09:05:00.049715 master-0 kubenswrapper[28766]: I0318 09:05:00.049624 28766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7fffd64487-stp97" event={"ID":"4cd991d2-65f9-4680-8a32-387ba6cec008","Type":"ContainerStarted","Data":"bc6d8eab0adab55c7cc07fb21e2f5d9bc8a4c3253282a1c627d4238dced0d5a3"}
Mar 18 09:05:00.050041 master-0 kubenswrapper[28766]: I0318 09:05:00.050001 28766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-7fffd64487-stp97"
Mar 18 09:05:00.052984 master-0 kubenswrapper[28766]: I0318 09:05:00.052939 28766 patch_prober.go:28] interesting pod/controller-manager-7fffd64487-stp97 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.128.0.93:8443/healthz\": dial tcp 10.128.0.93:8443: connect: connection refused" start-of-body=
Mar 18 09:05:00.061338 master-0 kubenswrapper[28766]: I0318 09:05:00.052991 28766 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-7fffd64487-stp97" podUID="4cd991d2-65f9-4680-8a32-387ba6cec008" containerName="controller-manager" probeResult="failure" output="Get \"https://10.128.0.93:8443/healthz\": dial tcp 10.128.0.93:8443: connect: connection refused"
Mar 18 09:05:00.076569 master-0 kubenswrapper[28766]: I0318 09:05:00.076465 28766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-7fffd64487-stp97" podStartSLOduration=3.076441288 podStartE2EDuration="3.076441288s" podCreationTimestamp="2026-03-18 09:04:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 09:05:00.075344528 +0000 UTC m=+53.089603214" watchObservedRunningTime="2026-03-18 09:05:00.076441288 +0000 UTC m=+53.090699954"
Mar 18 09:05:01.003607 master-0 kubenswrapper[28766]: I0318 09:05:01.003521 28766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-5d57b58fd4-tcq7b"]
Mar 18 09:05:01.004564 master-0 kubenswrapper[28766]: I0318 09:05:01.004524 28766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-5d57b58fd4-tcq7b"
Mar 18 09:05:01.016936 master-0 kubenswrapper[28766]: I0318 09:05:01.016844 28766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert"
Mar 18 09:05:01.017197 master-0 kubenswrapper[28766]: I0318 09:05:01.017089 28766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config"
Mar 18 09:05:01.017197 master-0 kubenswrapper[28766]: I0318 09:05:01.017181 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config"
Mar 18 09:05:01.017468 master-0 kubenswrapper[28766]: I0318 09:05:01.017436 28766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-dcx6f"
Mar 18 09:05:01.017624 master-0 kubenswrapper[28766]: I0318 09:05:01.017572 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert"
Mar 18 09:05:01.025301 master-0 kubenswrapper[28766]: I0318 09:05:01.025243 28766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-5d57b58fd4-tcq7b"]
Mar 18 09:05:01.032585 master-0 kubenswrapper[28766]: I0318 09:05:01.032537 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca"
Mar 18 09:05:01.074563 master-0 kubenswrapper[28766]: I0318 09:05:01.073084 28766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-74c57dc89b-mbtl8" event={"ID":"0f411db3-bcee-46de-9439-815d01550e49","Type":"ContainerStarted","Data":"fa3b1b53e9afa0604a40fbd60850a3b1c6dca6181d8298070ff0f9f0e7c1ca9f"}
Mar 18 09:05:01.074563 master-0 kubenswrapper[28766]: I0318 09:05:01.074578 28766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-74c57dc89b-mbtl8"
Mar 18 09:05:01.078236 master-0 kubenswrapper[28766]: I0318 09:05:01.078200 28766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-7fffd64487-stp97"
Mar 18 09:05:01.079278 master-0 kubenswrapper[28766]: I0318 09:05:01.079234 28766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-74c57dc89b-mbtl8"
Mar 18 09:05:01.102172 master-0 kubenswrapper[28766]: I0318 09:05:01.102109 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/9c577244-74c7-4a1c-8fec-0a89bd7e3ed1-console-serving-cert\") pod \"console-5d57b58fd4-tcq7b\" (UID: \"9c577244-74c7-4a1c-8fec-0a89bd7e3ed1\") " pod="openshift-console/console-5d57b58fd4-tcq7b"
Mar 18 09:05:01.102396 master-0 kubenswrapper[28766]: I0318 09:05:01.102234 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tgbn5\" (UniqueName: \"kubernetes.io/projected/9c577244-74c7-4a1c-8fec-0a89bd7e3ed1-kube-api-access-tgbn5\") pod \"console-5d57b58fd4-tcq7b\" (UID: \"9c577244-74c7-4a1c-8fec-0a89bd7e3ed1\") " pod="openshift-console/console-5d57b58fd4-tcq7b"
Mar 18 09:05:01.102396 master-0 kubenswrapper[28766]: I0318 09:05:01.102364 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/9c577244-74c7-4a1c-8fec-0a89bd7e3ed1-service-ca\") pod \"console-5d57b58fd4-tcq7b\" (UID: \"9c577244-74c7-4a1c-8fec-0a89bd7e3ed1\") " pod="openshift-console/console-5d57b58fd4-tcq7b"
Mar 18 09:05:01.102456 master-0 kubenswrapper[28766]: I0318 09:05:01.102394 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/9c577244-74c7-4a1c-8fec-0a89bd7e3ed1-console-oauth-config\") pod \"console-5d57b58fd4-tcq7b\" (UID: \"9c577244-74c7-4a1c-8fec-0a89bd7e3ed1\") " pod="openshift-console/console-5d57b58fd4-tcq7b"
Mar 18 09:05:01.102617 master-0 kubenswrapper[28766]: I0318 09:05:01.102572 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/9c577244-74c7-4a1c-8fec-0a89bd7e3ed1-console-config\") pod \"console-5d57b58fd4-tcq7b\" (UID: \"9c577244-74c7-4a1c-8fec-0a89bd7e3ed1\") " pod="openshift-console/console-5d57b58fd4-tcq7b"
Mar 18 09:05:01.102666 master-0 kubenswrapper[28766]: I0318 09:05:01.102652 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/9c577244-74c7-4a1c-8fec-0a89bd7e3ed1-oauth-serving-cert\") pod \"console-5d57b58fd4-tcq7b\" (UID: \"9c577244-74c7-4a1c-8fec-0a89bd7e3ed1\") " pod="openshift-console/console-5d57b58fd4-tcq7b"
Mar 18 09:05:01.103333 master-0 kubenswrapper[28766]: I0318 09:05:01.103274 28766
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-74c57dc89b-mbtl8" podStartSLOduration=4.103260778 podStartE2EDuration="4.103260778s" podCreationTimestamp="2026-03-18 09:04:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 09:05:01.100697678 +0000 UTC m=+54.114956344" watchObservedRunningTime="2026-03-18 09:05:01.103260778 +0000 UTC m=+54.117519444"
Mar 18 09:05:01.204932 master-0 kubenswrapper[28766]: I0318 09:05:01.204844 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/9c577244-74c7-4a1c-8fec-0a89bd7e3ed1-console-config\") pod \"console-5d57b58fd4-tcq7b\" (UID: \"9c577244-74c7-4a1c-8fec-0a89bd7e3ed1\") " pod="openshift-console/console-5d57b58fd4-tcq7b"
Mar 18 09:05:01.205186 master-0 kubenswrapper[28766]: I0318 09:05:01.204948 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/9c577244-74c7-4a1c-8fec-0a89bd7e3ed1-oauth-serving-cert\") pod \"console-5d57b58fd4-tcq7b\" (UID: \"9c577244-74c7-4a1c-8fec-0a89bd7e3ed1\") " pod="openshift-console/console-5d57b58fd4-tcq7b"
Mar 18 09:05:01.205186 master-0 kubenswrapper[28766]: I0318 09:05:01.205051 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/9c577244-74c7-4a1c-8fec-0a89bd7e3ed1-console-serving-cert\") pod \"console-5d57b58fd4-tcq7b\" (UID: \"9c577244-74c7-4a1c-8fec-0a89bd7e3ed1\") " pod="openshift-console/console-5d57b58fd4-tcq7b"
Mar 18 09:05:01.205403 master-0 kubenswrapper[28766]: I0318 09:05:01.205343 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tgbn5\" (UniqueName: \"kubernetes.io/projected/9c577244-74c7-4a1c-8fec-0a89bd7e3ed1-kube-api-access-tgbn5\") pod \"console-5d57b58fd4-tcq7b\" (UID: \"9c577244-74c7-4a1c-8fec-0a89bd7e3ed1\") " pod="openshift-console/console-5d57b58fd4-tcq7b"
Mar 18 09:05:01.205490 master-0 kubenswrapper[28766]: I0318 09:05:01.205465 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/9c577244-74c7-4a1c-8fec-0a89bd7e3ed1-service-ca\") pod \"console-5d57b58fd4-tcq7b\" (UID: \"9c577244-74c7-4a1c-8fec-0a89bd7e3ed1\") " pod="openshift-console/console-5d57b58fd4-tcq7b"
Mar 18 09:05:01.205686 master-0 kubenswrapper[28766]: I0318 09:05:01.205638 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/9c577244-74c7-4a1c-8fec-0a89bd7e3ed1-console-config\") pod \"console-5d57b58fd4-tcq7b\" (UID: \"9c577244-74c7-4a1c-8fec-0a89bd7e3ed1\") " pod="openshift-console/console-5d57b58fd4-tcq7b"
Mar 18 09:05:01.205774 master-0 kubenswrapper[28766]: I0318 09:05:01.205749 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/9c577244-74c7-4a1c-8fec-0a89bd7e3ed1-console-oauth-config\") pod \"console-5d57b58fd4-tcq7b\" (UID: \"9c577244-74c7-4a1c-8fec-0a89bd7e3ed1\") " pod="openshift-console/console-5d57b58fd4-tcq7b"
Mar 18 09:05:01.206920 master-0 kubenswrapper[28766]: I0318 09:05:01.206865 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/9c577244-74c7-4a1c-8fec-0a89bd7e3ed1-oauth-serving-cert\") pod \"console-5d57b58fd4-tcq7b\" (UID: \"9c577244-74c7-4a1c-8fec-0a89bd7e3ed1\") " pod="openshift-console/console-5d57b58fd4-tcq7b"
Mar 18 09:05:01.208013 master-0 kubenswrapper[28766]: I0318 09:05:01.207972 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/9c577244-74c7-4a1c-8fec-0a89bd7e3ed1-service-ca\") pod \"console-5d57b58fd4-tcq7b\" (UID: \"9c577244-74c7-4a1c-8fec-0a89bd7e3ed1\") " pod="openshift-console/console-5d57b58fd4-tcq7b"
Mar 18 09:05:01.218103 master-0 kubenswrapper[28766]: I0318 09:05:01.218013 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/9c577244-74c7-4a1c-8fec-0a89bd7e3ed1-console-oauth-config\") pod \"console-5d57b58fd4-tcq7b\" (UID: \"9c577244-74c7-4a1c-8fec-0a89bd7e3ed1\") " pod="openshift-console/console-5d57b58fd4-tcq7b"
Mar 18 09:05:01.221395 master-0 kubenswrapper[28766]: I0318 09:05:01.221345 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/9c577244-74c7-4a1c-8fec-0a89bd7e3ed1-console-serving-cert\") pod \"console-5d57b58fd4-tcq7b\" (UID: \"9c577244-74c7-4a1c-8fec-0a89bd7e3ed1\") " pod="openshift-console/console-5d57b58fd4-tcq7b"
Mar 18 09:05:01.229559 master-0 kubenswrapper[28766]: I0318 09:05:01.229521 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tgbn5\" (UniqueName: \"kubernetes.io/projected/9c577244-74c7-4a1c-8fec-0a89bd7e3ed1-kube-api-access-tgbn5\") pod \"console-5d57b58fd4-tcq7b\" (UID: \"9c577244-74c7-4a1c-8fec-0a89bd7e3ed1\") " pod="openshift-console/console-5d57b58fd4-tcq7b"
Mar 18 09:05:01.333655 master-0 kubenswrapper[28766]: I0318 09:05:01.333518 28766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-5d57b58fd4-tcq7b"
Mar 18 09:05:01.801922 master-0 kubenswrapper[28766]: I0318 09:05:01.797626 28766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-5d57b58fd4-tcq7b"]
Mar 18 09:05:01.819952 master-0 kubenswrapper[28766]: W0318 09:05:01.819892 28766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9c577244_74c7_4a1c_8fec_0a89bd7e3ed1.slice/crio-af17e3beda13aae51d45aacc7a3397c8b0222a2b4a9d65440dc65d7ee9351292 WatchSource:0}: Error finding container af17e3beda13aae51d45aacc7a3397c8b0222a2b4a9d65440dc65d7ee9351292: Status 404 returned error can't find the container with id af17e3beda13aae51d45aacc7a3397c8b0222a2b4a9d65440dc65d7ee9351292
Mar 18 09:05:02.083701 master-0 kubenswrapper[28766]: I0318 09:05:02.083565 28766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5d57b58fd4-tcq7b" event={"ID":"9c577244-74c7-4a1c-8fec-0a89bd7e3ed1","Type":"ContainerStarted","Data":"af17e3beda13aae51d45aacc7a3397c8b0222a2b4a9d65440dc65d7ee9351292"}
Mar 18 09:05:02.510735 master-0 kubenswrapper[28766]: I0318 09:05:02.509930 28766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-6dd57c659d-b5n72"]
Mar 18 09:05:07.166252 master-0 kubenswrapper[28766]: I0318 09:05:07.166155 28766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5d57b58fd4-tcq7b" event={"ID":"9c577244-74c7-4a1c-8fec-0a89bd7e3ed1","Type":"ContainerStarted","Data":"36c8b71ee86e5c48866948fa02958a43f463f07dcd83e76ff5bb64a1b30db24d"}
Mar 18 09:05:07.262224 master-0 kubenswrapper[28766]: I0318 09:05:07.262101 28766 scope.go:117] "RemoveContainer" containerID="5c751dbb03b0e78f3ed7a9a2441228c32321443d29de48b1bf17ef0e83072bd3"
Mar 18 09:05:07.264043 master-0 kubenswrapper[28766]: I0318 09:05:07.263765 28766 pod_startup_latency_tracker.go:104] "Observed pod
startup duration" pod="openshift-console/console-5d57b58fd4-tcq7b" podStartSLOduration=2.5070364400000003 podStartE2EDuration="7.263751183s" podCreationTimestamp="2026-03-18 09:05:00 +0000 UTC" firstStartedPulling="2026-03-18 09:05:01.824123408 +0000 UTC m=+54.838382074" lastFinishedPulling="2026-03-18 09:05:06.580838151 +0000 UTC m=+59.595096817" observedRunningTime="2026-03-18 09:05:07.261147213 +0000 UTC m=+60.275405879" watchObservedRunningTime="2026-03-18 09:05:07.263751183 +0000 UTC m=+60.278009849"
Mar 18 09:05:07.299310 master-0 kubenswrapper[28766]: I0318 09:05:07.295910 28766 scope.go:117] "RemoveContainer" containerID="9e36a51bcf12ae7db2a94f2fd56063ee6085dd854239e6802000e5e8cda9a85b"
Mar 18 09:05:07.320075 master-0 kubenswrapper[28766]: I0318 09:05:07.319636 28766 scope.go:117] "RemoveContainer" containerID="965c96bceffdf0d2dfe6811ad54d4d08d2afc86948c8800b709c2385cc93d84e"
Mar 18 09:05:11.208501 master-0 kubenswrapper[28766]: I0318 09:05:11.208316 28766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-bd9677648-tq84g"]
Mar 18 09:05:11.218867 master-0 kubenswrapper[28766]: I0318 09:05:11.210307 28766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-bd9677648-tq84g"
Mar 18 09:05:11.225637 master-0 kubenswrapper[28766]: I0318 09:05:11.225600 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle"
Mar 18 09:05:11.272994 master-0 kubenswrapper[28766]: I0318 09:05:11.272918 28766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-bd9677648-tq84g"]
Mar 18 09:05:11.337356 master-0 kubenswrapper[28766]: I0318 09:05:11.334560 28766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-5d57b58fd4-tcq7b"
Mar 18 09:05:11.337356 master-0 kubenswrapper[28766]: I0318 09:05:11.334607 28766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-5d57b58fd4-tcq7b"
Mar 18 09:05:11.337633 master-0 kubenswrapper[28766]: I0318 09:05:11.337539 28766 patch_prober.go:28] interesting pod/console-5d57b58fd4-tcq7b container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.95:8443/health\": dial tcp 10.128.0.95:8443: connect: connection refused" start-of-body=
Mar 18 09:05:11.337633 master-0 kubenswrapper[28766]: I0318 09:05:11.337586 28766 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-5d57b58fd4-tcq7b" podUID="9c577244-74c7-4a1c-8fec-0a89bd7e3ed1" containerName="console" probeResult="failure" output="Get \"https://10.128.0.95:8443/health\": dial tcp 10.128.0.95:8443: connect: connection refused"
Mar 18 09:05:11.339415 master-0 kubenswrapper[28766]: I0318 09:05:11.339210 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e3d66c24-e87e-489f-8474-277b2add6768-trusted-ca-bundle\") pod \"console-bd9677648-tq84g\" (UID: \"e3d66c24-e87e-489f-8474-277b2add6768\") " pod="openshift-console/console-bd9677648-tq84g"
Mar 18 09:05:11.339415 master-0 kubenswrapper[28766]: I0318 09:05:11.339306 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/e3d66c24-e87e-489f-8474-277b2add6768-console-oauth-config\") pod \"console-bd9677648-tq84g\" (UID: \"e3d66c24-e87e-489f-8474-277b2add6768\") " pod="openshift-console/console-bd9677648-tq84g"
Mar 18 09:05:11.339415 master-0 kubenswrapper[28766]: I0318 09:05:11.339362 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v8bqn\" (UniqueName: \"kubernetes.io/projected/e3d66c24-e87e-489f-8474-277b2add6768-kube-api-access-v8bqn\") pod \"console-bd9677648-tq84g\" (UID: \"e3d66c24-e87e-489f-8474-277b2add6768\") " pod="openshift-console/console-bd9677648-tq84g"
Mar 18 09:05:11.339415 master-0 kubenswrapper[28766]: I0318 09:05:11.339387 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/e3d66c24-e87e-489f-8474-277b2add6768-console-serving-cert\") pod \"console-bd9677648-tq84g\" (UID: \"e3d66c24-e87e-489f-8474-277b2add6768\") " pod="openshift-console/console-bd9677648-tq84g"
Mar 18 09:05:11.339582 master-0 kubenswrapper[28766]: I0318 09:05:11.339430 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/e3d66c24-e87e-489f-8474-277b2add6768-service-ca\") pod \"console-bd9677648-tq84g\" (UID: \"e3d66c24-e87e-489f-8474-277b2add6768\") " pod="openshift-console/console-bd9677648-tq84g"
Mar 18 09:05:11.339582 master-0 kubenswrapper[28766]: I0318 09:05:11.339487 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/e3d66c24-e87e-489f-8474-277b2add6768-oauth-serving-cert\") pod \"console-bd9677648-tq84g\" (UID: \"e3d66c24-e87e-489f-8474-277b2add6768\") " pod="openshift-console/console-bd9677648-tq84g"
Mar 18 09:05:11.339644 master-0 kubenswrapper[28766]: I0318 09:05:11.339601 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/e3d66c24-e87e-489f-8474-277b2add6768-console-config\") pod \"console-bd9677648-tq84g\" (UID: \"e3d66c24-e87e-489f-8474-277b2add6768\") " pod="openshift-console/console-bd9677648-tq84g"
Mar 18 09:05:11.440876 master-0 kubenswrapper[28766]: I0318 09:05:11.440795 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/e3d66c24-e87e-489f-8474-277b2add6768-console-config\") pod \"console-bd9677648-tq84g\" (UID: \"e3d66c24-e87e-489f-8474-277b2add6768\") " pod="openshift-console/console-bd9677648-tq84g"
Mar 18 09:05:11.441130 master-0 kubenswrapper[28766]: I0318 09:05:11.440972 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e3d66c24-e87e-489f-8474-277b2add6768-trusted-ca-bundle\") pod \"console-bd9677648-tq84g\" (UID: \"e3d66c24-e87e-489f-8474-277b2add6768\") " pod="openshift-console/console-bd9677648-tq84g"
Mar 18 09:05:11.441130 master-0 kubenswrapper[28766]: I0318 09:05:11.440997 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/e3d66c24-e87e-489f-8474-277b2add6768-console-oauth-config\") pod \"console-bd9677648-tq84g\" (UID: \"e3d66c24-e87e-489f-8474-277b2add6768\") " pod="openshift-console/console-bd9677648-tq84g"
Mar 18 09:05:11.441130 master-0 kubenswrapper[28766]: I0318 09:05:11.441047 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/e3d66c24-e87e-489f-8474-277b2add6768-console-serving-cert\") pod \"console-bd9677648-tq84g\" (UID: \"e3d66c24-e87e-489f-8474-277b2add6768\") " pod="openshift-console/console-bd9677648-tq84g"
Mar 18 09:05:11.441130 master-0 kubenswrapper[28766]: I0318 09:05:11.441072 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v8bqn\" (UniqueName: \"kubernetes.io/projected/e3d66c24-e87e-489f-8474-277b2add6768-kube-api-access-v8bqn\") pod \"console-bd9677648-tq84g\" (UID: \"e3d66c24-e87e-489f-8474-277b2add6768\") " pod="openshift-console/console-bd9677648-tq84g"
Mar 18 09:05:11.441130 master-0 kubenswrapper[28766]: I0318 09:05:11.441087 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/e3d66c24-e87e-489f-8474-277b2add6768-service-ca\") pod \"console-bd9677648-tq84g\" (UID: \"e3d66c24-e87e-489f-8474-277b2add6768\") " pod="openshift-console/console-bd9677648-tq84g"
Mar 18 09:05:11.441379 master-0 kubenswrapper[28766]: I0318 09:05:11.441137 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/e3d66c24-e87e-489f-8474-277b2add6768-oauth-serving-cert\") pod \"console-bd9677648-tq84g\" (UID: \"e3d66c24-e87e-489f-8474-277b2add6768\") " pod="openshift-console/console-bd9677648-tq84g"
Mar 18 09:05:11.445398 master-0 kubenswrapper[28766]: I0318 09:05:11.442466 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/e3d66c24-e87e-489f-8474-277b2add6768-oauth-serving-cert\") pod \"console-bd9677648-tq84g\" (UID: \"e3d66c24-e87e-489f-8474-277b2add6768\") " pod="openshift-console/console-bd9677648-tq84g"
Mar 18 09:05:11.445398 master-0 kubenswrapper[28766]: I0318 09:05:11.442876 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume
\"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e3d66c24-e87e-489f-8474-277b2add6768-trusted-ca-bundle\") pod \"console-bd9677648-tq84g\" (UID: \"e3d66c24-e87e-489f-8474-277b2add6768\") " pod="openshift-console/console-bd9677648-tq84g" Mar 18 09:05:11.445398 master-0 kubenswrapper[28766]: I0318 09:05:11.443436 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/e3d66c24-e87e-489f-8474-277b2add6768-service-ca\") pod \"console-bd9677648-tq84g\" (UID: \"e3d66c24-e87e-489f-8474-277b2add6768\") " pod="openshift-console/console-bd9677648-tq84g" Mar 18 09:05:11.449876 master-0 kubenswrapper[28766]: I0318 09:05:11.446477 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/e3d66c24-e87e-489f-8474-277b2add6768-console-config\") pod \"console-bd9677648-tq84g\" (UID: \"e3d66c24-e87e-489f-8474-277b2add6768\") " pod="openshift-console/console-bd9677648-tq84g" Mar 18 09:05:11.450822 master-0 kubenswrapper[28766]: I0318 09:05:11.450792 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/e3d66c24-e87e-489f-8474-277b2add6768-console-serving-cert\") pod \"console-bd9677648-tq84g\" (UID: \"e3d66c24-e87e-489f-8474-277b2add6768\") " pod="openshift-console/console-bd9677648-tq84g" Mar 18 09:05:11.455606 master-0 kubenswrapper[28766]: I0318 09:05:11.455531 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/e3d66c24-e87e-489f-8474-277b2add6768-console-oauth-config\") pod \"console-bd9677648-tq84g\" (UID: \"e3d66c24-e87e-489f-8474-277b2add6768\") " pod="openshift-console/console-bd9677648-tq84g" Mar 18 09:05:11.463730 master-0 kubenswrapper[28766]: I0318 09:05:11.463573 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-v8bqn\" (UniqueName: \"kubernetes.io/projected/e3d66c24-e87e-489f-8474-277b2add6768-kube-api-access-v8bqn\") pod \"console-bd9677648-tq84g\" (UID: \"e3d66c24-e87e-489f-8474-277b2add6768\") " pod="openshift-console/console-bd9677648-tq84g" Mar 18 09:05:11.549488 master-0 kubenswrapper[28766]: I0318 09:05:11.549434 28766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-bd9677648-tq84g" Mar 18 09:05:11.974367 master-0 kubenswrapper[28766]: I0318 09:05:11.974272 28766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-bd9677648-tq84g"] Mar 18 09:05:12.257577 master-0 kubenswrapper[28766]: I0318 09:05:12.257474 28766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-bd9677648-tq84g" event={"ID":"e3d66c24-e87e-489f-8474-277b2add6768","Type":"ContainerStarted","Data":"4970abeeaa8b1ae3a4db6508e783a24b87b1e4132fa771ab0840ed593098fb55"} Mar 18 09:05:12.257577 master-0 kubenswrapper[28766]: I0318 09:05:12.257560 28766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-bd9677648-tq84g" event={"ID":"e3d66c24-e87e-489f-8474-277b2add6768","Type":"ContainerStarted","Data":"7b41a8fe7360de01c7561668069c56aa5f4182c550f22c465ed5af9e52db53c5"} Mar 18 09:05:12.293678 master-0 kubenswrapper[28766]: I0318 09:05:12.289887 28766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-bd9677648-tq84g" podStartSLOduration=1.289846395 podStartE2EDuration="1.289846395s" podCreationTimestamp="2026-03-18 09:05:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 09:05:12.287943204 +0000 UTC m=+65.302201890" watchObservedRunningTime="2026-03-18 09:05:12.289846395 +0000 UTC m=+65.304105061" Mar 18 09:05:17.698847 master-0 kubenswrapper[28766]: I0318 09:05:17.698777 28766 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openshift-kube-apiserver/installer-4-master-0"] Mar 18 09:05:17.700007 master-0 kubenswrapper[28766]: I0318 09:05:17.699884 28766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-4-master-0" Mar 18 09:05:17.704613 master-0 kubenswrapper[28766]: I0318 09:05:17.704549 28766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-6hjtj" Mar 18 09:05:17.705461 master-0 kubenswrapper[28766]: I0318 09:05:17.704648 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Mar 18 09:05:17.710576 master-0 kubenswrapper[28766]: I0318 09:05:17.708550 28766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-4-master-0"] Mar 18 09:05:17.847613 master-0 kubenswrapper[28766]: I0318 09:05:17.847542 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/19f89ec6-7335-4ab9-bd42-47f35942a483-kube-api-access\") pod \"installer-4-master-0\" (UID: \"19f89ec6-7335-4ab9-bd42-47f35942a483\") " pod="openshift-kube-apiserver/installer-4-master-0" Mar 18 09:05:17.847860 master-0 kubenswrapper[28766]: I0318 09:05:17.847631 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/19f89ec6-7335-4ab9-bd42-47f35942a483-var-lock\") pod \"installer-4-master-0\" (UID: \"19f89ec6-7335-4ab9-bd42-47f35942a483\") " pod="openshift-kube-apiserver/installer-4-master-0" Mar 18 09:05:17.848022 master-0 kubenswrapper[28766]: I0318 09:05:17.847949 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/19f89ec6-7335-4ab9-bd42-47f35942a483-kubelet-dir\") pod \"installer-4-master-0\" 
(UID: \"19f89ec6-7335-4ab9-bd42-47f35942a483\") " pod="openshift-kube-apiserver/installer-4-master-0" Mar 18 09:05:17.949975 master-0 kubenswrapper[28766]: I0318 09:05:17.949774 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/19f89ec6-7335-4ab9-bd42-47f35942a483-kube-api-access\") pod \"installer-4-master-0\" (UID: \"19f89ec6-7335-4ab9-bd42-47f35942a483\") " pod="openshift-kube-apiserver/installer-4-master-0" Mar 18 09:05:17.949975 master-0 kubenswrapper[28766]: I0318 09:05:17.949919 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/19f89ec6-7335-4ab9-bd42-47f35942a483-var-lock\") pod \"installer-4-master-0\" (UID: \"19f89ec6-7335-4ab9-bd42-47f35942a483\") " pod="openshift-kube-apiserver/installer-4-master-0" Mar 18 09:05:17.950337 master-0 kubenswrapper[28766]: I0318 09:05:17.949986 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/19f89ec6-7335-4ab9-bd42-47f35942a483-kubelet-dir\") pod \"installer-4-master-0\" (UID: \"19f89ec6-7335-4ab9-bd42-47f35942a483\") " pod="openshift-kube-apiserver/installer-4-master-0" Mar 18 09:05:17.950337 master-0 kubenswrapper[28766]: I0318 09:05:17.950161 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/19f89ec6-7335-4ab9-bd42-47f35942a483-var-lock\") pod \"installer-4-master-0\" (UID: \"19f89ec6-7335-4ab9-bd42-47f35942a483\") " pod="openshift-kube-apiserver/installer-4-master-0" Mar 18 09:05:17.950337 master-0 kubenswrapper[28766]: I0318 09:05:17.950227 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/19f89ec6-7335-4ab9-bd42-47f35942a483-kubelet-dir\") pod \"installer-4-master-0\" (UID: 
\"19f89ec6-7335-4ab9-bd42-47f35942a483\") " pod="openshift-kube-apiserver/installer-4-master-0" Mar 18 09:05:17.967015 master-0 kubenswrapper[28766]: I0318 09:05:17.966951 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/19f89ec6-7335-4ab9-bd42-47f35942a483-kube-api-access\") pod \"installer-4-master-0\" (UID: \"19f89ec6-7335-4ab9-bd42-47f35942a483\") " pod="openshift-kube-apiserver/installer-4-master-0" Mar 18 09:05:18.064153 master-0 kubenswrapper[28766]: I0318 09:05:18.064072 28766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-4-master-0" Mar 18 09:05:18.512532 master-0 kubenswrapper[28766]: I0318 09:05:18.512400 28766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-4-master-0"] Mar 18 09:05:18.520238 master-0 kubenswrapper[28766]: W0318 09:05:18.520149 28766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod19f89ec6_7335_4ab9_bd42_47f35942a483.slice/crio-88831aa104c16c645382927f85a0779f82c98f3958023a92a39161bce65afefa WatchSource:0}: Error finding container 88831aa104c16c645382927f85a0779f82c98f3958023a92a39161bce65afefa: Status 404 returned error can't find the container with id 88831aa104c16c645382927f85a0779f82c98f3958023a92a39161bce65afefa Mar 18 09:05:19.314456 master-0 kubenswrapper[28766]: I0318 09:05:19.314385 28766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-4-master-0" event={"ID":"19f89ec6-7335-4ab9-bd42-47f35942a483","Type":"ContainerStarted","Data":"86e27218674f2d6031641e0e523f2dc9ad836aca173532c7198ed1f6157cc8c0"} Mar 18 09:05:19.314456 master-0 kubenswrapper[28766]: I0318 09:05:19.314458 28766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-4-master-0" 
event={"ID":"19f89ec6-7335-4ab9-bd42-47f35942a483","Type":"ContainerStarted","Data":"88831aa104c16c645382927f85a0779f82c98f3958023a92a39161bce65afefa"} Mar 18 09:05:19.333055 master-0 kubenswrapper[28766]: I0318 09:05:19.332956 28766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-4-master-0" podStartSLOduration=2.332935073 podStartE2EDuration="2.332935073s" podCreationTimestamp="2026-03-18 09:05:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 09:05:19.331954796 +0000 UTC m=+72.346213462" watchObservedRunningTime="2026-03-18 09:05:19.332935073 +0000 UTC m=+72.347193739" Mar 18 09:05:21.334725 master-0 kubenswrapper[28766]: I0318 09:05:21.334653 28766 patch_prober.go:28] interesting pod/console-5d57b58fd4-tcq7b container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.95:8443/health\": dial tcp 10.128.0.95:8443: connect: connection refused" start-of-body= Mar 18 09:05:21.335325 master-0 kubenswrapper[28766]: I0318 09:05:21.334746 28766 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-5d57b58fd4-tcq7b" podUID="9c577244-74c7-4a1c-8fec-0a89bd7e3ed1" containerName="console" probeResult="failure" output="Get \"https://10.128.0.95:8443/health\": dial tcp 10.128.0.95:8443: connect: connection refused" Mar 18 09:05:21.550486 master-0 kubenswrapper[28766]: I0318 09:05:21.550408 28766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-bd9677648-tq84g" Mar 18 09:05:21.550791 master-0 kubenswrapper[28766]: I0318 09:05:21.550504 28766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-bd9677648-tq84g" Mar 18 09:05:21.552318 master-0 kubenswrapper[28766]: I0318 09:05:21.552280 28766 patch_prober.go:28] interesting pod/console-bd9677648-tq84g 
container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.96:8443/health\": dial tcp 10.128.0.96:8443: connect: connection refused" start-of-body= Mar 18 09:05:21.552403 master-0 kubenswrapper[28766]: I0318 09:05:21.552322 28766 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-bd9677648-tq84g" podUID="e3d66c24-e87e-489f-8474-277b2add6768" containerName="console" probeResult="failure" output="Get \"https://10.128.0.96:8443/health\": dial tcp 10.128.0.96:8443: connect: connection refused" Mar 18 09:05:27.546410 master-0 kubenswrapper[28766]: I0318 09:05:27.546333 28766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-6dd57c659d-b5n72" podUID="49162cd5-4038-4e1b-bbd2-26fbdace96aa" containerName="oauth-openshift" containerID="cri-o://642f6822ad953d936ce5231469d83c2c8abd87f3dda405f474692b8e182b9839" gracePeriod=15 Mar 18 09:05:31.336186 master-0 kubenswrapper[28766]: I0318 09:05:31.336130 28766 patch_prober.go:28] interesting pod/console-5d57b58fd4-tcq7b container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.95:8443/health\": dial tcp 10.128.0.95:8443: connect: connection refused" start-of-body= Mar 18 09:05:31.336995 master-0 kubenswrapper[28766]: I0318 09:05:31.336197 28766 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-5d57b58fd4-tcq7b" podUID="9c577244-74c7-4a1c-8fec-0a89bd7e3ed1" containerName="console" probeResult="failure" output="Get \"https://10.128.0.95:8443/health\": dial tcp 10.128.0.95:8443: connect: connection refused" Mar 18 09:05:31.551467 master-0 kubenswrapper[28766]: I0318 09:05:31.551318 28766 patch_prober.go:28] interesting pod/console-bd9677648-tq84g container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.96:8443/health\": dial tcp 10.128.0.96:8443: connect: connection 
refused" start-of-body= Mar 18 09:05:31.551467 master-0 kubenswrapper[28766]: I0318 09:05:31.551397 28766 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-bd9677648-tq84g" podUID="e3d66c24-e87e-489f-8474-277b2add6768" containerName="console" probeResult="failure" output="Get \"https://10.128.0.96:8443/health\": dial tcp 10.128.0.96:8443: connect: connection refused" Mar 18 09:05:33.428850 master-0 kubenswrapper[28766]: I0318 09:05:33.428678 28766 patch_prober.go:28] interesting pod/oauth-openshift-6dd57c659d-b5n72 container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.128.0.90:6443/healthz\": dial tcp 10.128.0.90:6443: connect: connection refused" start-of-body= Mar 18 09:05:33.428850 master-0 kubenswrapper[28766]: I0318 09:05:33.428777 28766 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-6dd57c659d-b5n72" podUID="49162cd5-4038-4e1b-bbd2-26fbdace96aa" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.128.0.90:6443/healthz\": dial tcp 10.128.0.90:6443: connect: connection refused" Mar 18 09:05:33.436220 master-0 kubenswrapper[28766]: I0318 09:05:33.436149 28766 generic.go:334] "Generic (PLEG): container finished" podID="49162cd5-4038-4e1b-bbd2-26fbdace96aa" containerID="642f6822ad953d936ce5231469d83c2c8abd87f3dda405f474692b8e182b9839" exitCode=0 Mar 18 09:05:33.436220 master-0 kubenswrapper[28766]: I0318 09:05:33.436211 28766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-6dd57c659d-b5n72" event={"ID":"49162cd5-4038-4e1b-bbd2-26fbdace96aa","Type":"ContainerDied","Data":"642f6822ad953d936ce5231469d83c2c8abd87f3dda405f474692b8e182b9839"} Mar 18 09:05:33.862812 master-0 kubenswrapper[28766]: I0318 09:05:33.862757 28766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-6dd57c659d-b5n72" Mar 18 09:05:33.953452 master-0 kubenswrapper[28766]: I0318 09:05:33.953370 28766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49162cd5-4038-4e1b-bbd2-26fbdace96aa-v4-0-config-system-cliconfig\") pod \"49162cd5-4038-4e1b-bbd2-26fbdace96aa\" (UID: \"49162cd5-4038-4e1b-bbd2-26fbdace96aa\") " Mar 18 09:05:33.953452 master-0 kubenswrapper[28766]: I0318 09:05:33.953457 28766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49162cd5-4038-4e1b-bbd2-26fbdace96aa-audit-policies\") pod \"49162cd5-4038-4e1b-bbd2-26fbdace96aa\" (UID: \"49162cd5-4038-4e1b-bbd2-26fbdace96aa\") " Mar 18 09:05:33.953774 master-0 kubenswrapper[28766]: I0318 09:05:33.953479 28766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49162cd5-4038-4e1b-bbd2-26fbdace96aa-v4-0-config-system-service-ca\") pod \"49162cd5-4038-4e1b-bbd2-26fbdace96aa\" (UID: \"49162cd5-4038-4e1b-bbd2-26fbdace96aa\") " Mar 18 09:05:33.953774 master-0 kubenswrapper[28766]: I0318 09:05:33.953516 28766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49162cd5-4038-4e1b-bbd2-26fbdace96aa-v4-0-config-user-template-login\") pod \"49162cd5-4038-4e1b-bbd2-26fbdace96aa\" (UID: \"49162cd5-4038-4e1b-bbd2-26fbdace96aa\") " Mar 18 09:05:33.954104 master-0 kubenswrapper[28766]: I0318 09:05:33.954057 28766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49162cd5-4038-4e1b-bbd2-26fbdace96aa-v4-0-config-user-template-provider-selection\") pod 
\"49162cd5-4038-4e1b-bbd2-26fbdace96aa\" (UID: \"49162cd5-4038-4e1b-bbd2-26fbdace96aa\") " Mar 18 09:05:33.954180 master-0 kubenswrapper[28766]: I0318 09:05:33.954123 28766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49162cd5-4038-4e1b-bbd2-26fbdace96aa-v4-0-config-system-trusted-ca-bundle\") pod \"49162cd5-4038-4e1b-bbd2-26fbdace96aa\" (UID: \"49162cd5-4038-4e1b-bbd2-26fbdace96aa\") " Mar 18 09:05:33.954180 master-0 kubenswrapper[28766]: I0318 09:05:33.954162 28766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w44lb\" (UniqueName: \"kubernetes.io/projected/49162cd5-4038-4e1b-bbd2-26fbdace96aa-kube-api-access-w44lb\") pod \"49162cd5-4038-4e1b-bbd2-26fbdace96aa\" (UID: \"49162cd5-4038-4e1b-bbd2-26fbdace96aa\") " Mar 18 09:05:33.954272 master-0 kubenswrapper[28766]: I0318 09:05:33.954203 28766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49162cd5-4038-4e1b-bbd2-26fbdace96aa-v4-0-config-user-template-error\") pod \"49162cd5-4038-4e1b-bbd2-26fbdace96aa\" (UID: \"49162cd5-4038-4e1b-bbd2-26fbdace96aa\") " Mar 18 09:05:33.954272 master-0 kubenswrapper[28766]: I0318 09:05:33.954255 28766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49162cd5-4038-4e1b-bbd2-26fbdace96aa-v4-0-config-system-router-certs\") pod \"49162cd5-4038-4e1b-bbd2-26fbdace96aa\" (UID: \"49162cd5-4038-4e1b-bbd2-26fbdace96aa\") " Mar 18 09:05:33.954376 master-0 kubenswrapper[28766]: I0318 09:05:33.954321 28766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49162cd5-4038-4e1b-bbd2-26fbdace96aa-v4-0-config-system-session\") pod 
\"49162cd5-4038-4e1b-bbd2-26fbdace96aa\" (UID: \"49162cd5-4038-4e1b-bbd2-26fbdace96aa\") " Mar 18 09:05:33.954376 master-0 kubenswrapper[28766]: I0318 09:05:33.954364 28766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/49162cd5-4038-4e1b-bbd2-26fbdace96aa-audit-dir\") pod \"49162cd5-4038-4e1b-bbd2-26fbdace96aa\" (UID: \"49162cd5-4038-4e1b-bbd2-26fbdace96aa\") " Mar 18 09:05:33.954436 master-0 kubenswrapper[28766]: I0318 09:05:33.954395 28766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49162cd5-4038-4e1b-bbd2-26fbdace96aa-v4-0-config-system-serving-cert\") pod \"49162cd5-4038-4e1b-bbd2-26fbdace96aa\" (UID: \"49162cd5-4038-4e1b-bbd2-26fbdace96aa\") " Mar 18 09:05:33.954483 master-0 kubenswrapper[28766]: I0318 09:05:33.954397 28766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49162cd5-4038-4e1b-bbd2-26fbdace96aa-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49162cd5-4038-4e1b-bbd2-26fbdace96aa" (UID: "49162cd5-4038-4e1b-bbd2-26fbdace96aa"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 09:05:33.954589 master-0 kubenswrapper[28766]: I0318 09:05:33.954448 28766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49162cd5-4038-4e1b-bbd2-26fbdace96aa-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49162cd5-4038-4e1b-bbd2-26fbdace96aa" (UID: "49162cd5-4038-4e1b-bbd2-26fbdace96aa"). InnerVolumeSpecName "v4-0-config-system-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 09:05:33.954589 master-0 kubenswrapper[28766]: I0318 09:05:33.954499 28766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49162cd5-4038-4e1b-bbd2-26fbdace96aa-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49162cd5-4038-4e1b-bbd2-26fbdace96aa" (UID: "49162cd5-4038-4e1b-bbd2-26fbdace96aa"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 09:05:33.954589 master-0 kubenswrapper[28766]: I0318 09:05:33.954432 28766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49162cd5-4038-4e1b-bbd2-26fbdace96aa-v4-0-config-system-ocp-branding-template\") pod \"49162cd5-4038-4e1b-bbd2-26fbdace96aa\" (UID: \"49162cd5-4038-4e1b-bbd2-26fbdace96aa\") " Mar 18 09:05:33.955816 master-0 kubenswrapper[28766]: I0318 09:05:33.954556 28766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/49162cd5-4038-4e1b-bbd2-26fbdace96aa-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "49162cd5-4038-4e1b-bbd2-26fbdace96aa" (UID: "49162cd5-4038-4e1b-bbd2-26fbdace96aa"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 09:05:33.955899 master-0 kubenswrapper[28766]: I0318 09:05:33.955678 28766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49162cd5-4038-4e1b-bbd2-26fbdace96aa-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49162cd5-4038-4e1b-bbd2-26fbdace96aa" (UID: "49162cd5-4038-4e1b-bbd2-26fbdace96aa"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 09:05:33.955899 master-0 kubenswrapper[28766]: I0318 09:05:33.955810 28766 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49162cd5-4038-4e1b-bbd2-26fbdace96aa-v4-0-config-system-cliconfig\") on node \"master-0\" DevicePath \"\"" Mar 18 09:05:33.955968 master-0 kubenswrapper[28766]: I0318 09:05:33.955904 28766 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49162cd5-4038-4e1b-bbd2-26fbdace96aa-audit-policies\") on node \"master-0\" DevicePath \"\"" Mar 18 09:05:33.955968 master-0 kubenswrapper[28766]: I0318 09:05:33.955925 28766 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49162cd5-4038-4e1b-bbd2-26fbdace96aa-v4-0-config-system-service-ca\") on node \"master-0\" DevicePath \"\"" Mar 18 09:05:33.957685 master-0 kubenswrapper[28766]: I0318 09:05:33.957643 28766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49162cd5-4038-4e1b-bbd2-26fbdace96aa-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49162cd5-4038-4e1b-bbd2-26fbdace96aa" (UID: "49162cd5-4038-4e1b-bbd2-26fbdace96aa"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 09:05:33.958192 master-0 kubenswrapper[28766]: I0318 09:05:33.958147 28766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49162cd5-4038-4e1b-bbd2-26fbdace96aa-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49162cd5-4038-4e1b-bbd2-26fbdace96aa" (UID: "49162cd5-4038-4e1b-bbd2-26fbdace96aa"). InnerVolumeSpecName "v4-0-config-user-template-login". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 09:05:33.958421 master-0 kubenswrapper[28766]: I0318 09:05:33.958371 28766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49162cd5-4038-4e1b-bbd2-26fbdace96aa-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49162cd5-4038-4e1b-bbd2-26fbdace96aa" (UID: "49162cd5-4038-4e1b-bbd2-26fbdace96aa"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 09:05:33.958774 master-0 kubenswrapper[28766]: I0318 09:05:33.958725 28766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49162cd5-4038-4e1b-bbd2-26fbdace96aa-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49162cd5-4038-4e1b-bbd2-26fbdace96aa" (UID: "49162cd5-4038-4e1b-bbd2-26fbdace96aa"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 09:05:33.958989 master-0 kubenswrapper[28766]: I0318 09:05:33.958919 28766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49162cd5-4038-4e1b-bbd2-26fbdace96aa-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49162cd5-4038-4e1b-bbd2-26fbdace96aa" (UID: "49162cd5-4038-4e1b-bbd2-26fbdace96aa"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 09:05:33.959097 master-0 kubenswrapper[28766]: I0318 09:05:33.959067 28766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49162cd5-4038-4e1b-bbd2-26fbdace96aa-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49162cd5-4038-4e1b-bbd2-26fbdace96aa" (UID: "49162cd5-4038-4e1b-bbd2-26fbdace96aa"). 
InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 09:05:33.959839 master-0 kubenswrapper[28766]: I0318 09:05:33.959779 28766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49162cd5-4038-4e1b-bbd2-26fbdace96aa-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49162cd5-4038-4e1b-bbd2-26fbdace96aa" (UID: "49162cd5-4038-4e1b-bbd2-26fbdace96aa"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 09:05:33.961131 master-0 kubenswrapper[28766]: I0318 09:05:33.961062 28766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49162cd5-4038-4e1b-bbd2-26fbdace96aa-kube-api-access-w44lb" (OuterVolumeSpecName: "kube-api-access-w44lb") pod "49162cd5-4038-4e1b-bbd2-26fbdace96aa" (UID: "49162cd5-4038-4e1b-bbd2-26fbdace96aa"). InnerVolumeSpecName "kube-api-access-w44lb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 09:05:34.058208 master-0 kubenswrapper[28766]: I0318 09:05:34.058143 28766 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49162cd5-4038-4e1b-bbd2-26fbdace96aa-v4-0-config-system-session\") on node \"master-0\" DevicePath \"\"" Mar 18 09:05:34.058208 master-0 kubenswrapper[28766]: I0318 09:05:34.058202 28766 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/49162cd5-4038-4e1b-bbd2-26fbdace96aa-audit-dir\") on node \"master-0\" DevicePath \"\"" Mar 18 09:05:34.058208 master-0 kubenswrapper[28766]: I0318 09:05:34.058227 28766 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49162cd5-4038-4e1b-bbd2-26fbdace96aa-v4-0-config-system-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 18 09:05:34.058734 master-0 kubenswrapper[28766]: I0318 09:05:34.058248 28766 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49162cd5-4038-4e1b-bbd2-26fbdace96aa-v4-0-config-system-ocp-branding-template\") on node \"master-0\" DevicePath \"\"" Mar 18 09:05:34.058734 master-0 kubenswrapper[28766]: I0318 09:05:34.058273 28766 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49162cd5-4038-4e1b-bbd2-26fbdace96aa-v4-0-config-user-template-login\") on node \"master-0\" DevicePath \"\"" Mar 18 09:05:34.058734 master-0 kubenswrapper[28766]: I0318 09:05:34.058293 28766 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49162cd5-4038-4e1b-bbd2-26fbdace96aa-v4-0-config-user-template-provider-selection\") on node \"master-0\" DevicePath \"\"" Mar 18 09:05:34.058734 master-0 
kubenswrapper[28766]: I0318 09:05:34.058314 28766 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49162cd5-4038-4e1b-bbd2-26fbdace96aa-v4-0-config-system-trusted-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 18 09:05:34.058734 master-0 kubenswrapper[28766]: I0318 09:05:34.058334 28766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w44lb\" (UniqueName: \"kubernetes.io/projected/49162cd5-4038-4e1b-bbd2-26fbdace96aa-kube-api-access-w44lb\") on node \"master-0\" DevicePath \"\"" Mar 18 09:05:34.058734 master-0 kubenswrapper[28766]: I0318 09:05:34.058353 28766 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49162cd5-4038-4e1b-bbd2-26fbdace96aa-v4-0-config-user-template-error\") on node \"master-0\" DevicePath \"\"" Mar 18 09:05:34.058734 master-0 kubenswrapper[28766]: I0318 09:05:34.058372 28766 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49162cd5-4038-4e1b-bbd2-26fbdace96aa-v4-0-config-system-router-certs\") on node \"master-0\" DevicePath \"\"" Mar 18 09:05:34.347907 master-0 kubenswrapper[28766]: I0318 09:05:34.347786 28766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-79657f7847-bxc9l"] Mar 18 09:05:34.348345 master-0 kubenswrapper[28766]: E0318 09:05:34.348293 28766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49162cd5-4038-4e1b-bbd2-26fbdace96aa" containerName="oauth-openshift" Mar 18 09:05:34.348345 master-0 kubenswrapper[28766]: I0318 09:05:34.348323 28766 state_mem.go:107] "Deleted CPUSet assignment" podUID="49162cd5-4038-4e1b-bbd2-26fbdace96aa" containerName="oauth-openshift" Mar 18 09:05:34.348531 master-0 kubenswrapper[28766]: I0318 09:05:34.348501 28766 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="49162cd5-4038-4e1b-bbd2-26fbdace96aa" containerName="oauth-openshift" Mar 18 09:05:34.349197 master-0 kubenswrapper[28766]: I0318 09:05:34.349147 28766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-79657f7847-bxc9l" Mar 18 09:05:34.451726 master-0 kubenswrapper[28766]: I0318 09:05:34.451658 28766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-6dd57c659d-b5n72" event={"ID":"49162cd5-4038-4e1b-bbd2-26fbdace96aa","Type":"ContainerDied","Data":"c8332aa368fd736bbc1e61a8c1d6ac3346d4454be3104b128284b622d9a88886"} Mar 18 09:05:34.451726 master-0 kubenswrapper[28766]: I0318 09:05:34.451731 28766 scope.go:117] "RemoveContainer" containerID="642f6822ad953d936ce5231469d83c2c8abd87f3dda405f474692b8e182b9839" Mar 18 09:05:34.453075 master-0 kubenswrapper[28766]: I0318 09:05:34.453011 28766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-6dd57c659d-b5n72" Mar 18 09:05:34.465112 master-0 kubenswrapper[28766]: I0318 09:05:34.465071 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/a3e7d74a-e02d-419b-b85a-ee0304f06ad4-v4-0-config-user-template-error\") pod \"oauth-openshift-79657f7847-bxc9l\" (UID: \"a3e7d74a-e02d-419b-b85a-ee0304f06ad4\") " pod="openshift-authentication/oauth-openshift-79657f7847-bxc9l" Mar 18 09:05:34.465234 master-0 kubenswrapper[28766]: I0318 09:05:34.465116 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/a3e7d74a-e02d-419b-b85a-ee0304f06ad4-v4-0-config-system-serving-cert\") pod \"oauth-openshift-79657f7847-bxc9l\" (UID: \"a3e7d74a-e02d-419b-b85a-ee0304f06ad4\") " 
pod="openshift-authentication/oauth-openshift-79657f7847-bxc9l" Mar 18 09:05:34.465234 master-0 kubenswrapper[28766]: I0318 09:05:34.465142 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p8x6p\" (UniqueName: \"kubernetes.io/projected/a3e7d74a-e02d-419b-b85a-ee0304f06ad4-kube-api-access-p8x6p\") pod \"oauth-openshift-79657f7847-bxc9l\" (UID: \"a3e7d74a-e02d-419b-b85a-ee0304f06ad4\") " pod="openshift-authentication/oauth-openshift-79657f7847-bxc9l" Mar 18 09:05:34.465234 master-0 kubenswrapper[28766]: I0318 09:05:34.465170 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a3e7d74a-e02d-419b-b85a-ee0304f06ad4-audit-dir\") pod \"oauth-openshift-79657f7847-bxc9l\" (UID: \"a3e7d74a-e02d-419b-b85a-ee0304f06ad4\") " pod="openshift-authentication/oauth-openshift-79657f7847-bxc9l" Mar 18 09:05:34.465234 master-0 kubenswrapper[28766]: I0318 09:05:34.465210 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/a3e7d74a-e02d-419b-b85a-ee0304f06ad4-v4-0-config-system-session\") pod \"oauth-openshift-79657f7847-bxc9l\" (UID: \"a3e7d74a-e02d-419b-b85a-ee0304f06ad4\") " pod="openshift-authentication/oauth-openshift-79657f7847-bxc9l" Mar 18 09:05:34.465234 master-0 kubenswrapper[28766]: I0318 09:05:34.465232 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a3e7d74a-e02d-419b-b85a-ee0304f06ad4-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-79657f7847-bxc9l\" (UID: \"a3e7d74a-e02d-419b-b85a-ee0304f06ad4\") " pod="openshift-authentication/oauth-openshift-79657f7847-bxc9l" Mar 18 09:05:34.465465 master-0 kubenswrapper[28766]: I0318 09:05:34.465338 28766 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/a3e7d74a-e02d-419b-b85a-ee0304f06ad4-v4-0-config-user-template-login\") pod \"oauth-openshift-79657f7847-bxc9l\" (UID: \"a3e7d74a-e02d-419b-b85a-ee0304f06ad4\") " pod="openshift-authentication/oauth-openshift-79657f7847-bxc9l" Mar 18 09:05:34.465465 master-0 kubenswrapper[28766]: I0318 09:05:34.465384 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/a3e7d74a-e02d-419b-b85a-ee0304f06ad4-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-79657f7847-bxc9l\" (UID: \"a3e7d74a-e02d-419b-b85a-ee0304f06ad4\") " pod="openshift-authentication/oauth-openshift-79657f7847-bxc9l" Mar 18 09:05:34.465465 master-0 kubenswrapper[28766]: I0318 09:05:34.465452 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/a3e7d74a-e02d-419b-b85a-ee0304f06ad4-v4-0-config-system-service-ca\") pod \"oauth-openshift-79657f7847-bxc9l\" (UID: \"a3e7d74a-e02d-419b-b85a-ee0304f06ad4\") " pod="openshift-authentication/oauth-openshift-79657f7847-bxc9l" Mar 18 09:05:34.465593 master-0 kubenswrapper[28766]: I0318 09:05:34.465475 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/a3e7d74a-e02d-419b-b85a-ee0304f06ad4-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-79657f7847-bxc9l\" (UID: \"a3e7d74a-e02d-419b-b85a-ee0304f06ad4\") " pod="openshift-authentication/oauth-openshift-79657f7847-bxc9l" Mar 18 09:05:34.465593 master-0 kubenswrapper[28766]: I0318 09:05:34.465526 28766 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/a3e7d74a-e02d-419b-b85a-ee0304f06ad4-v4-0-config-system-cliconfig\") pod \"oauth-openshift-79657f7847-bxc9l\" (UID: \"a3e7d74a-e02d-419b-b85a-ee0304f06ad4\") " pod="openshift-authentication/oauth-openshift-79657f7847-bxc9l" Mar 18 09:05:34.465774 master-0 kubenswrapper[28766]: I0318 09:05:34.465615 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/a3e7d74a-e02d-419b-b85a-ee0304f06ad4-v4-0-config-system-router-certs\") pod \"oauth-openshift-79657f7847-bxc9l\" (UID: \"a3e7d74a-e02d-419b-b85a-ee0304f06ad4\") " pod="openshift-authentication/oauth-openshift-79657f7847-bxc9l" Mar 18 09:05:34.465774 master-0 kubenswrapper[28766]: I0318 09:05:34.465698 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/a3e7d74a-e02d-419b-b85a-ee0304f06ad4-audit-policies\") pod \"oauth-openshift-79657f7847-bxc9l\" (UID: \"a3e7d74a-e02d-419b-b85a-ee0304f06ad4\") " pod="openshift-authentication/oauth-openshift-79657f7847-bxc9l" Mar 18 09:05:34.566945 master-0 kubenswrapper[28766]: I0318 09:05:34.566877 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/a3e7d74a-e02d-419b-b85a-ee0304f06ad4-v4-0-config-system-service-ca\") pod \"oauth-openshift-79657f7847-bxc9l\" (UID: \"a3e7d74a-e02d-419b-b85a-ee0304f06ad4\") " pod="openshift-authentication/oauth-openshift-79657f7847-bxc9l" Mar 18 09:05:34.567323 master-0 kubenswrapper[28766]: I0318 09:05:34.567306 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: 
\"kubernetes.io/secret/a3e7d74a-e02d-419b-b85a-ee0304f06ad4-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-79657f7847-bxc9l\" (UID: \"a3e7d74a-e02d-419b-b85a-ee0304f06ad4\") " pod="openshift-authentication/oauth-openshift-79657f7847-bxc9l" Mar 18 09:05:34.567439 master-0 kubenswrapper[28766]: I0318 09:05:34.567426 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/a3e7d74a-e02d-419b-b85a-ee0304f06ad4-v4-0-config-system-cliconfig\") pod \"oauth-openshift-79657f7847-bxc9l\" (UID: \"a3e7d74a-e02d-419b-b85a-ee0304f06ad4\") " pod="openshift-authentication/oauth-openshift-79657f7847-bxc9l" Mar 18 09:05:34.567630 master-0 kubenswrapper[28766]: I0318 09:05:34.567615 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/a3e7d74a-e02d-419b-b85a-ee0304f06ad4-v4-0-config-system-router-certs\") pod \"oauth-openshift-79657f7847-bxc9l\" (UID: \"a3e7d74a-e02d-419b-b85a-ee0304f06ad4\") " pod="openshift-authentication/oauth-openshift-79657f7847-bxc9l" Mar 18 09:05:34.567921 master-0 kubenswrapper[28766]: I0318 09:05:34.567847 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/a3e7d74a-e02d-419b-b85a-ee0304f06ad4-audit-policies\") pod \"oauth-openshift-79657f7847-bxc9l\" (UID: \"a3e7d74a-e02d-419b-b85a-ee0304f06ad4\") " pod="openshift-authentication/oauth-openshift-79657f7847-bxc9l" Mar 18 09:05:34.568204 master-0 kubenswrapper[28766]: I0318 09:05:34.568169 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/a3e7d74a-e02d-419b-b85a-ee0304f06ad4-v4-0-config-user-template-error\") pod \"oauth-openshift-79657f7847-bxc9l\" (UID: \"a3e7d74a-e02d-419b-b85a-ee0304f06ad4\") " 
pod="openshift-authentication/oauth-openshift-79657f7847-bxc9l" Mar 18 09:05:34.568452 master-0 kubenswrapper[28766]: I0318 09:05:34.568424 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/a3e7d74a-e02d-419b-b85a-ee0304f06ad4-v4-0-config-system-cliconfig\") pod \"oauth-openshift-79657f7847-bxc9l\" (UID: \"a3e7d74a-e02d-419b-b85a-ee0304f06ad4\") " pod="openshift-authentication/oauth-openshift-79657f7847-bxc9l" Mar 18 09:05:34.568578 master-0 kubenswrapper[28766]: I0318 09:05:34.568557 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/a3e7d74a-e02d-419b-b85a-ee0304f06ad4-v4-0-config-system-serving-cert\") pod \"oauth-openshift-79657f7847-bxc9l\" (UID: \"a3e7d74a-e02d-419b-b85a-ee0304f06ad4\") " pod="openshift-authentication/oauth-openshift-79657f7847-bxc9l" Mar 18 09:05:34.568683 master-0 kubenswrapper[28766]: I0318 09:05:34.568633 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/a3e7d74a-e02d-419b-b85a-ee0304f06ad4-v4-0-config-system-service-ca\") pod \"oauth-openshift-79657f7847-bxc9l\" (UID: \"a3e7d74a-e02d-419b-b85a-ee0304f06ad4\") " pod="openshift-authentication/oauth-openshift-79657f7847-bxc9l" Mar 18 09:05:34.568798 master-0 kubenswrapper[28766]: I0318 09:05:34.568594 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p8x6p\" (UniqueName: \"kubernetes.io/projected/a3e7d74a-e02d-419b-b85a-ee0304f06ad4-kube-api-access-p8x6p\") pod \"oauth-openshift-79657f7847-bxc9l\" (UID: \"a3e7d74a-e02d-419b-b85a-ee0304f06ad4\") " pod="openshift-authentication/oauth-openshift-79657f7847-bxc9l" Mar 18 09:05:34.568898 master-0 kubenswrapper[28766]: I0318 09:05:34.568790 28766 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a3e7d74a-e02d-419b-b85a-ee0304f06ad4-audit-dir\") pod \"oauth-openshift-79657f7847-bxc9l\" (UID: \"a3e7d74a-e02d-419b-b85a-ee0304f06ad4\") " pod="openshift-authentication/oauth-openshift-79657f7847-bxc9l" Mar 18 09:05:34.568981 master-0 kubenswrapper[28766]: I0318 09:05:34.568930 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/a3e7d74a-e02d-419b-b85a-ee0304f06ad4-v4-0-config-system-session\") pod \"oauth-openshift-79657f7847-bxc9l\" (UID: \"a3e7d74a-e02d-419b-b85a-ee0304f06ad4\") " pod="openshift-authentication/oauth-openshift-79657f7847-bxc9l" Mar 18 09:05:34.569049 master-0 kubenswrapper[28766]: I0318 09:05:34.568997 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a3e7d74a-e02d-419b-b85a-ee0304f06ad4-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-79657f7847-bxc9l\" (UID: \"a3e7d74a-e02d-419b-b85a-ee0304f06ad4\") " pod="openshift-authentication/oauth-openshift-79657f7847-bxc9l" Mar 18 09:05:34.569049 master-0 kubenswrapper[28766]: I0318 09:05:34.569005 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a3e7d74a-e02d-419b-b85a-ee0304f06ad4-audit-dir\") pod \"oauth-openshift-79657f7847-bxc9l\" (UID: \"a3e7d74a-e02d-419b-b85a-ee0304f06ad4\") " pod="openshift-authentication/oauth-openshift-79657f7847-bxc9l" Mar 18 09:05:34.569193 master-0 kubenswrapper[28766]: I0318 09:05:34.569062 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/a3e7d74a-e02d-419b-b85a-ee0304f06ad4-v4-0-config-user-template-login\") pod \"oauth-openshift-79657f7847-bxc9l\" (UID: \"a3e7d74a-e02d-419b-b85a-ee0304f06ad4\") " 
pod="openshift-authentication/oauth-openshift-79657f7847-bxc9l" Mar 18 09:05:34.569193 master-0 kubenswrapper[28766]: I0318 09:05:34.569120 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/a3e7d74a-e02d-419b-b85a-ee0304f06ad4-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-79657f7847-bxc9l\" (UID: \"a3e7d74a-e02d-419b-b85a-ee0304f06ad4\") " pod="openshift-authentication/oauth-openshift-79657f7847-bxc9l" Mar 18 09:05:34.569503 master-0 kubenswrapper[28766]: I0318 09:05:34.569337 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/a3e7d74a-e02d-419b-b85a-ee0304f06ad4-audit-policies\") pod \"oauth-openshift-79657f7847-bxc9l\" (UID: \"a3e7d74a-e02d-419b-b85a-ee0304f06ad4\") " pod="openshift-authentication/oauth-openshift-79657f7847-bxc9l" Mar 18 09:05:34.571211 master-0 kubenswrapper[28766]: I0318 09:05:34.571127 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a3e7d74a-e02d-419b-b85a-ee0304f06ad4-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-79657f7847-bxc9l\" (UID: \"a3e7d74a-e02d-419b-b85a-ee0304f06ad4\") " pod="openshift-authentication/oauth-openshift-79657f7847-bxc9l" Mar 18 09:05:34.589352 master-0 kubenswrapper[28766]: I0318 09:05:34.583354 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/a3e7d74a-e02d-419b-b85a-ee0304f06ad4-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-79657f7847-bxc9l\" (UID: \"a3e7d74a-e02d-419b-b85a-ee0304f06ad4\") " pod="openshift-authentication/oauth-openshift-79657f7847-bxc9l" Mar 18 09:05:34.589352 master-0 kubenswrapper[28766]: I0318 09:05:34.589192 28766 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/a3e7d74a-e02d-419b-b85a-ee0304f06ad4-v4-0-config-user-template-login\") pod \"oauth-openshift-79657f7847-bxc9l\" (UID: \"a3e7d74a-e02d-419b-b85a-ee0304f06ad4\") " pod="openshift-authentication/oauth-openshift-79657f7847-bxc9l" Mar 18 09:05:34.589352 master-0 kubenswrapper[28766]: I0318 09:05:34.589341 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/a3e7d74a-e02d-419b-b85a-ee0304f06ad4-v4-0-config-system-session\") pod \"oauth-openshift-79657f7847-bxc9l\" (UID: \"a3e7d74a-e02d-419b-b85a-ee0304f06ad4\") " pod="openshift-authentication/oauth-openshift-79657f7847-bxc9l" Mar 18 09:05:34.590365 master-0 kubenswrapper[28766]: I0318 09:05:34.590334 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/a3e7d74a-e02d-419b-b85a-ee0304f06ad4-v4-0-config-system-serving-cert\") pod \"oauth-openshift-79657f7847-bxc9l\" (UID: \"a3e7d74a-e02d-419b-b85a-ee0304f06ad4\") " pod="openshift-authentication/oauth-openshift-79657f7847-bxc9l" Mar 18 09:05:34.592933 master-0 kubenswrapper[28766]: I0318 09:05:34.592817 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/a3e7d74a-e02d-419b-b85a-ee0304f06ad4-v4-0-config-user-template-error\") pod \"oauth-openshift-79657f7847-bxc9l\" (UID: \"a3e7d74a-e02d-419b-b85a-ee0304f06ad4\") " pod="openshift-authentication/oauth-openshift-79657f7847-bxc9l" Mar 18 09:05:34.629475 master-0 kubenswrapper[28766]: I0318 09:05:34.629335 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/a3e7d74a-e02d-419b-b85a-ee0304f06ad4-v4-0-config-system-router-certs\") 
pod \"oauth-openshift-79657f7847-bxc9l\" (UID: \"a3e7d74a-e02d-419b-b85a-ee0304f06ad4\") " pod="openshift-authentication/oauth-openshift-79657f7847-bxc9l" Mar 18 09:05:34.633450 master-0 kubenswrapper[28766]: I0318 09:05:34.631557 28766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-79657f7847-bxc9l"] Mar 18 09:05:34.633450 master-0 kubenswrapper[28766]: I0318 09:05:34.584892 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/a3e7d74a-e02d-419b-b85a-ee0304f06ad4-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-79657f7847-bxc9l\" (UID: \"a3e7d74a-e02d-419b-b85a-ee0304f06ad4\") " pod="openshift-authentication/oauth-openshift-79657f7847-bxc9l" Mar 18 09:05:34.651005 master-0 kubenswrapper[28766]: I0318 09:05:34.647293 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p8x6p\" (UniqueName: \"kubernetes.io/projected/a3e7d74a-e02d-419b-b85a-ee0304f06ad4-kube-api-access-p8x6p\") pod \"oauth-openshift-79657f7847-bxc9l\" (UID: \"a3e7d74a-e02d-419b-b85a-ee0304f06ad4\") " pod="openshift-authentication/oauth-openshift-79657f7847-bxc9l" Mar 18 09:05:34.691020 master-0 kubenswrapper[28766]: I0318 09:05:34.690941 28766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-79657f7847-bxc9l" Mar 18 09:05:35.245635 master-0 kubenswrapper[28766]: I0318 09:05:35.245550 28766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-6dd57c659d-b5n72"] Mar 18 09:05:35.468967 master-0 kubenswrapper[28766]: I0318 09:05:35.467970 28766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-6dd57c659d-b5n72"] Mar 18 09:05:36.207576 master-0 kubenswrapper[28766]: I0318 09:05:36.207491 28766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/installer-4-master-0"] Mar 18 09:05:36.208042 master-0 kubenswrapper[28766]: I0318 09:05:36.207810 28766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/installer-4-master-0" podUID="19f89ec6-7335-4ab9-bd42-47f35942a483" containerName="installer" containerID="cri-o://86e27218674f2d6031641e0e523f2dc9ad836aca173532c7198ed1f6157cc8c0" gracePeriod=30 Mar 18 09:05:36.487276 master-0 kubenswrapper[28766]: I0318 09:05:36.487192 28766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-66b8ffb895-mjnxk" event={"ID":"0aeda1f0-6438-4d96-becd-e0cd833e99d5","Type":"ContainerStarted","Data":"90633a64280260dca6f68b9d5782ab2493725233d8db6c57f2c6b8ca8ddff94f"} Mar 18 09:05:36.897636 master-0 kubenswrapper[28766]: I0318 09:05:36.895751 28766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-79657f7847-bxc9l"] Mar 18 09:05:36.901226 master-0 kubenswrapper[28766]: W0318 09:05:36.901113 28766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda3e7d74a_e02d_419b_b85a_ee0304f06ad4.slice/crio-99ac85c86c32fd85637fe3431172845e9d0a0a1949e5b1f13e523f28b78c8cee WatchSource:0}: Error finding container 
99ac85c86c32fd85637fe3431172845e9d0a0a1949e5b1f13e523f28b78c8cee: Status 404 returned error can't find the container with id 99ac85c86c32fd85637fe3431172845e9d0a0a1949e5b1f13e523f28b78c8cee Mar 18 09:05:37.251924 master-0 kubenswrapper[28766]: I0318 09:05:37.251816 28766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49162cd5-4038-4e1b-bbd2-26fbdace96aa" path="/var/lib/kubelet/pods/49162cd5-4038-4e1b-bbd2-26fbdace96aa/volumes" Mar 18 09:05:37.496038 master-0 kubenswrapper[28766]: I0318 09:05:37.495912 28766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-79657f7847-bxc9l" event={"ID":"a3e7d74a-e02d-419b-b85a-ee0304f06ad4","Type":"ContainerStarted","Data":"99ac85c86c32fd85637fe3431172845e9d0a0a1949e5b1f13e523f28b78c8cee"} Mar 18 09:05:37.497113 master-0 kubenswrapper[28766]: I0318 09:05:37.496276 28766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-66b8ffb895-mjnxk" Mar 18 09:05:37.498513 master-0 kubenswrapper[28766]: I0318 09:05:37.498456 28766 patch_prober.go:28] interesting pod/downloads-66b8ffb895-mjnxk container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.128.0.92:8080/\": dial tcp 10.128.0.92:8080: connect: connection refused" start-of-body= Mar 18 09:05:37.498628 master-0 kubenswrapper[28766]: I0318 09:05:37.498521 28766 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-66b8ffb895-mjnxk" podUID="0aeda1f0-6438-4d96-becd-e0cd833e99d5" containerName="download-server" probeResult="failure" output="Get \"http://10.128.0.92:8080/\": dial tcp 10.128.0.92:8080: connect: connection refused" Mar 18 09:05:37.890542 master-0 kubenswrapper[28766]: I0318 09:05:37.890336 28766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-66b8ffb895-mjnxk" podStartSLOduration=3.3881497879999998 
podStartE2EDuration="42.890295134s" podCreationTimestamp="2026-03-18 09:04:55 +0000 UTC" firstStartedPulling="2026-03-18 09:04:56.139263149 +0000 UTC m=+49.153521815" lastFinishedPulling="2026-03-18 09:05:35.641408495 +0000 UTC m=+88.655667161" observedRunningTime="2026-03-18 09:05:37.887778996 +0000 UTC m=+90.902037702" watchObservedRunningTime="2026-03-18 09:05:37.890295134 +0000 UTC m=+90.904553790" Mar 18 09:05:38.506442 master-0 kubenswrapper[28766]: I0318 09:05:38.506343 28766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-79657f7847-bxc9l" event={"ID":"a3e7d74a-e02d-419b-b85a-ee0304f06ad4","Type":"ContainerStarted","Data":"c9e2f500273422f79c620aa09aca742ac919d825eb5a0d144261c9384c717e9a"} Mar 18 09:05:38.507188 master-0 kubenswrapper[28766]: I0318 09:05:38.506832 28766 patch_prober.go:28] interesting pod/downloads-66b8ffb895-mjnxk container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.128.0.92:8080/\": dial tcp 10.128.0.92:8080: connect: connection refused" start-of-body= Mar 18 09:05:38.507188 master-0 kubenswrapper[28766]: I0318 09:05:38.506972 28766 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-66b8ffb895-mjnxk" podUID="0aeda1f0-6438-4d96-becd-e0cd833e99d5" containerName="download-server" probeResult="failure" output="Get \"http://10.128.0.92:8080/\": dial tcp 10.128.0.92:8080: connect: connection refused" Mar 18 09:05:38.507188 master-0 kubenswrapper[28766]: I0318 09:05:38.507117 28766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-79657f7847-bxc9l" Mar 18 09:05:38.512463 master-0 kubenswrapper[28766]: I0318 09:05:38.512415 28766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-79657f7847-bxc9l" Mar 18 09:05:38.531196 master-0 kubenswrapper[28766]: I0318 09:05:38.531077 28766 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-79657f7847-bxc9l" podStartSLOduration=16.531052213 podStartE2EDuration="16.531052213s" podCreationTimestamp="2026-03-18 09:05:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 09:05:38.528252027 +0000 UTC m=+91.542510693" watchObservedRunningTime="2026-03-18 09:05:38.531052213 +0000 UTC m=+91.545310899"
Mar 18 09:05:39.490795 master-0 kubenswrapper[28766]: I0318 09:05:39.490737 28766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-5-master-0"]
Mar 18 09:05:39.491610 master-0 kubenswrapper[28766]: I0318 09:05:39.491592 28766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-5-master-0"
Mar 18 09:05:39.511960 master-0 kubenswrapper[28766]: I0318 09:05:39.511911 28766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-5-master-0"]
Mar 18 09:05:39.566032 master-0 kubenswrapper[28766]: I0318 09:05:39.565971 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b5d596ea-c73d-4619-b3a5-fd52d3bebedd-kube-api-access\") pod \"installer-5-master-0\" (UID: \"b5d596ea-c73d-4619-b3a5-fd52d3bebedd\") " pod="openshift-kube-apiserver/installer-5-master-0"
Mar 18 09:05:39.566278 master-0 kubenswrapper[28766]: I0318 09:05:39.566256 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b5d596ea-c73d-4619-b3a5-fd52d3bebedd-kubelet-dir\") pod \"installer-5-master-0\" (UID: \"b5d596ea-c73d-4619-b3a5-fd52d3bebedd\") " pod="openshift-kube-apiserver/installer-5-master-0"
Mar 18 09:05:39.566419 master-0 kubenswrapper[28766]: I0318 09:05:39.566373 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/b5d596ea-c73d-4619-b3a5-fd52d3bebedd-var-lock\") pod \"installer-5-master-0\" (UID: \"b5d596ea-c73d-4619-b3a5-fd52d3bebedd\") " pod="openshift-kube-apiserver/installer-5-master-0"
Mar 18 09:05:39.668182 master-0 kubenswrapper[28766]: I0318 09:05:39.668100 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b5d596ea-c73d-4619-b3a5-fd52d3bebedd-kube-api-access\") pod \"installer-5-master-0\" (UID: \"b5d596ea-c73d-4619-b3a5-fd52d3bebedd\") " pod="openshift-kube-apiserver/installer-5-master-0"
Mar 18 09:05:39.668459 master-0 kubenswrapper[28766]: I0318 09:05:39.668217 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b5d596ea-c73d-4619-b3a5-fd52d3bebedd-kubelet-dir\") pod \"installer-5-master-0\" (UID: \"b5d596ea-c73d-4619-b3a5-fd52d3bebedd\") " pod="openshift-kube-apiserver/installer-5-master-0"
Mar 18 09:05:39.668459 master-0 kubenswrapper[28766]: I0318 09:05:39.668278 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/b5d596ea-c73d-4619-b3a5-fd52d3bebedd-var-lock\") pod \"installer-5-master-0\" (UID: \"b5d596ea-c73d-4619-b3a5-fd52d3bebedd\") " pod="openshift-kube-apiserver/installer-5-master-0"
Mar 18 09:05:39.668541 master-0 kubenswrapper[28766]: I0318 09:05:39.668475 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/b5d596ea-c73d-4619-b3a5-fd52d3bebedd-var-lock\") pod \"installer-5-master-0\" (UID: \"b5d596ea-c73d-4619-b3a5-fd52d3bebedd\") " pod="openshift-kube-apiserver/installer-5-master-0"
Mar 18 09:05:39.669076 master-0 kubenswrapper[28766]: I0318 09:05:39.669034 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b5d596ea-c73d-4619-b3a5-fd52d3bebedd-kubelet-dir\") pod \"installer-5-master-0\" (UID: \"b5d596ea-c73d-4619-b3a5-fd52d3bebedd\") " pod="openshift-kube-apiserver/installer-5-master-0"
Mar 18 09:05:39.693564 master-0 kubenswrapper[28766]: I0318 09:05:39.693499 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b5d596ea-c73d-4619-b3a5-fd52d3bebedd-kube-api-access\") pod \"installer-5-master-0\" (UID: \"b5d596ea-c73d-4619-b3a5-fd52d3bebedd\") " pod="openshift-kube-apiserver/installer-5-master-0"
Mar 18 09:05:39.826557 master-0 kubenswrapper[28766]: I0318 09:05:39.826393 28766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-5-master-0"
Mar 18 09:05:41.153082 master-0 kubenswrapper[28766]: I0318 09:05:41.152797 28766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-5-master-0"]
Mar 18 09:05:41.160628 master-0 kubenswrapper[28766]: W0318 09:05:41.160536 28766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podb5d596ea_c73d_4619_b3a5_fd52d3bebedd.slice/crio-e73afc93fb9b0bddb39adc1514581fbef6f1b62a1f557d618c36e67b1eb65a42 WatchSource:0}: Error finding container e73afc93fb9b0bddb39adc1514581fbef6f1b62a1f557d618c36e67b1eb65a42: Status 404 returned error can't find the container with id e73afc93fb9b0bddb39adc1514581fbef6f1b62a1f557d618c36e67b1eb65a42
Mar 18 09:05:41.334700 master-0 kubenswrapper[28766]: I0318 09:05:41.334615 28766 patch_prober.go:28] interesting pod/console-5d57b58fd4-tcq7b container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.95:8443/health\": dial tcp 10.128.0.95:8443: connect: connection refused" start-of-body=
Mar 18 09:05:41.335045 master-0 kubenswrapper[28766]: I0318 09:05:41.334700 28766 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-5d57b58fd4-tcq7b" podUID="9c577244-74c7-4a1c-8fec-0a89bd7e3ed1" containerName="console" probeResult="failure" output="Get \"https://10.128.0.95:8443/health\": dial tcp 10.128.0.95:8443: connect: connection refused"
Mar 18 09:05:41.550777 master-0 kubenswrapper[28766]: I0318 09:05:41.550677 28766 patch_prober.go:28] interesting pod/console-bd9677648-tq84g container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.96:8443/health\": dial tcp 10.128.0.96:8443: connect: connection refused" start-of-body=
Mar 18 09:05:41.551205 master-0 kubenswrapper[28766]: I0318 09:05:41.550793 28766 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-bd9677648-tq84g" podUID="e3d66c24-e87e-489f-8474-277b2add6768" containerName="console" probeResult="failure" output="Get \"https://10.128.0.96:8443/health\": dial tcp 10.128.0.96:8443: connect: connection refused"
Mar 18 09:05:41.561482 master-0 kubenswrapper[28766]: I0318 09:05:41.561407 28766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-5-master-0" event={"ID":"b5d596ea-c73d-4619-b3a5-fd52d3bebedd","Type":"ContainerStarted","Data":"e73afc93fb9b0bddb39adc1514581fbef6f1b62a1f557d618c36e67b1eb65a42"}
Mar 18 09:05:42.571298 master-0 kubenswrapper[28766]: I0318 09:05:42.571226 28766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-5-master-0" event={"ID":"b5d596ea-c73d-4619-b3a5-fd52d3bebedd","Type":"ContainerStarted","Data":"524a8cb4f79e426f6f698c7428a6ba7258d080a1b3b794a6c76b004e1c1dad11"}
Mar 18 09:05:42.611973 master-0 kubenswrapper[28766]: I0318 09:05:42.611846 28766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-5-master-0" podStartSLOduration=3.611816593 podStartE2EDuration="3.611816593s" podCreationTimestamp="2026-03-18 09:05:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 09:05:42.60690667 +0000 UTC m=+95.621165336" watchObservedRunningTime="2026-03-18 09:05:42.611816593 +0000 UTC m=+95.626075259"
Mar 18 09:05:45.695084 master-0 kubenswrapper[28766]: I0318 09:05:45.694995 28766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-66b8ffb895-mjnxk"
Mar 18 09:05:50.638883 master-0 kubenswrapper[28766]: I0318 09:05:50.638823 28766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-4-master-0_19f89ec6-7335-4ab9-bd42-47f35942a483/installer/0.log"
Mar 18 09:05:50.639609 master-0 kubenswrapper[28766]: I0318 09:05:50.638899 28766 generic.go:334] "Generic (PLEG): container finished" podID="19f89ec6-7335-4ab9-bd42-47f35942a483" containerID="86e27218674f2d6031641e0e523f2dc9ad836aca173532c7198ed1f6157cc8c0" exitCode=1
Mar 18 09:05:50.639609 master-0 kubenswrapper[28766]: I0318 09:05:50.638951 28766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-4-master-0" event={"ID":"19f89ec6-7335-4ab9-bd42-47f35942a483","Type":"ContainerDied","Data":"86e27218674f2d6031641e0e523f2dc9ad836aca173532c7198ed1f6157cc8c0"}
Mar 18 09:05:51.202043 master-0 kubenswrapper[28766]: I0318 09:05:51.201981 28766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-4-master-0_19f89ec6-7335-4ab9-bd42-47f35942a483/installer/0.log"
Mar 18 09:05:51.202336 master-0 kubenswrapper[28766]: I0318 09:05:51.202065 28766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-4-master-0"
Mar 18 09:05:51.281998 master-0 kubenswrapper[28766]: I0318 09:05:51.281927 28766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/19f89ec6-7335-4ab9-bd42-47f35942a483-var-lock\") pod \"19f89ec6-7335-4ab9-bd42-47f35942a483\" (UID: \"19f89ec6-7335-4ab9-bd42-47f35942a483\") "
Mar 18 09:05:51.282272 master-0 kubenswrapper[28766]: I0318 09:05:51.282071 28766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/19f89ec6-7335-4ab9-bd42-47f35942a483-kubelet-dir\") pod \"19f89ec6-7335-4ab9-bd42-47f35942a483\" (UID: \"19f89ec6-7335-4ab9-bd42-47f35942a483\") "
Mar 18 09:05:51.282272 master-0 kubenswrapper[28766]: I0318 09:05:51.282105 28766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/19f89ec6-7335-4ab9-bd42-47f35942a483-var-lock" (OuterVolumeSpecName: "var-lock") pod "19f89ec6-7335-4ab9-bd42-47f35942a483" (UID: "19f89ec6-7335-4ab9-bd42-47f35942a483"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 18 09:05:51.282272 master-0 kubenswrapper[28766]: I0318 09:05:51.282162 28766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/19f89ec6-7335-4ab9-bd42-47f35942a483-kube-api-access\") pod \"19f89ec6-7335-4ab9-bd42-47f35942a483\" (UID: \"19f89ec6-7335-4ab9-bd42-47f35942a483\") "
Mar 18 09:05:51.282272 master-0 kubenswrapper[28766]: I0318 09:05:51.282218 28766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/19f89ec6-7335-4ab9-bd42-47f35942a483-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "19f89ec6-7335-4ab9-bd42-47f35942a483" (UID: "19f89ec6-7335-4ab9-bd42-47f35942a483"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 18 09:05:51.282548 master-0 kubenswrapper[28766]: I0318 09:05:51.282506 28766 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/19f89ec6-7335-4ab9-bd42-47f35942a483-kubelet-dir\") on node \"master-0\" DevicePath \"\""
Mar 18 09:05:51.282548 master-0 kubenswrapper[28766]: I0318 09:05:51.282547 28766 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/19f89ec6-7335-4ab9-bd42-47f35942a483-var-lock\") on node \"master-0\" DevicePath \"\""
Mar 18 09:05:51.285647 master-0 kubenswrapper[28766]: I0318 09:05:51.285607 28766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/19f89ec6-7335-4ab9-bd42-47f35942a483-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "19f89ec6-7335-4ab9-bd42-47f35942a483" (UID: "19f89ec6-7335-4ab9-bd42-47f35942a483"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 18 09:05:51.334909 master-0 kubenswrapper[28766]: I0318 09:05:51.334769 28766 patch_prober.go:28] interesting pod/console-5d57b58fd4-tcq7b container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.95:8443/health\": dial tcp 10.128.0.95:8443: connect: connection refused" start-of-body=
Mar 18 09:05:51.334909 master-0 kubenswrapper[28766]: I0318 09:05:51.334842 28766 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-5d57b58fd4-tcq7b" podUID="9c577244-74c7-4a1c-8fec-0a89bd7e3ed1" containerName="console" probeResult="failure" output="Get \"https://10.128.0.95:8443/health\": dial tcp 10.128.0.95:8443: connect: connection refused"
Mar 18 09:05:51.385121 master-0 kubenswrapper[28766]: I0318 09:05:51.384822 28766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/19f89ec6-7335-4ab9-bd42-47f35942a483-kube-api-access\") on node \"master-0\" DevicePath \"\""
Mar 18 09:05:51.551261 master-0 kubenswrapper[28766]: I0318 09:05:51.551125 28766 patch_prober.go:28] interesting pod/console-bd9677648-tq84g container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.96:8443/health\": dial tcp 10.128.0.96:8443: connect: connection refused" start-of-body=
Mar 18 09:05:51.551261 master-0 kubenswrapper[28766]: I0318 09:05:51.551225 28766 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-bd9677648-tq84g" podUID="e3d66c24-e87e-489f-8474-277b2add6768" containerName="console" probeResult="failure" output="Get \"https://10.128.0.96:8443/health\": dial tcp 10.128.0.96:8443: connect: connection refused"
Mar 18 09:05:51.650165 master-0 kubenswrapper[28766]: I0318 09:05:51.650102 28766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-4-master-0_19f89ec6-7335-4ab9-bd42-47f35942a483/installer/0.log"
Mar 18 09:05:51.651045 master-0 kubenswrapper[28766]: I0318 09:05:51.650184 28766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-4-master-0" event={"ID":"19f89ec6-7335-4ab9-bd42-47f35942a483","Type":"ContainerDied","Data":"88831aa104c16c645382927f85a0779f82c98f3958023a92a39161bce65afefa"}
Mar 18 09:05:51.651045 master-0 kubenswrapper[28766]: I0318 09:05:51.650234 28766 scope.go:117] "RemoveContainer" containerID="86e27218674f2d6031641e0e523f2dc9ad836aca173532c7198ed1f6157cc8c0"
Mar 18 09:05:51.651476 master-0 kubenswrapper[28766]: I0318 09:05:51.651351 28766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-4-master-0"
Mar 18 09:05:52.452313 master-0 kubenswrapper[28766]: I0318 09:05:52.452208 28766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/installer-4-master-0"]
Mar 18 09:05:52.473502 master-0 kubenswrapper[28766]: I0318 09:05:52.473423 28766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/installer-4-master-0"]
Mar 18 09:05:53.252126 master-0 kubenswrapper[28766]: I0318 09:05:53.252022 28766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="19f89ec6-7335-4ab9-bd42-47f35942a483" path="/var/lib/kubelet/pods/19f89ec6-7335-4ab9-bd42-47f35942a483/volumes"
Mar 18 09:06:01.335027 master-0 kubenswrapper[28766]: I0318 09:06:01.334832 28766 patch_prober.go:28] interesting pod/console-5d57b58fd4-tcq7b container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.95:8443/health\": dial tcp 10.128.0.95:8443: connect: connection refused" start-of-body=
Mar 18 09:06:01.336479 master-0 kubenswrapper[28766]: I0318 09:06:01.335066 28766 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-5d57b58fd4-tcq7b" podUID="9c577244-74c7-4a1c-8fec-0a89bd7e3ed1" containerName="console" probeResult="failure" output="Get \"https://10.128.0.95:8443/health\": dial tcp 10.128.0.95:8443: connect: connection refused"
Mar 18 09:06:01.550641 master-0 kubenswrapper[28766]: I0318 09:06:01.550541 28766 patch_prober.go:28] interesting pod/console-bd9677648-tq84g container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.96:8443/health\": dial tcp 10.128.0.96:8443: connect: connection refused" start-of-body=
Mar 18 09:06:01.551252 master-0 kubenswrapper[28766]: I0318 09:06:01.550645 28766 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-bd9677648-tq84g" podUID="e3d66c24-e87e-489f-8474-277b2add6768" containerName="console" probeResult="failure" output="Get \"https://10.128.0.96:8443/health\": dial tcp 10.128.0.96:8443: connect: connection refused"
Mar 18 09:06:11.335431 master-0 kubenswrapper[28766]: I0318 09:06:11.335331 28766 patch_prober.go:28] interesting pod/console-5d57b58fd4-tcq7b container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.95:8443/health\": dial tcp 10.128.0.95:8443: connect: connection refused" start-of-body=
Mar 18 09:06:11.336536 master-0 kubenswrapper[28766]: I0318 09:06:11.335437 28766 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-5d57b58fd4-tcq7b" podUID="9c577244-74c7-4a1c-8fec-0a89bd7e3ed1" containerName="console" probeResult="failure" output="Get \"https://10.128.0.95:8443/health\": dial tcp 10.128.0.95:8443: connect: connection refused"
Mar 18 09:06:11.551238 master-0 kubenswrapper[28766]: I0318 09:06:11.551161 28766 patch_prober.go:28] interesting pod/console-bd9677648-tq84g container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.96:8443/health\": dial tcp 10.128.0.96:8443: connect: connection refused" start-of-body=
Mar 18 09:06:11.551522 master-0 kubenswrapper[28766]: I0318 09:06:11.551293 28766 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-bd9677648-tq84g" podUID="e3d66c24-e87e-489f-8474-277b2add6768" containerName="console" probeResult="failure" output="Get \"https://10.128.0.96:8443/health\": dial tcp 10.128.0.96:8443: connect: connection refused"
Mar 18 09:06:21.335306 master-0 kubenswrapper[28766]: I0318 09:06:21.335184 28766 patch_prober.go:28] interesting pod/console-5d57b58fd4-tcq7b container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.95:8443/health\": dial tcp 10.128.0.95:8443: connect: connection refused" start-of-body=
Mar 18 09:06:21.336689 master-0 kubenswrapper[28766]: I0318 09:06:21.335301 28766 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-5d57b58fd4-tcq7b" podUID="9c577244-74c7-4a1c-8fec-0a89bd7e3ed1" containerName="console" probeResult="failure" output="Get \"https://10.128.0.95:8443/health\": dial tcp 10.128.0.95:8443: connect: connection refused"
Mar 18 09:06:21.551606 master-0 kubenswrapper[28766]: I0318 09:06:21.551510 28766 patch_prober.go:28] interesting pod/console-bd9677648-tq84g container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.96:8443/health\": dial tcp 10.128.0.96:8443: connect: connection refused" start-of-body=
Mar 18 09:06:21.551942 master-0 kubenswrapper[28766]: I0318 09:06:21.551612 28766 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-bd9677648-tq84g" podUID="e3d66c24-e87e-489f-8474-277b2add6768" containerName="console" probeResult="failure" output="Get \"https://10.128.0.96:8443/health\": dial tcp 10.128.0.96:8443: connect: connection refused"
Mar 18 09:06:25.560648 master-0 kubenswrapper[28766]: I0318 09:06:25.560549 28766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-5d57b58fd4-tcq7b"]
Mar 18 09:06:25.604599 master-0 kubenswrapper[28766]: I0318 09:06:25.604519 28766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-5644577ff9-fncm4"]
Mar 18 09:06:25.605137 master-0 kubenswrapper[28766]: E0318 09:06:25.605051 28766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19f89ec6-7335-4ab9-bd42-47f35942a483" containerName="installer"
Mar 18 09:06:25.605137 master-0 kubenswrapper[28766]: I0318 09:06:25.605088 28766 state_mem.go:107] "Deleted CPUSet assignment" podUID="19f89ec6-7335-4ab9-bd42-47f35942a483" containerName="installer"
Mar 18 09:06:25.605389 master-0 kubenswrapper[28766]: I0318 09:06:25.605350 28766 memory_manager.go:354] "RemoveStaleState removing state" podUID="19f89ec6-7335-4ab9-bd42-47f35942a483" containerName="installer"
Mar 18 09:06:25.606181 master-0 kubenswrapper[28766]: I0318 09:06:25.606150 28766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-5644577ff9-fncm4"
Mar 18 09:06:25.625638 master-0 kubenswrapper[28766]: I0318 09:06:25.625077 28766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-5644577ff9-fncm4"]
Mar 18 09:06:25.777776 master-0 kubenswrapper[28766]: I0318 09:06:25.777667 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/adbe8207-26d0-4d0e-aacc-5f321184b53c-service-ca\") pod \"console-5644577ff9-fncm4\" (UID: \"adbe8207-26d0-4d0e-aacc-5f321184b53c\") " pod="openshift-console/console-5644577ff9-fncm4"
Mar 18 09:06:25.778101 master-0 kubenswrapper[28766]: I0318 09:06:25.778004 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/adbe8207-26d0-4d0e-aacc-5f321184b53c-console-serving-cert\") pod \"console-5644577ff9-fncm4\" (UID: \"adbe8207-26d0-4d0e-aacc-5f321184b53c\") " pod="openshift-console/console-5644577ff9-fncm4"
Mar 18 09:06:25.778101 master-0 kubenswrapper[28766]: I0318 09:06:25.778073 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/adbe8207-26d0-4d0e-aacc-5f321184b53c-trusted-ca-bundle\") pod \"console-5644577ff9-fncm4\" (UID: \"adbe8207-26d0-4d0e-aacc-5f321184b53c\") " pod="openshift-console/console-5644577ff9-fncm4"
Mar 18 09:06:25.778249 master-0 kubenswrapper[28766]: I0318 09:06:25.778213 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/adbe8207-26d0-4d0e-aacc-5f321184b53c-oauth-serving-cert\") pod \"console-5644577ff9-fncm4\" (UID: \"adbe8207-26d0-4d0e-aacc-5f321184b53c\") " pod="openshift-console/console-5644577ff9-fncm4"
Mar 18 09:06:25.778316 master-0 kubenswrapper[28766]: I0318 09:06:25.778275 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5tnwj\" (UniqueName: \"kubernetes.io/projected/adbe8207-26d0-4d0e-aacc-5f321184b53c-kube-api-access-5tnwj\") pod \"console-5644577ff9-fncm4\" (UID: \"adbe8207-26d0-4d0e-aacc-5f321184b53c\") " pod="openshift-console/console-5644577ff9-fncm4"
Mar 18 09:06:25.778316 master-0 kubenswrapper[28766]: I0318 09:06:25.778307 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/adbe8207-26d0-4d0e-aacc-5f321184b53c-console-config\") pod \"console-5644577ff9-fncm4\" (UID: \"adbe8207-26d0-4d0e-aacc-5f321184b53c\") " pod="openshift-console/console-5644577ff9-fncm4"
Mar 18 09:06:25.778376 master-0 kubenswrapper[28766]: I0318 09:06:25.778330 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/adbe8207-26d0-4d0e-aacc-5f321184b53c-console-oauth-config\") pod \"console-5644577ff9-fncm4\" (UID: \"adbe8207-26d0-4d0e-aacc-5f321184b53c\") " pod="openshift-console/console-5644577ff9-fncm4"
Mar 18 09:06:25.880506 master-0 kubenswrapper[28766]: I0318 09:06:25.880339 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/adbe8207-26d0-4d0e-aacc-5f321184b53c-console-serving-cert\") pod \"console-5644577ff9-fncm4\" (UID: \"adbe8207-26d0-4d0e-aacc-5f321184b53c\") " pod="openshift-console/console-5644577ff9-fncm4"
Mar 18 09:06:25.880506 master-0 kubenswrapper[28766]: I0318 09:06:25.880413 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/adbe8207-26d0-4d0e-aacc-5f321184b53c-trusted-ca-bundle\") pod \"console-5644577ff9-fncm4\" (UID: \"adbe8207-26d0-4d0e-aacc-5f321184b53c\") " pod="openshift-console/console-5644577ff9-fncm4"
Mar 18 09:06:25.880808 master-0 kubenswrapper[28766]: I0318 09:06:25.880601 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/adbe8207-26d0-4d0e-aacc-5f321184b53c-oauth-serving-cert\") pod \"console-5644577ff9-fncm4\" (UID: \"adbe8207-26d0-4d0e-aacc-5f321184b53c\") " pod="openshift-console/console-5644577ff9-fncm4"
Mar 18 09:06:25.880808 master-0 kubenswrapper[28766]: I0318 09:06:25.880686 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5tnwj\" (UniqueName: \"kubernetes.io/projected/adbe8207-26d0-4d0e-aacc-5f321184b53c-kube-api-access-5tnwj\") pod \"console-5644577ff9-fncm4\" (UID: \"adbe8207-26d0-4d0e-aacc-5f321184b53c\") " pod="openshift-console/console-5644577ff9-fncm4"
Mar 18 09:06:25.880808 master-0 kubenswrapper[28766]: I0318 09:06:25.880722 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/adbe8207-26d0-4d0e-aacc-5f321184b53c-console-config\") pod \"console-5644577ff9-fncm4\" (UID: \"adbe8207-26d0-4d0e-aacc-5f321184b53c\") " pod="openshift-console/console-5644577ff9-fncm4"
Mar 18 09:06:25.880808 master-0 kubenswrapper[28766]: I0318 09:06:25.880754 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/adbe8207-26d0-4d0e-aacc-5f321184b53c-console-oauth-config\") pod \"console-5644577ff9-fncm4\" (UID: \"adbe8207-26d0-4d0e-aacc-5f321184b53c\") " pod="openshift-console/console-5644577ff9-fncm4"
Mar 18 09:06:25.880972 master-0 kubenswrapper[28766]: I0318 09:06:25.880912 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/adbe8207-26d0-4d0e-aacc-5f321184b53c-service-ca\") pod \"console-5644577ff9-fncm4\" (UID: \"adbe8207-26d0-4d0e-aacc-5f321184b53c\") " pod="openshift-console/console-5644577ff9-fncm4"
Mar 18 09:06:25.882979 master-0 kubenswrapper[28766]: I0318 09:06:25.882900 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/adbe8207-26d0-4d0e-aacc-5f321184b53c-service-ca\") pod \"console-5644577ff9-fncm4\" (UID: \"adbe8207-26d0-4d0e-aacc-5f321184b53c\") " pod="openshift-console/console-5644577ff9-fncm4"
Mar 18 09:06:25.883133 master-0 kubenswrapper[28766]: I0318 09:06:25.883092 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/adbe8207-26d0-4d0e-aacc-5f321184b53c-trusted-ca-bundle\") pod \"console-5644577ff9-fncm4\" (UID: \"adbe8207-26d0-4d0e-aacc-5f321184b53c\") " pod="openshift-console/console-5644577ff9-fncm4"
Mar 18 09:06:25.883764 master-0 kubenswrapper[28766]: I0318 09:06:25.883252 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/adbe8207-26d0-4d0e-aacc-5f321184b53c-console-config\") pod \"console-5644577ff9-fncm4\" (UID: \"adbe8207-26d0-4d0e-aacc-5f321184b53c\") " pod="openshift-console/console-5644577ff9-fncm4"
Mar 18 09:06:25.884843 master-0 kubenswrapper[28766]: I0318 09:06:25.884782 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/adbe8207-26d0-4d0e-aacc-5f321184b53c-oauth-serving-cert\") pod \"console-5644577ff9-fncm4\" (UID: \"adbe8207-26d0-4d0e-aacc-5f321184b53c\") " pod="openshift-console/console-5644577ff9-fncm4"
Mar 18 09:06:25.885474 master-0 kubenswrapper[28766]: I0318 09:06:25.885438 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/adbe8207-26d0-4d0e-aacc-5f321184b53c-console-oauth-config\") pod \"console-5644577ff9-fncm4\" (UID: \"adbe8207-26d0-4d0e-aacc-5f321184b53c\") " pod="openshift-console/console-5644577ff9-fncm4"
Mar 18 09:06:25.885822 master-0 kubenswrapper[28766]: I0318 09:06:25.885788 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/adbe8207-26d0-4d0e-aacc-5f321184b53c-console-serving-cert\") pod \"console-5644577ff9-fncm4\" (UID: \"adbe8207-26d0-4d0e-aacc-5f321184b53c\") " pod="openshift-console/console-5644577ff9-fncm4"
Mar 18 09:06:25.904587 master-0 kubenswrapper[28766]: I0318 09:06:25.904525 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5tnwj\" (UniqueName: \"kubernetes.io/projected/adbe8207-26d0-4d0e-aacc-5f321184b53c-kube-api-access-5tnwj\") pod \"console-5644577ff9-fncm4\" (UID: \"adbe8207-26d0-4d0e-aacc-5f321184b53c\") " pod="openshift-console/console-5644577ff9-fncm4"
Mar 18 09:06:25.989553 master-0 kubenswrapper[28766]: I0318 09:06:25.989488 28766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-5644577ff9-fncm4"
Mar 18 09:06:26.438964 master-0 kubenswrapper[28766]: I0318 09:06:26.438904 28766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-5644577ff9-fncm4"]
Mar 18 09:06:26.445828 master-0 kubenswrapper[28766]: W0318 09:06:26.445771 28766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podadbe8207_26d0_4d0e_aacc_5f321184b53c.slice/crio-407299c71bfa0c2dff8fce0278ae24c5100c1a00f719164511f4e8e190eaf411 WatchSource:0}: Error finding container 407299c71bfa0c2dff8fce0278ae24c5100c1a00f719164511f4e8e190eaf411: Status 404 returned error can't find the container with id 407299c71bfa0c2dff8fce0278ae24c5100c1a00f719164511f4e8e190eaf411
Mar 18 09:06:27.020109 master-0 kubenswrapper[28766]: I0318 09:06:27.020026 28766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5644577ff9-fncm4" event={"ID":"adbe8207-26d0-4d0e-aacc-5f321184b53c","Type":"ContainerStarted","Data":"2bbf620e4665e793bc12f34d68a29d950e95e05fc4cd94607222bfe45d55886f"}
Mar 18 09:06:27.020109 master-0 kubenswrapper[28766]: I0318 09:06:27.020116 28766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5644577ff9-fncm4" event={"ID":"adbe8207-26d0-4d0e-aacc-5f321184b53c","Type":"ContainerStarted","Data":"407299c71bfa0c2dff8fce0278ae24c5100c1a00f719164511f4e8e190eaf411"}
Mar 18 09:06:27.047273 master-0 kubenswrapper[28766]: I0318 09:06:27.047107 28766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-5644577ff9-fncm4" podStartSLOduration=2.047072494 podStartE2EDuration="2.047072494s" podCreationTimestamp="2026-03-18 09:06:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 09:06:27.046593392 +0000 UTC m=+140.060852098" watchObservedRunningTime="2026-03-18 09:06:27.047072494 +0000 UTC m=+140.061331200"
Mar 18 09:06:29.708888 master-0 kubenswrapper[28766]: I0318 09:06:29.708760 28766 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"]
Mar 18 09:06:29.710039 master-0 kubenswrapper[28766]: I0318 09:06:29.709988 28766 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-master-0"]
Mar 18 09:06:29.710382 master-0 kubenswrapper[28766]: I0318 09:06:29.710298 28766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="b45ea2ef1cf2bc9d1d994d6538ae0a64" containerName="kube-apiserver" containerID="cri-o://d359529c6d104b531cb0409c7a4d2398d18ab9d523652299f34b9fc19dff3188" gracePeriod=15
Mar 18 09:06:29.710609 master-0 kubenswrapper[28766]: I0318 09:06:29.710373 28766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 18 09:06:29.710609 master-0 kubenswrapper[28766]: I0318 09:06:29.710410 28766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="b45ea2ef1cf2bc9d1d994d6538ae0a64" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://e4396183e575749b6e65190aef719e2f4e761a5fd9efc71cdeac5b52873a9d9c" gracePeriod=15
Mar 18 09:06:29.710744 master-0 kubenswrapper[28766]: I0318 09:06:29.710472 28766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="b45ea2ef1cf2bc9d1d994d6538ae0a64" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://ba61c781d931a93859100045372a5a8e13a1a32f14d2e8186f666949b5bdcb89" gracePeriod=15
Mar 18 09:06:29.710744 master-0 kubenswrapper[28766]: I0318 09:06:29.710489 28766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="b45ea2ef1cf2bc9d1d994d6538ae0a64" containerName="kube-apiserver-cert-syncer" containerID="cri-o://df07e7ada686f3dcb49b6fa7f799e0d29c819ae08385e2d34ea6c92c3640e4b0" gracePeriod=15
Mar 18 09:06:29.710744 master-0 kubenswrapper[28766]: I0318 09:06:29.710524 28766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="b45ea2ef1cf2bc9d1d994d6538ae0a64" containerName="kube-apiserver-check-endpoints" containerID="cri-o://fd04c0ae7c08b8198597e5502af97eb5a8cb5c68baa45502becc03ff771f706b" gracePeriod=15
Mar 18 09:06:29.715765 master-0 kubenswrapper[28766]: I0318 09:06:29.714990 28766 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-master-0"]
Mar 18 09:06:29.715765 master-0 kubenswrapper[28766]: E0318 09:06:29.715425 28766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b45ea2ef1cf2bc9d1d994d6538ae0a64" containerName="kube-apiserver-cert-regeneration-controller"
Mar 18 09:06:29.715765 master-0 kubenswrapper[28766]: I0318 09:06:29.715447 28766 state_mem.go:107] "Deleted CPUSet assignment" podUID="b45ea2ef1cf2bc9d1d994d6538ae0a64" containerName="kube-apiserver-cert-regeneration-controller"
Mar 18 09:06:29.715765 master-0 kubenswrapper[28766]: E0318 09:06:29.715473 28766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b45ea2ef1cf2bc9d1d994d6538ae0a64" containerName="kube-apiserver-check-endpoints"
Mar 18 09:06:29.715765 master-0 kubenswrapper[28766]: I0318 09:06:29.715486 28766 state_mem.go:107] "Deleted CPUSet assignment" podUID="b45ea2ef1cf2bc9d1d994d6538ae0a64" containerName="kube-apiserver-check-endpoints"
Mar 18 09:06:29.715765 master-0 kubenswrapper[28766]: E0318 09:06:29.715523 28766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b45ea2ef1cf2bc9d1d994d6538ae0a64" containerName="kube-apiserver"
Mar 18 09:06:29.715765 master-0 kubenswrapper[28766]: I0318 09:06:29.715541 28766 state_mem.go:107] "Deleted CPUSet assignment" podUID="b45ea2ef1cf2bc9d1d994d6538ae0a64" containerName="kube-apiserver"
Mar 18 09:06:29.715765 master-0 kubenswrapper[28766]: E0318 09:06:29.715579 28766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b45ea2ef1cf2bc9d1d994d6538ae0a64" containerName="setup"
Mar 18 09:06:29.715765 master-0 kubenswrapper[28766]: I0318 09:06:29.715591 28766 state_mem.go:107] "Deleted CPUSet assignment" podUID="b45ea2ef1cf2bc9d1d994d6538ae0a64" containerName="setup"
Mar 18 09:06:29.715765 master-0 kubenswrapper[28766]: E0318 09:06:29.715623 28766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b45ea2ef1cf2bc9d1d994d6538ae0a64" containerName="kube-apiserver-insecure-readyz"
Mar 18 09:06:29.715765 master-0 kubenswrapper[28766]: I0318 09:06:29.715635 28766 state_mem.go:107] "Deleted CPUSet assignment" podUID="b45ea2ef1cf2bc9d1d994d6538ae0a64" containerName="kube-apiserver-insecure-readyz"
Mar 18 09:06:29.715765 master-0 kubenswrapper[28766]: E0318 09:06:29.715657 28766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b45ea2ef1cf2bc9d1d994d6538ae0a64" containerName="kube-apiserver-cert-syncer"
Mar 18 09:06:29.715765 master-0 kubenswrapper[28766]: I0318 09:06:29.715670 28766 state_mem.go:107] "Deleted CPUSet assignment" podUID="b45ea2ef1cf2bc9d1d994d6538ae0a64" containerName="kube-apiserver-cert-syncer"
Mar 18 09:06:29.717044 master-0 kubenswrapper[28766]: I0318 09:06:29.716012 28766 memory_manager.go:354] "RemoveStaleState removing state" podUID="b45ea2ef1cf2bc9d1d994d6538ae0a64" containerName="kube-apiserver-cert-regeneration-controller"
Mar 18 09:06:29.717044 master-0 kubenswrapper[28766]: I0318 09:06:29.716104 28766 memory_manager.go:354] "RemoveStaleState removing state" podUID="b45ea2ef1cf2bc9d1d994d6538ae0a64" containerName="kube-apiserver-check-endpoints"
Mar 18 09:06:29.717044 master-0 kubenswrapper[28766]: I0318 09:06:29.716172 28766 memory_manager.go:354] "RemoveStaleState removing state" podUID="b45ea2ef1cf2bc9d1d994d6538ae0a64" containerName="kube-apiserver-cert-syncer"
Mar 18 09:06:29.717044 master-0 kubenswrapper[28766]: I0318 09:06:29.716189 28766 memory_manager.go:354] "RemoveStaleState removing state" podUID="b45ea2ef1cf2bc9d1d994d6538ae0a64" containerName="kube-apiserver"
Mar 18 09:06:29.717044 master-0 kubenswrapper[28766]: I0318 09:06:29.716204 28766 memory_manager.go:354] "RemoveStaleState removing state" podUID="b45ea2ef1cf2bc9d1d994d6538ae0a64" containerName="kube-apiserver-insecure-readyz"
Mar 18 09:06:29.717044 master-0 kubenswrapper[28766]: E0318 09:06:29.716412 28766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b45ea2ef1cf2bc9d1d994d6538ae0a64" containerName="kube-apiserver-check-endpoints"
Mar 18 09:06:29.717044 master-0 kubenswrapper[28766]: I0318 09:06:29.716428 28766 state_mem.go:107] "Deleted
CPUSet assignment" podUID="b45ea2ef1cf2bc9d1d994d6538ae0a64" containerName="kube-apiserver-check-endpoints" Mar 18 09:06:29.717044 master-0 kubenswrapper[28766]: I0318 09:06:29.716742 28766 memory_manager.go:354] "RemoveStaleState removing state" podUID="b45ea2ef1cf2bc9d1d994d6538ae0a64" containerName="kube-apiserver-check-endpoints" Mar 18 09:06:29.832558 master-0 kubenswrapper[28766]: E0318 09:06:29.827936 28766 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 192.168.32.10:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 09:06:29.856394 master-0 kubenswrapper[28766]: I0318 09:06:29.856356 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/85632c1cec8974aa874834e4cfff4c77-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"85632c1cec8974aa874834e4cfff4c77\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 09:06:29.856484 master-0 kubenswrapper[28766]: I0318 09:06:29.856410 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/85632c1cec8974aa874834e4cfff4c77-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"85632c1cec8974aa874834e4cfff4c77\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 09:06:29.856484 master-0 kubenswrapper[28766]: I0318 09:06:29.856446 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/d5f502b117c7c8479f7f20848a50fec0-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"d5f502b117c7c8479f7f20848a50fec0\") " 
pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 18 09:06:29.856484 master-0 kubenswrapper[28766]: I0318 09:06:29.856465 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d5f502b117c7c8479f7f20848a50fec0-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"d5f502b117c7c8479f7f20848a50fec0\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 18 09:06:29.856618 master-0 kubenswrapper[28766]: I0318 09:06:29.856499 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/85632c1cec8974aa874834e4cfff4c77-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"85632c1cec8974aa874834e4cfff4c77\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 09:06:29.856618 master-0 kubenswrapper[28766]: I0318 09:06:29.856550 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/85632c1cec8974aa874834e4cfff4c77-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"85632c1cec8974aa874834e4cfff4c77\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 09:06:29.856618 master-0 kubenswrapper[28766]: I0318 09:06:29.856572 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/d5f502b117c7c8479f7f20848a50fec0-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"d5f502b117c7c8479f7f20848a50fec0\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 18 09:06:29.856806 master-0 kubenswrapper[28766]: I0318 09:06:29.856769 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: 
\"kubernetes.io/host-path/85632c1cec8974aa874834e4cfff4c77-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"85632c1cec8974aa874834e4cfff4c77\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 09:06:29.958949 master-0 kubenswrapper[28766]: I0318 09:06:29.958771 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/85632c1cec8974aa874834e4cfff4c77-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"85632c1cec8974aa874834e4cfff4c77\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 09:06:29.958949 master-0 kubenswrapper[28766]: I0318 09:06:29.958846 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/d5f502b117c7c8479f7f20848a50fec0-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"d5f502b117c7c8479f7f20848a50fec0\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 18 09:06:29.958949 master-0 kubenswrapper[28766]: I0318 09:06:29.958940 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/85632c1cec8974aa874834e4cfff4c77-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"85632c1cec8974aa874834e4cfff4c77\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 09:06:29.959170 master-0 kubenswrapper[28766]: I0318 09:06:29.958950 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/85632c1cec8974aa874834e4cfff4c77-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"85632c1cec8974aa874834e4cfff4c77\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 09:06:29.959170 master-0 kubenswrapper[28766]: I0318 09:06:29.959076 28766 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/d5f502b117c7c8479f7f20848a50fec0-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"d5f502b117c7c8479f7f20848a50fec0\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 18 09:06:29.959170 master-0 kubenswrapper[28766]: I0318 09:06:29.959117 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/85632c1cec8974aa874834e4cfff4c77-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"85632c1cec8974aa874834e4cfff4c77\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 09:06:29.959369 master-0 kubenswrapper[28766]: I0318 09:06:29.959181 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/85632c1cec8974aa874834e4cfff4c77-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"85632c1cec8974aa874834e4cfff4c77\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 09:06:29.959369 master-0 kubenswrapper[28766]: I0318 09:06:29.959209 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/85632c1cec8974aa874834e4cfff4c77-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"85632c1cec8974aa874834e4cfff4c77\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 09:06:29.959369 master-0 kubenswrapper[28766]: I0318 09:06:29.959235 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/85632c1cec8974aa874834e4cfff4c77-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"85632c1cec8974aa874834e4cfff4c77\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 09:06:29.959369 master-0 
kubenswrapper[28766]: I0318 09:06:29.959290 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/d5f502b117c7c8479f7f20848a50fec0-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"d5f502b117c7c8479f7f20848a50fec0\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 18 09:06:29.959369 master-0 kubenswrapper[28766]: I0318 09:06:29.959295 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/85632c1cec8974aa874834e4cfff4c77-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"85632c1cec8974aa874834e4cfff4c77\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 09:06:29.959369 master-0 kubenswrapper[28766]: I0318 09:06:29.959316 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d5f502b117c7c8479f7f20848a50fec0-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"d5f502b117c7c8479f7f20848a50fec0\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 18 09:06:29.959369 master-0 kubenswrapper[28766]: I0318 09:06:29.959351 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/d5f502b117c7c8479f7f20848a50fec0-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"d5f502b117c7c8479f7f20848a50fec0\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 18 09:06:29.959599 master-0 kubenswrapper[28766]: I0318 09:06:29.959335 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d5f502b117c7c8479f7f20848a50fec0-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"d5f502b117c7c8479f7f20848a50fec0\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 18 09:06:29.959599 master-0 
kubenswrapper[28766]: I0318 09:06:29.959505 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/85632c1cec8974aa874834e4cfff4c77-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"85632c1cec8974aa874834e4cfff4c77\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 09:06:29.959770 master-0 kubenswrapper[28766]: I0318 09:06:29.959624 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/85632c1cec8974aa874834e4cfff4c77-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"85632c1cec8974aa874834e4cfff4c77\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 09:06:30.049226 master-0 kubenswrapper[28766]: I0318 09:06:30.049150 28766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_b45ea2ef1cf2bc9d1d994d6538ae0a64/kube-apiserver-check-endpoints/0.log" Mar 18 09:06:30.050706 master-0 kubenswrapper[28766]: I0318 09:06:30.050669 28766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_b45ea2ef1cf2bc9d1d994d6538ae0a64/kube-apiserver-cert-syncer/0.log" Mar 18 09:06:30.051645 master-0 kubenswrapper[28766]: I0318 09:06:30.051588 28766 generic.go:334] "Generic (PLEG): container finished" podID="b45ea2ef1cf2bc9d1d994d6538ae0a64" containerID="fd04c0ae7c08b8198597e5502af97eb5a8cb5c68baa45502becc03ff771f706b" exitCode=0 Mar 18 09:06:30.051645 master-0 kubenswrapper[28766]: I0318 09:06:30.051617 28766 generic.go:334] "Generic (PLEG): container finished" podID="b45ea2ef1cf2bc9d1d994d6538ae0a64" containerID="e4396183e575749b6e65190aef719e2f4e761a5fd9efc71cdeac5b52873a9d9c" exitCode=0 Mar 18 09:06:30.051645 master-0 kubenswrapper[28766]: I0318 09:06:30.051633 28766 generic.go:334] "Generic (PLEG): container finished" 
podID="b45ea2ef1cf2bc9d1d994d6538ae0a64" containerID="ba61c781d931a93859100045372a5a8e13a1a32f14d2e8186f666949b5bdcb89" exitCode=0 Mar 18 09:06:30.051645 master-0 kubenswrapper[28766]: I0318 09:06:30.051643 28766 generic.go:334] "Generic (PLEG): container finished" podID="b45ea2ef1cf2bc9d1d994d6538ae0a64" containerID="df07e7ada686f3dcb49b6fa7f799e0d29c819ae08385e2d34ea6c92c3640e4b0" exitCode=2 Mar 18 09:06:30.052023 master-0 kubenswrapper[28766]: I0318 09:06:30.051669 28766 scope.go:117] "RemoveContainer" containerID="9e39226f66d3647b6d3e60dfa41a65af602b2c0ac717809011f105e2b66ccbc2" Mar 18 09:06:30.054940 master-0 kubenswrapper[28766]: I0318 09:06:30.054886 28766 generic.go:334] "Generic (PLEG): container finished" podID="b5d596ea-c73d-4619-b3a5-fd52d3bebedd" containerID="524a8cb4f79e426f6f698c7428a6ba7258d080a1b3b794a6c76b004e1c1dad11" exitCode=0 Mar 18 09:06:30.054940 master-0 kubenswrapper[28766]: I0318 09:06:30.054935 28766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-5-master-0" event={"ID":"b5d596ea-c73d-4619-b3a5-fd52d3bebedd","Type":"ContainerDied","Data":"524a8cb4f79e426f6f698c7428a6ba7258d080a1b3b794a6c76b004e1c1dad11"} Mar 18 09:06:30.057009 master-0 kubenswrapper[28766]: I0318 09:06:30.056823 28766 status_manager.go:851] "Failed to get status for pod" podUID="b45ea2ef1cf2bc9d1d994d6538ae0a64" pod="openshift-kube-apiserver/kube-apiserver-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 18 09:06:30.057993 master-0 kubenswrapper[28766]: I0318 09:06:30.057911 28766 status_manager.go:851] "Failed to get status for pod" podUID="b5d596ea-c73d-4619-b3a5-fd52d3bebedd" pod="openshift-kube-apiserver/installer-5-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-5-master-0\": dial tcp 192.168.32.10:6443: 
connect: connection refused" Mar 18 09:06:30.129358 master-0 kubenswrapper[28766]: I0318 09:06:30.129279 28766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 09:06:30.165130 master-0 kubenswrapper[28766]: W0318 09:06:30.165045 28766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod85632c1cec8974aa874834e4cfff4c77.slice/crio-20c4463ddd86d19acd8f34cf4290e670fa5e387fc8b4dd2d838386987f9fbcf4 WatchSource:0}: Error finding container 20c4463ddd86d19acd8f34cf4290e670fa5e387fc8b4dd2d838386987f9fbcf4: Status 404 returned error can't find the container with id 20c4463ddd86d19acd8f34cf4290e670fa5e387fc8b4dd2d838386987f9fbcf4 Mar 18 09:06:30.172571 master-0 kubenswrapper[28766]: E0318 09:06:30.171534 28766 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 192.168.32.10:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-master-0.189de43f9fd25210 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-master-0,UID:85632c1cec8974aa874834e4cfff4c77,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c5ce3d1134d6500e2b8528516c1889d7bbc6259aba4981c6983395b0e9eeff65\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 09:06:30.169276944 +0000 UTC m=+143.183535660,LastTimestamp:2026-03-18 09:06:30.169276944 +0000 UTC m=+143.183535660,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 09:06:30.801086 master-0 kubenswrapper[28766]: I0318 09:06:30.800952 28766 patch_prober.go:28] interesting pod/kube-apiserver-master-0 container/kube-apiserver namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.32.10:6443/readyz\": dial tcp 192.168.32.10:6443: connect: connection refused" start-of-body= Mar 18 09:06:30.801086 master-0 kubenswrapper[28766]: I0318 09:06:30.801028 28766 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="b45ea2ef1cf2bc9d1d994d6538ae0a64" containerName="kube-apiserver" probeResult="failure" output="Get \"https://192.168.32.10:6443/readyz\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 18 09:06:31.070299 master-0 kubenswrapper[28766]: I0318 09:06:31.070054 28766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" event={"ID":"85632c1cec8974aa874834e4cfff4c77","Type":"ContainerStarted","Data":"1b8e5bc18d4a185d0983957e7565a83ea0b52d5da432104795a45ec28523f2cb"} Mar 18 09:06:31.070299 master-0 kubenswrapper[28766]: I0318 09:06:31.070149 28766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" event={"ID":"85632c1cec8974aa874834e4cfff4c77","Type":"ContainerStarted","Data":"20c4463ddd86d19acd8f34cf4290e670fa5e387fc8b4dd2d838386987f9fbcf4"} Mar 18 09:06:31.071818 master-0 kubenswrapper[28766]: E0318 09:06:31.071753 28766 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 192.168.32.10:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 09:06:31.072076 master-0 kubenswrapper[28766]: I0318 09:06:31.071819 28766 
status_manager.go:851] "Failed to get status for pod" podUID="b45ea2ef1cf2bc9d1d994d6538ae0a64" pod="openshift-kube-apiserver/kube-apiserver-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 18 09:06:31.073639 master-0 kubenswrapper[28766]: I0318 09:06:31.073545 28766 status_manager.go:851] "Failed to get status for pod" podUID="b5d596ea-c73d-4619-b3a5-fd52d3bebedd" pod="openshift-kube-apiserver/installer-5-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-5-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 18 09:06:31.075848 master-0 kubenswrapper[28766]: I0318 09:06:31.075813 28766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_b45ea2ef1cf2bc9d1d994d6538ae0a64/kube-apiserver-cert-syncer/0.log" Mar 18 09:06:31.550633 master-0 kubenswrapper[28766]: I0318 09:06:31.550512 28766 patch_prober.go:28] interesting pod/console-bd9677648-tq84g container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.96:8443/health\": dial tcp 10.128.0.96:8443: connect: connection refused" start-of-body= Mar 18 09:06:31.551034 master-0 kubenswrapper[28766]: I0318 09:06:31.550635 28766 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-bd9677648-tq84g" podUID="e3d66c24-e87e-489f-8474-277b2add6768" containerName="console" probeResult="failure" output="Get \"https://10.128.0.96:8443/health\": dial tcp 10.128.0.96:8443: connect: connection refused" Mar 18 09:06:31.595545 master-0 kubenswrapper[28766]: I0318 09:06:31.595459 28766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-5-master-0" Mar 18 09:06:31.597147 master-0 kubenswrapper[28766]: I0318 09:06:31.597023 28766 status_manager.go:851] "Failed to get status for pod" podUID="b5d596ea-c73d-4619-b3a5-fd52d3bebedd" pod="openshift-kube-apiserver/installer-5-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-5-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 18 09:06:31.691799 master-0 kubenswrapper[28766]: I0318 09:06:31.691540 28766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/b5d596ea-c73d-4619-b3a5-fd52d3bebedd-var-lock\") pod \"b5d596ea-c73d-4619-b3a5-fd52d3bebedd\" (UID: \"b5d596ea-c73d-4619-b3a5-fd52d3bebedd\") " Mar 18 09:06:31.691799 master-0 kubenswrapper[28766]: I0318 09:06:31.691647 28766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b5d596ea-c73d-4619-b3a5-fd52d3bebedd-kubelet-dir\") pod \"b5d596ea-c73d-4619-b3a5-fd52d3bebedd\" (UID: \"b5d596ea-c73d-4619-b3a5-fd52d3bebedd\") " Mar 18 09:06:31.691799 master-0 kubenswrapper[28766]: I0318 09:06:31.691697 28766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b5d596ea-c73d-4619-b3a5-fd52d3bebedd-kube-api-access\") pod \"b5d596ea-c73d-4619-b3a5-fd52d3bebedd\" (UID: \"b5d596ea-c73d-4619-b3a5-fd52d3bebedd\") " Mar 18 09:06:31.691799 master-0 kubenswrapper[28766]: I0318 09:06:31.691718 28766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b5d596ea-c73d-4619-b3a5-fd52d3bebedd-var-lock" (OuterVolumeSpecName: "var-lock") pod "b5d596ea-c73d-4619-b3a5-fd52d3bebedd" (UID: "b5d596ea-c73d-4619-b3a5-fd52d3bebedd"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 09:06:31.692355 master-0 kubenswrapper[28766]: I0318 09:06:31.691941 28766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b5d596ea-c73d-4619-b3a5-fd52d3bebedd-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "b5d596ea-c73d-4619-b3a5-fd52d3bebedd" (UID: "b5d596ea-c73d-4619-b3a5-fd52d3bebedd"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 09:06:31.692584 master-0 kubenswrapper[28766]: I0318 09:06:31.692534 28766 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/b5d596ea-c73d-4619-b3a5-fd52d3bebedd-var-lock\") on node \"master-0\" DevicePath \"\"" Mar 18 09:06:31.692584 master-0 kubenswrapper[28766]: I0318 09:06:31.692560 28766 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b5d596ea-c73d-4619-b3a5-fd52d3bebedd-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Mar 18 09:06:31.696629 master-0 kubenswrapper[28766]: I0318 09:06:31.696550 28766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b5d596ea-c73d-4619-b3a5-fd52d3bebedd-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "b5d596ea-c73d-4619-b3a5-fd52d3bebedd" (UID: "b5d596ea-c73d-4619-b3a5-fd52d3bebedd"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 09:06:31.795180 master-0 kubenswrapper[28766]: I0318 09:06:31.795049 28766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b5d596ea-c73d-4619-b3a5-fd52d3bebedd-kube-api-access\") on node \"master-0\" DevicePath \"\"" Mar 18 09:06:31.949982 master-0 kubenswrapper[28766]: E0318 09:06:31.947087 28766 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 192.168.32.10:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-master-0.189de43f9fd25210 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-master-0,UID:85632c1cec8974aa874834e4cfff4c77,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c5ce3d1134d6500e2b8528516c1889d7bbc6259aba4981c6983395b0e9eeff65\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 09:06:30.169276944 +0000 UTC m=+143.183535660,LastTimestamp:2026-03-18 09:06:30.169276944 +0000 UTC m=+143.183535660,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 09:06:32.085323 master-0 kubenswrapper[28766]: I0318 09:06:32.085256 28766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-5-master-0" event={"ID":"b5d596ea-c73d-4619-b3a5-fd52d3bebedd","Type":"ContainerDied","Data":"e73afc93fb9b0bddb39adc1514581fbef6f1b62a1f557d618c36e67b1eb65a42"} Mar 18 09:06:32.085323 master-0 kubenswrapper[28766]: I0318 09:06:32.085316 28766 
pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e73afc93fb9b0bddb39adc1514581fbef6f1b62a1f557d618c36e67b1eb65a42" Mar 18 09:06:32.085629 master-0 kubenswrapper[28766]: I0318 09:06:32.085360 28766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-5-master-0" Mar 18 09:06:32.144032 master-0 kubenswrapper[28766]: I0318 09:06:32.143938 28766 status_manager.go:851] "Failed to get status for pod" podUID="b5d596ea-c73d-4619-b3a5-fd52d3bebedd" pod="openshift-kube-apiserver/installer-5-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-5-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 18 09:06:32.236159 master-0 kubenswrapper[28766]: I0318 09:06:32.236129 28766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_b45ea2ef1cf2bc9d1d994d6538ae0a64/kube-apiserver-cert-syncer/0.log" Mar 18 09:06:32.237190 master-0 kubenswrapper[28766]: I0318 09:06:32.237170 28766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 18 09:06:32.238295 master-0 kubenswrapper[28766]: I0318 09:06:32.238238 28766 status_manager.go:851] "Failed to get status for pod" podUID="b45ea2ef1cf2bc9d1d994d6538ae0a64" pod="openshift-kube-apiserver/kube-apiserver-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 18 09:06:32.238690 master-0 kubenswrapper[28766]: I0318 09:06:32.238655 28766 status_manager.go:851] "Failed to get status for pod" podUID="b5d596ea-c73d-4619-b3a5-fd52d3bebedd" pod="openshift-kube-apiserver/installer-5-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-5-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 18 09:06:32.304711 master-0 kubenswrapper[28766]: I0318 09:06:32.304569 28766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/b45ea2ef1cf2bc9d1d994d6538ae0a64-cert-dir\") pod \"b45ea2ef1cf2bc9d1d994d6538ae0a64\" (UID: \"b45ea2ef1cf2bc9d1d994d6538ae0a64\") " Mar 18 09:06:32.305120 master-0 kubenswrapper[28766]: I0318 09:06:32.304978 28766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b45ea2ef1cf2bc9d1d994d6538ae0a64-audit-dir\") pod \"b45ea2ef1cf2bc9d1d994d6538ae0a64\" (UID: \"b45ea2ef1cf2bc9d1d994d6538ae0a64\") " Mar 18 09:06:32.305120 master-0 kubenswrapper[28766]: I0318 09:06:32.305027 28766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/b45ea2ef1cf2bc9d1d994d6538ae0a64-resource-dir\") pod \"b45ea2ef1cf2bc9d1d994d6538ae0a64\" (UID: \"b45ea2ef1cf2bc9d1d994d6538ae0a64\") " Mar 18 09:06:32.305363 master-0 
kubenswrapper[28766]: I0318 09:06:32.305197 28766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b45ea2ef1cf2bc9d1d994d6538ae0a64-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "b45ea2ef1cf2bc9d1d994d6538ae0a64" (UID: "b45ea2ef1cf2bc9d1d994d6538ae0a64"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 18 09:06:32.305519 master-0 kubenswrapper[28766]: I0318 09:06:32.305466 28766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b45ea2ef1cf2bc9d1d994d6538ae0a64-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "b45ea2ef1cf2bc9d1d994d6538ae0a64" (UID: "b45ea2ef1cf2bc9d1d994d6538ae0a64"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 18 09:06:32.305737 master-0 kubenswrapper[28766]: I0318 09:06:32.305664 28766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b45ea2ef1cf2bc9d1d994d6538ae0a64-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "b45ea2ef1cf2bc9d1d994d6538ae0a64" (UID: "b45ea2ef1cf2bc9d1d994d6538ae0a64"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 18 09:06:32.306626 master-0 kubenswrapper[28766]: I0318 09:06:32.306594 28766 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b45ea2ef1cf2bc9d1d994d6538ae0a64-audit-dir\") on node \"master-0\" DevicePath \"\""
Mar 18 09:06:32.306811 master-0 kubenswrapper[28766]: I0318 09:06:32.306783 28766 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/b45ea2ef1cf2bc9d1d994d6538ae0a64-resource-dir\") on node \"master-0\" DevicePath \"\""
Mar 18 09:06:32.306986 master-0 kubenswrapper[28766]: I0318 09:06:32.306964 28766 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/b45ea2ef1cf2bc9d1d994d6538ae0a64-cert-dir\") on node \"master-0\" DevicePath \"\""
Mar 18 09:06:33.103015 master-0 kubenswrapper[28766]: I0318 09:06:33.102935 28766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_b45ea2ef1cf2bc9d1d994d6538ae0a64/kube-apiserver-cert-syncer/0.log"
Mar 18 09:06:33.104683 master-0 kubenswrapper[28766]: I0318 09:06:33.104524 28766 generic.go:334] "Generic (PLEG): container finished" podID="b45ea2ef1cf2bc9d1d994d6538ae0a64" containerID="d359529c6d104b531cb0409c7a4d2398d18ab9d523652299f34b9fc19dff3188" exitCode=0
Mar 18 09:06:33.104683 master-0 kubenswrapper[28766]: I0318 09:06:33.104658 28766 scope.go:117] "RemoveContainer" containerID="fd04c0ae7c08b8198597e5502af97eb5a8cb5c68baa45502becc03ff771f706b"
Mar 18 09:06:33.104822 master-0 kubenswrapper[28766]: I0318 09:06:33.104762 28766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 18 09:06:33.126921 master-0 kubenswrapper[28766]: I0318 09:06:33.126841 28766 status_manager.go:851] "Failed to get status for pod" podUID="b45ea2ef1cf2bc9d1d994d6538ae0a64" pod="openshift-kube-apiserver/kube-apiserver-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 18 09:06:33.128126 master-0 kubenswrapper[28766]: I0318 09:06:33.128065 28766 status_manager.go:851] "Failed to get status for pod" podUID="b5d596ea-c73d-4619-b3a5-fd52d3bebedd" pod="openshift-kube-apiserver/installer-5-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-5-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 18 09:06:33.130452 master-0 kubenswrapper[28766]: I0318 09:06:33.130404 28766 scope.go:117] "RemoveContainer" containerID="e4396183e575749b6e65190aef719e2f4e761a5fd9efc71cdeac5b52873a9d9c"
Mar 18 09:06:33.152624 master-0 kubenswrapper[28766]: I0318 09:06:33.152179 28766 scope.go:117] "RemoveContainer" containerID="ba61c781d931a93859100045372a5a8e13a1a32f14d2e8186f666949b5bdcb89"
Mar 18 09:06:33.178241 master-0 kubenswrapper[28766]: I0318 09:06:33.178011 28766 scope.go:117] "RemoveContainer" containerID="df07e7ada686f3dcb49b6fa7f799e0d29c819ae08385e2d34ea6c92c3640e4b0"
Mar 18 09:06:33.200619 master-0 kubenswrapper[28766]: I0318 09:06:33.200568 28766 scope.go:117] "RemoveContainer" containerID="d359529c6d104b531cb0409c7a4d2398d18ab9d523652299f34b9fc19dff3188"
Mar 18 09:06:33.226835 master-0 kubenswrapper[28766]: I0318 09:06:33.226795 28766 scope.go:117] "RemoveContainer" containerID="8f346ba585e275f6daeb7ee0b1f9dbc8a6626d795dda146132cd1c080ea2a285"
Mar 18 09:06:33.245630 master-0 kubenswrapper[28766]: I0318 09:06:33.245547 28766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b45ea2ef1cf2bc9d1d994d6538ae0a64" path="/var/lib/kubelet/pods/b45ea2ef1cf2bc9d1d994d6538ae0a64/volumes"
Mar 18 09:06:33.254825 master-0 kubenswrapper[28766]: I0318 09:06:33.254789 28766 scope.go:117] "RemoveContainer" containerID="fd04c0ae7c08b8198597e5502af97eb5a8cb5c68baa45502becc03ff771f706b"
Mar 18 09:06:33.255633 master-0 kubenswrapper[28766]: E0318 09:06:33.255589 28766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fd04c0ae7c08b8198597e5502af97eb5a8cb5c68baa45502becc03ff771f706b\": container with ID starting with fd04c0ae7c08b8198597e5502af97eb5a8cb5c68baa45502becc03ff771f706b not found: ID does not exist" containerID="fd04c0ae7c08b8198597e5502af97eb5a8cb5c68baa45502becc03ff771f706b"
Mar 18 09:06:33.255674 master-0 kubenswrapper[28766]: I0318 09:06:33.255643 28766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fd04c0ae7c08b8198597e5502af97eb5a8cb5c68baa45502becc03ff771f706b"} err="failed to get container status \"fd04c0ae7c08b8198597e5502af97eb5a8cb5c68baa45502becc03ff771f706b\": rpc error: code = NotFound desc = could not find container \"fd04c0ae7c08b8198597e5502af97eb5a8cb5c68baa45502becc03ff771f706b\": container with ID starting with fd04c0ae7c08b8198597e5502af97eb5a8cb5c68baa45502becc03ff771f706b not found: ID does not exist"
Mar 18 09:06:33.255720 master-0 kubenswrapper[28766]: I0318 09:06:33.255677 28766 scope.go:117] "RemoveContainer" containerID="e4396183e575749b6e65190aef719e2f4e761a5fd9efc71cdeac5b52873a9d9c"
Mar 18 09:06:33.256435 master-0 kubenswrapper[28766]: E0318 09:06:33.256385 28766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e4396183e575749b6e65190aef719e2f4e761a5fd9efc71cdeac5b52873a9d9c\": container with ID starting with e4396183e575749b6e65190aef719e2f4e761a5fd9efc71cdeac5b52873a9d9c not found: ID does not exist" containerID="e4396183e575749b6e65190aef719e2f4e761a5fd9efc71cdeac5b52873a9d9c"
Mar 18 09:06:33.256561 master-0 kubenswrapper[28766]: I0318 09:06:33.256429 28766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e4396183e575749b6e65190aef719e2f4e761a5fd9efc71cdeac5b52873a9d9c"} err="failed to get container status \"e4396183e575749b6e65190aef719e2f4e761a5fd9efc71cdeac5b52873a9d9c\": rpc error: code = NotFound desc = could not find container \"e4396183e575749b6e65190aef719e2f4e761a5fd9efc71cdeac5b52873a9d9c\": container with ID starting with e4396183e575749b6e65190aef719e2f4e761a5fd9efc71cdeac5b52873a9d9c not found: ID does not exist"
Mar 18 09:06:33.256561 master-0 kubenswrapper[28766]: I0318 09:06:33.256461 28766 scope.go:117] "RemoveContainer" containerID="ba61c781d931a93859100045372a5a8e13a1a32f14d2e8186f666949b5bdcb89"
Mar 18 09:06:33.256873 master-0 kubenswrapper[28766]: E0318 09:06:33.256820 28766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ba61c781d931a93859100045372a5a8e13a1a32f14d2e8186f666949b5bdcb89\": container with ID starting with ba61c781d931a93859100045372a5a8e13a1a32f14d2e8186f666949b5bdcb89 not found: ID does not exist" containerID="ba61c781d931a93859100045372a5a8e13a1a32f14d2e8186f666949b5bdcb89"
Mar 18 09:06:33.257149 master-0 kubenswrapper[28766]: I0318 09:06:33.257111 28766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ba61c781d931a93859100045372a5a8e13a1a32f14d2e8186f666949b5bdcb89"} err="failed to get container status \"ba61c781d931a93859100045372a5a8e13a1a32f14d2e8186f666949b5bdcb89\": rpc error: code = NotFound desc = could not find container \"ba61c781d931a93859100045372a5a8e13a1a32f14d2e8186f666949b5bdcb89\": container with ID starting with ba61c781d931a93859100045372a5a8e13a1a32f14d2e8186f666949b5bdcb89 not found: ID does not exist"
Mar 18 09:06:33.257191 master-0 kubenswrapper[28766]: I0318 09:06:33.257146 28766 scope.go:117] "RemoveContainer" containerID="df07e7ada686f3dcb49b6fa7f799e0d29c819ae08385e2d34ea6c92c3640e4b0"
Mar 18 09:06:33.257696 master-0 kubenswrapper[28766]: E0318 09:06:33.257642 28766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"df07e7ada686f3dcb49b6fa7f799e0d29c819ae08385e2d34ea6c92c3640e4b0\": container with ID starting with df07e7ada686f3dcb49b6fa7f799e0d29c819ae08385e2d34ea6c92c3640e4b0 not found: ID does not exist" containerID="df07e7ada686f3dcb49b6fa7f799e0d29c819ae08385e2d34ea6c92c3640e4b0"
Mar 18 09:06:33.257765 master-0 kubenswrapper[28766]: I0318 09:06:33.257722 28766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"df07e7ada686f3dcb49b6fa7f799e0d29c819ae08385e2d34ea6c92c3640e4b0"} err="failed to get container status \"df07e7ada686f3dcb49b6fa7f799e0d29c819ae08385e2d34ea6c92c3640e4b0\": rpc error: code = NotFound desc = could not find container \"df07e7ada686f3dcb49b6fa7f799e0d29c819ae08385e2d34ea6c92c3640e4b0\": container with ID starting with df07e7ada686f3dcb49b6fa7f799e0d29c819ae08385e2d34ea6c92c3640e4b0 not found: ID does not exist"
Mar 18 09:06:33.257807 master-0 kubenswrapper[28766]: I0318 09:06:33.257790 28766 scope.go:117] "RemoveContainer" containerID="d359529c6d104b531cb0409c7a4d2398d18ab9d523652299f34b9fc19dff3188"
Mar 18 09:06:33.258414 master-0 kubenswrapper[28766]: E0318 09:06:33.258381 28766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d359529c6d104b531cb0409c7a4d2398d18ab9d523652299f34b9fc19dff3188\": container with ID starting with d359529c6d104b531cb0409c7a4d2398d18ab9d523652299f34b9fc19dff3188 not found: ID does not exist" containerID="d359529c6d104b531cb0409c7a4d2398d18ab9d523652299f34b9fc19dff3188"
Mar 18 09:06:33.258492 master-0 kubenswrapper[28766]: I0318 09:06:33.258417 28766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d359529c6d104b531cb0409c7a4d2398d18ab9d523652299f34b9fc19dff3188"} err="failed to get container status \"d359529c6d104b531cb0409c7a4d2398d18ab9d523652299f34b9fc19dff3188\": rpc error: code = NotFound desc = could not find container \"d359529c6d104b531cb0409c7a4d2398d18ab9d523652299f34b9fc19dff3188\": container with ID starting with d359529c6d104b531cb0409c7a4d2398d18ab9d523652299f34b9fc19dff3188 not found: ID does not exist"
Mar 18 09:06:33.258492 master-0 kubenswrapper[28766]: I0318 09:06:33.258449 28766 scope.go:117] "RemoveContainer" containerID="8f346ba585e275f6daeb7ee0b1f9dbc8a6626d795dda146132cd1c080ea2a285"
Mar 18 09:06:33.259032 master-0 kubenswrapper[28766]: E0318 09:06:33.258992 28766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8f346ba585e275f6daeb7ee0b1f9dbc8a6626d795dda146132cd1c080ea2a285\": container with ID starting with 8f346ba585e275f6daeb7ee0b1f9dbc8a6626d795dda146132cd1c080ea2a285 not found: ID does not exist" containerID="8f346ba585e275f6daeb7ee0b1f9dbc8a6626d795dda146132cd1c080ea2a285"
Mar 18 09:06:33.259097 master-0 kubenswrapper[28766]: I0318 09:06:33.259026 28766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8f346ba585e275f6daeb7ee0b1f9dbc8a6626d795dda146132cd1c080ea2a285"} err="failed to get container status \"8f346ba585e275f6daeb7ee0b1f9dbc8a6626d795dda146132cd1c080ea2a285\": rpc error: code = NotFound desc = could not find container \"8f346ba585e275f6daeb7ee0b1f9dbc8a6626d795dda146132cd1c080ea2a285\": container with ID starting with 8f346ba585e275f6daeb7ee0b1f9dbc8a6626d795dda146132cd1c080ea2a285 not found: ID does not exist"
Mar 18 09:06:35.990594 master-0 kubenswrapper[28766]: I0318 09:06:35.990382 28766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-5644577ff9-fncm4"
Mar 18 09:06:35.990594 master-0 kubenswrapper[28766]: I0318 09:06:35.990514 28766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-5644577ff9-fncm4"
Mar 18 09:06:35.993487 master-0 kubenswrapper[28766]: I0318 09:06:35.993423 28766 patch_prober.go:28] interesting pod/console-5644577ff9-fncm4 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.100:8443/health\": dial tcp 10.128.0.100:8443: connect: connection refused" start-of-body=
Mar 18 09:06:35.993611 master-0 kubenswrapper[28766]: I0318 09:06:35.993521 28766 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-5644577ff9-fncm4" podUID="adbe8207-26d0-4d0e-aacc-5f321184b53c" containerName="console" probeResult="failure" output="Get \"https://10.128.0.100:8443/health\": dial tcp 10.128.0.100:8443: connect: connection refused"
Mar 18 09:06:37.242318 master-0 kubenswrapper[28766]: I0318 09:06:37.242236 28766 status_manager.go:851] "Failed to get status for pod" podUID="b5d596ea-c73d-4619-b3a5-fd52d3bebedd" pod="openshift-kube-apiserver/installer-5-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-5-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 18 09:06:39.896885 master-0 kubenswrapper[28766]: E0318 09:06:39.896722 28766 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 18 09:06:39.897968 master-0 kubenswrapper[28766]: E0318 09:06:39.897914 28766 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 18 09:06:39.899721 master-0 kubenswrapper[28766]: E0318 09:06:39.899632 28766 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 18 09:06:39.900574 master-0 kubenswrapper[28766]: E0318 09:06:39.900522 28766 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 18 09:06:39.901116 master-0 kubenswrapper[28766]: E0318 09:06:39.901086 28766 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 18 09:06:39.901190 master-0 kubenswrapper[28766]: I0318 09:06:39.901124 28766 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease"
Mar 18 09:06:39.903054 master-0 kubenswrapper[28766]: E0318 09:06:39.902978 28766 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="200ms"
Mar 18 09:06:40.104599 master-0 kubenswrapper[28766]: E0318 09:06:40.104514 28766 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="400ms"
Mar 18 09:06:40.505998 master-0 kubenswrapper[28766]: E0318 09:06:40.505931 28766 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="800ms"
Mar 18 09:06:41.308578 master-0 kubenswrapper[28766]: E0318 09:06:41.308525 28766 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="1.6s"
Mar 18 09:06:41.551295 master-0 kubenswrapper[28766]: I0318 09:06:41.551230 28766 patch_prober.go:28] interesting pod/console-bd9677648-tq84g container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.96:8443/health\": dial tcp 10.128.0.96:8443: connect: connection refused" start-of-body=
Mar 18 09:06:41.551585 master-0 kubenswrapper[28766]: I0318 09:06:41.551303 28766 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-bd9677648-tq84g" podUID="e3d66c24-e87e-489f-8474-277b2add6768" containerName="console" probeResult="failure" output="Get \"https://10.128.0.96:8443/health\": dial tcp 10.128.0.96:8443: connect: connection refused"
Mar 18 09:06:41.949636 master-0 kubenswrapper[28766]: E0318 09:06:41.949395 28766 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 192.168.32.10:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-master-0.189de43f9fd25210 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-master-0,UID:85632c1cec8974aa874834e4cfff4c77,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c5ce3d1134d6500e2b8528516c1889d7bbc6259aba4981c6983395b0e9eeff65\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 09:06:30.169276944 +0000 UTC m=+143.183535660,LastTimestamp:2026-03-18 09:06:30.169276944 +0000 UTC m=+143.183535660,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 18 09:06:42.909888 master-0 kubenswrapper[28766]: E0318 09:06:42.909796 28766 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="3.2s"
Mar 18 09:06:43.210242 master-0 kubenswrapper[28766]: I0318 09:06:43.210080 28766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_221b44bcdfcd6cb77b8e2c3e2f0f2d4d/kube-controller-manager/0.log"
Mar 18 09:06:43.210242 master-0 kubenswrapper[28766]: I0318 09:06:43.210174 28766 generic.go:334] "Generic (PLEG): container finished" podID="221b44bcdfcd6cb77b8e2c3e2f0f2d4d" containerID="d97db5d026fe92a56cfbca9758234c41757acb8dc1abb2bb2c234db07d8dfdc2" exitCode=1
Mar 18 09:06:43.210729 master-0 kubenswrapper[28766]: I0318 09:06:43.210241 28766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"221b44bcdfcd6cb77b8e2c3e2f0f2d4d","Type":"ContainerDied","Data":"d97db5d026fe92a56cfbca9758234c41757acb8dc1abb2bb2c234db07d8dfdc2"}
Mar 18 09:06:43.211136 master-0 kubenswrapper[28766]: I0318 09:06:43.211092 28766 scope.go:117] "RemoveContainer" containerID="d97db5d026fe92a56cfbca9758234c41757acb8dc1abb2bb2c234db07d8dfdc2"
Mar 18 09:06:43.213377 master-0 kubenswrapper[28766]: I0318 09:06:43.212443 28766 status_manager.go:851] "Failed to get status for pod" podUID="221b44bcdfcd6cb77b8e2c3e2f0f2d4d" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 18 09:06:43.213915 master-0 kubenswrapper[28766]: I0318 09:06:43.213482 28766 status_manager.go:851] "Failed to get status for pod" podUID="b5d596ea-c73d-4619-b3a5-fd52d3bebedd" pod="openshift-kube-apiserver/installer-5-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-5-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 18 09:06:44.226750 master-0 kubenswrapper[28766]: I0318 09:06:44.226687 28766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_221b44bcdfcd6cb77b8e2c3e2f0f2d4d/kube-controller-manager/0.log"
Mar 18 09:06:44.227669 master-0 kubenswrapper[28766]: I0318 09:06:44.226788 28766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"221b44bcdfcd6cb77b8e2c3e2f0f2d4d","Type":"ContainerStarted","Data":"dbff8077fc42ade1e74dbf92d755831f93316118c2e6792cfe86eff898126971"}
Mar 18 09:06:44.228636 master-0 kubenswrapper[28766]: I0318 09:06:44.228562 28766 status_manager.go:851] "Failed to get status for pod" podUID="221b44bcdfcd6cb77b8e2c3e2f0f2d4d" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 18 09:06:44.229563 master-0 kubenswrapper[28766]: I0318 09:06:44.229506 28766 status_manager.go:851] "Failed to get status for pod" podUID="b5d596ea-c73d-4619-b3a5-fd52d3bebedd" pod="openshift-kube-apiserver/installer-5-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-5-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 18 09:06:44.232992 master-0 kubenswrapper[28766]: I0318 09:06:44.232925 28766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 18 09:06:44.235215 master-0 kubenswrapper[28766]: I0318 09:06:44.235144 28766 status_manager.go:851] "Failed to get status for pod" podUID="b5d596ea-c73d-4619-b3a5-fd52d3bebedd" pod="openshift-kube-apiserver/installer-5-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-5-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 18 09:06:44.236159 master-0 kubenswrapper[28766]: I0318 09:06:44.236073 28766 status_manager.go:851] "Failed to get status for pod" podUID="221b44bcdfcd6cb77b8e2c3e2f0f2d4d" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 18 09:06:44.255984 master-0 kubenswrapper[28766]: I0318 09:06:44.255886 28766 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="06cc0345-c4e3-479a-b13a-9ab6e35ad397"
Mar 18 09:06:44.255984 master-0 kubenswrapper[28766]: I0318 09:06:44.255948 28766 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="06cc0345-c4e3-479a-b13a-9ab6e35ad397"
Mar 18 09:06:44.257247 master-0 kubenswrapper[28766]: E0318 09:06:44.257178 28766 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 18 09:06:44.257960 master-0 kubenswrapper[28766]: I0318 09:06:44.257922 28766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 18 09:06:44.296077 master-0 kubenswrapper[28766]: W0318 09:06:44.296012 28766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd5f502b117c7c8479f7f20848a50fec0.slice/crio-d449ffdd9731596369dce0c49337769aac30402061b537790edf24864bd416f8 WatchSource:0}: Error finding container d449ffdd9731596369dce0c49337769aac30402061b537790edf24864bd416f8: Status 404 returned error can't find the container with id d449ffdd9731596369dce0c49337769aac30402061b537790edf24864bd416f8
Mar 18 09:06:45.244062 master-0 kubenswrapper[28766]: I0318 09:06:45.241812 28766 generic.go:334] "Generic (PLEG): container finished" podID="d5f502b117c7c8479f7f20848a50fec0" containerID="0b57440279a7e1c5042a6058f0ed38a7533bfa0213b3c7920b5bb1f467c68d5e" exitCode=0
Mar 18 09:06:45.258946 master-0 kubenswrapper[28766]: I0318 09:06:45.258823 28766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"d5f502b117c7c8479f7f20848a50fec0","Type":"ContainerDied","Data":"0b57440279a7e1c5042a6058f0ed38a7533bfa0213b3c7920b5bb1f467c68d5e"}
Mar 18 09:06:45.258946 master-0 kubenswrapper[28766]: I0318 09:06:45.258941 28766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"d5f502b117c7c8479f7f20848a50fec0","Type":"ContainerStarted","Data":"d449ffdd9731596369dce0c49337769aac30402061b537790edf24864bd416f8"}
Mar 18 09:06:45.259524 master-0 kubenswrapper[28766]: I0318 09:06:45.259369 28766 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="06cc0345-c4e3-479a-b13a-9ab6e35ad397"
Mar 18 09:06:45.259524 master-0 kubenswrapper[28766]: I0318 09:06:45.259395 28766 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="06cc0345-c4e3-479a-b13a-9ab6e35ad397"
Mar 18 09:06:45.260551 master-0 kubenswrapper[28766]: I0318 09:06:45.260446 28766 status_manager.go:851] "Failed to get status for pod" podUID="221b44bcdfcd6cb77b8e2c3e2f0f2d4d" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 18 09:06:45.260677 master-0 kubenswrapper[28766]: E0318 09:06:45.260612 28766 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 18 09:06:45.261340 master-0 kubenswrapper[28766]: I0318 09:06:45.261255 28766 status_manager.go:851] "Failed to get status for pod" podUID="b5d596ea-c73d-4619-b3a5-fd52d3bebedd" pod="openshift-kube-apiserver/installer-5-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-5-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 18 09:06:45.991162 master-0 kubenswrapper[28766]: I0318 09:06:45.991083 28766 patch_prober.go:28] interesting pod/console-5644577ff9-fncm4 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.100:8443/health\": dial tcp 10.128.0.100:8443: connect: connection refused" start-of-body=
Mar 18 09:06:45.991443 master-0 kubenswrapper[28766]: I0318 09:06:45.991166 28766 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-5644577ff9-fncm4" podUID="adbe8207-26d0-4d0e-aacc-5f321184b53c" containerName="console" probeResult="failure" output="Get \"https://10.128.0.100:8443/health\": dial tcp 10.128.0.100:8443: connect: connection refused"
Mar 18 09:06:46.259784 master-0 kubenswrapper[28766]: I0318 09:06:46.259734 28766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"d5f502b117c7c8479f7f20848a50fec0","Type":"ContainerStarted","Data":"7b82e48810ffa90897f37769dc1c5fc5e8ad8bd717280c007032fa2614a2fbdb"}
Mar 18 09:06:46.259784 master-0 kubenswrapper[28766]: I0318 09:06:46.259781 28766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"d5f502b117c7c8479f7f20848a50fec0","Type":"ContainerStarted","Data":"e2d04c45f7c772c016d47623eb5b523e3bf821adb12d8c42d7ed2946630fb273"}
Mar 18 09:06:46.259784 master-0 kubenswrapper[28766]: I0318 09:06:46.259791 28766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"d5f502b117c7c8479f7f20848a50fec0","Type":"ContainerStarted","Data":"1a6d63398fc7023963a4c17f05ca635d23763fd0a9f62c7f844e2d43e3eb019a"}
Mar 18 09:06:47.280880 master-0 kubenswrapper[28766]: I0318 09:06:47.280788 28766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"d5f502b117c7c8479f7f20848a50fec0","Type":"ContainerStarted","Data":"32114ab0d4cc26bac543e2aa8fec751304d606d7374333a6d69c0d39fdc29567"}
Mar 18 09:06:47.280880 master-0 kubenswrapper[28766]: I0318 09:06:47.280867 28766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"d5f502b117c7c8479f7f20848a50fec0","Type":"ContainerStarted","Data":"b7136c6906e66cc30eeb437f136adbff8bfaf7f7203ec244da263538cd56aaff"}
Mar 18 09:06:47.281504 master-0 kubenswrapper[28766]: I0318 09:06:47.281178 28766 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="06cc0345-c4e3-479a-b13a-9ab6e35ad397"
Mar 18 09:06:47.281504 master-0 kubenswrapper[28766]: I0318 09:06:47.281199 28766 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="06cc0345-c4e3-479a-b13a-9ab6e35ad397"
Mar 18 09:06:47.281504 master-0 kubenswrapper[28766]: I0318 09:06:47.281463 28766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 18 09:06:48.213771 master-0 kubenswrapper[28766]: I0318 09:06:48.213716 28766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 18 09:06:49.259127 master-0 kubenswrapper[28766]: I0318 09:06:49.258992 28766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 18 09:06:49.259127 master-0 kubenswrapper[28766]: I0318 09:06:49.259080 28766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 18 09:06:49.270200 master-0 kubenswrapper[28766]: I0318 09:06:49.270113 28766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 18 09:06:50.625924 master-0 kubenswrapper[28766]: I0318 09:06:50.625809 28766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-5d57b58fd4-tcq7b" podUID="9c577244-74c7-4a1c-8fec-0a89bd7e3ed1" containerName="console" containerID="cri-o://36c8b71ee86e5c48866948fa02958a43f463f07dcd83e76ff5bb64a1b30db24d" gracePeriod=15
Mar 18 09:06:51.274232 master-0 kubenswrapper[28766]: I0318 09:06:51.274187 28766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-5d57b58fd4-tcq7b_9c577244-74c7-4a1c-8fec-0a89bd7e3ed1/console/0.log"
Mar 18 09:06:51.274453 master-0 kubenswrapper[28766]: I0318 09:06:51.274271 28766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-5d57b58fd4-tcq7b"
Mar 18 09:06:51.316978 master-0 kubenswrapper[28766]: I0318 09:06:51.316921 28766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-5d57b58fd4-tcq7b_9c577244-74c7-4a1c-8fec-0a89bd7e3ed1/console/0.log"
Mar 18 09:06:51.316978 master-0 kubenswrapper[28766]: I0318 09:06:51.316973 28766 generic.go:334] "Generic (PLEG): container finished" podID="9c577244-74c7-4a1c-8fec-0a89bd7e3ed1" containerID="36c8b71ee86e5c48866948fa02958a43f463f07dcd83e76ff5bb64a1b30db24d" exitCode=2
Mar 18 09:06:51.317237 master-0 kubenswrapper[28766]: I0318 09:06:51.317004 28766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5d57b58fd4-tcq7b" event={"ID":"9c577244-74c7-4a1c-8fec-0a89bd7e3ed1","Type":"ContainerDied","Data":"36c8b71ee86e5c48866948fa02958a43f463f07dcd83e76ff5bb64a1b30db24d"}
Mar 18 09:06:51.317237 master-0 kubenswrapper[28766]: I0318 09:06:51.317036 28766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5d57b58fd4-tcq7b" event={"ID":"9c577244-74c7-4a1c-8fec-0a89bd7e3ed1","Type":"ContainerDied","Data":"af17e3beda13aae51d45aacc7a3397c8b0222a2b4a9d65440dc65d7ee9351292"}
Mar 18 09:06:51.317237 master-0 kubenswrapper[28766]: I0318 09:06:51.317056 28766 scope.go:117] "RemoveContainer" containerID="36c8b71ee86e5c48866948fa02958a43f463f07dcd83e76ff5bb64a1b30db24d"
Mar 18 09:06:51.317237 master-0 kubenswrapper[28766]: I0318 09:06:51.317082 28766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-5d57b58fd4-tcq7b"
Mar 18 09:06:51.343143 master-0 kubenswrapper[28766]: I0318 09:06:51.342951 28766 scope.go:117] "RemoveContainer" containerID="36c8b71ee86e5c48866948fa02958a43f463f07dcd83e76ff5bb64a1b30db24d"
Mar 18 09:06:51.343946 master-0 kubenswrapper[28766]: E0318 09:06:51.343889 28766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"36c8b71ee86e5c48866948fa02958a43f463f07dcd83e76ff5bb64a1b30db24d\": container with ID starting with 36c8b71ee86e5c48866948fa02958a43f463f07dcd83e76ff5bb64a1b30db24d not found: ID does not exist" containerID="36c8b71ee86e5c48866948fa02958a43f463f07dcd83e76ff5bb64a1b30db24d"
Mar 18 09:06:51.344023 master-0 kubenswrapper[28766]: I0318 09:06:51.343958 28766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"36c8b71ee86e5c48866948fa02958a43f463f07dcd83e76ff5bb64a1b30db24d"} err="failed to get container status \"36c8b71ee86e5c48866948fa02958a43f463f07dcd83e76ff5bb64a1b30db24d\": rpc error: code = NotFound desc = could not find container \"36c8b71ee86e5c48866948fa02958a43f463f07dcd83e76ff5bb64a1b30db24d\": container with ID starting with 36c8b71ee86e5c48866948fa02958a43f463f07dcd83e76ff5bb64a1b30db24d not found: ID does not exist"
Mar 18 09:06:51.357722 master-0 kubenswrapper[28766]: I0318 09:06:51.357665 28766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/9c577244-74c7-4a1c-8fec-0a89bd7e3ed1-console-serving-cert\") pod \"9c577244-74c7-4a1c-8fec-0a89bd7e3ed1\" (UID: \"9c577244-74c7-4a1c-8fec-0a89bd7e3ed1\") "
Mar 18 09:06:51.357993 master-0 kubenswrapper[28766]: I0318 09:06:51.357825 28766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tgbn5\" (UniqueName: \"kubernetes.io/projected/9c577244-74c7-4a1c-8fec-0a89bd7e3ed1-kube-api-access-tgbn5\") pod \"9c577244-74c7-4a1c-8fec-0a89bd7e3ed1\" (UID: \"9c577244-74c7-4a1c-8fec-0a89bd7e3ed1\") "
Mar 18 09:06:51.357993 master-0 kubenswrapper[28766]: I0318 09:06:51.357930 28766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/9c577244-74c7-4a1c-8fec-0a89bd7e3ed1-service-ca\") pod \"9c577244-74c7-4a1c-8fec-0a89bd7e3ed1\" (UID: \"9c577244-74c7-4a1c-8fec-0a89bd7e3ed1\") "
Mar 18 09:06:51.357993 master-0 kubenswrapper[28766]: I0318 09:06:51.357982 28766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/9c577244-74c7-4a1c-8fec-0a89bd7e3ed1-console-config\") pod \"9c577244-74c7-4a1c-8fec-0a89bd7e3ed1\" (UID: \"9c577244-74c7-4a1c-8fec-0a89bd7e3ed1\") "
Mar 18 09:06:51.358144 master-0 kubenswrapper[28766]: I0318 09:06:51.358084 28766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/9c577244-74c7-4a1c-8fec-0a89bd7e3ed1-oauth-serving-cert\") pod \"9c577244-74c7-4a1c-8fec-0a89bd7e3ed1\" (UID: \"9c577244-74c7-4a1c-8fec-0a89bd7e3ed1\") "
Mar 18 09:06:51.358144 master-0 kubenswrapper[28766]: I0318 09:06:51.358130 28766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/9c577244-74c7-4a1c-8fec-0a89bd7e3ed1-console-oauth-config\") pod \"9c577244-74c7-4a1c-8fec-0a89bd7e3ed1\" (UID: \"9c577244-74c7-4a1c-8fec-0a89bd7e3ed1\") "
Mar 18 09:06:51.359606 master-0 kubenswrapper[28766]: I0318 09:06:51.359566 28766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9c577244-74c7-4a1c-8fec-0a89bd7e3ed1-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "9c577244-74c7-4a1c-8fec-0a89bd7e3ed1" (UID: "9c577244-74c7-4a1c-8fec-0a89bd7e3ed1"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 18 09:06:51.359895 master-0 kubenswrapper[28766]: I0318 09:06:51.359875 28766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9c577244-74c7-4a1c-8fec-0a89bd7e3ed1-console-config" (OuterVolumeSpecName: "console-config") pod "9c577244-74c7-4a1c-8fec-0a89bd7e3ed1" (UID: "9c577244-74c7-4a1c-8fec-0a89bd7e3ed1"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 18 09:06:51.360317 master-0 kubenswrapper[28766]: I0318 09:06:51.360266 28766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9c577244-74c7-4a1c-8fec-0a89bd7e3ed1-service-ca" (OuterVolumeSpecName: "service-ca") pod "9c577244-74c7-4a1c-8fec-0a89bd7e3ed1" (UID: "9c577244-74c7-4a1c-8fec-0a89bd7e3ed1"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 18 09:06:51.364920 master-0 kubenswrapper[28766]: I0318 09:06:51.362499 28766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c577244-74c7-4a1c-8fec-0a89bd7e3ed1-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "9c577244-74c7-4a1c-8fec-0a89bd7e3ed1" (UID: "9c577244-74c7-4a1c-8fec-0a89bd7e3ed1"). InnerVolumeSpecName "console-serving-cert".
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 09:06:51.364920 master-0 kubenswrapper[28766]: I0318 09:06:51.362616 28766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9c577244-74c7-4a1c-8fec-0a89bd7e3ed1-kube-api-access-tgbn5" (OuterVolumeSpecName: "kube-api-access-tgbn5") pod "9c577244-74c7-4a1c-8fec-0a89bd7e3ed1" (UID: "9c577244-74c7-4a1c-8fec-0a89bd7e3ed1"). InnerVolumeSpecName "kube-api-access-tgbn5". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 09:06:51.364920 master-0 kubenswrapper[28766]: I0318 09:06:51.363187 28766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c577244-74c7-4a1c-8fec-0a89bd7e3ed1-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "9c577244-74c7-4a1c-8fec-0a89bd7e3ed1" (UID: "9c577244-74c7-4a1c-8fec-0a89bd7e3ed1"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 09:06:51.476172 master-0 kubenswrapper[28766]: I0318 09:06:51.475975 28766 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/9c577244-74c7-4a1c-8fec-0a89bd7e3ed1-service-ca\") on node \"master-0\" DevicePath \"\"" Mar 18 09:06:51.476172 master-0 kubenswrapper[28766]: I0318 09:06:51.476044 28766 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/9c577244-74c7-4a1c-8fec-0a89bd7e3ed1-console-config\") on node \"master-0\" DevicePath \"\"" Mar 18 09:06:51.476172 master-0 kubenswrapper[28766]: I0318 09:06:51.476067 28766 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/9c577244-74c7-4a1c-8fec-0a89bd7e3ed1-oauth-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 18 09:06:51.476172 master-0 kubenswrapper[28766]: I0318 09:06:51.476092 28766 reconciler_common.go:293] "Volume detached for volume 
\"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/9c577244-74c7-4a1c-8fec-0a89bd7e3ed1-console-oauth-config\") on node \"master-0\" DevicePath \"\"" Mar 18 09:06:51.476172 master-0 kubenswrapper[28766]: I0318 09:06:51.476111 28766 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/9c577244-74c7-4a1c-8fec-0a89bd7e3ed1-console-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 18 09:06:51.476172 master-0 kubenswrapper[28766]: I0318 09:06:51.476129 28766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tgbn5\" (UniqueName: \"kubernetes.io/projected/9c577244-74c7-4a1c-8fec-0a89bd7e3ed1-kube-api-access-tgbn5\") on node \"master-0\" DevicePath \"\"" Mar 18 09:06:51.550802 master-0 kubenswrapper[28766]: I0318 09:06:51.550693 28766 patch_prober.go:28] interesting pod/console-bd9677648-tq84g container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.96:8443/health\": dial tcp 10.128.0.96:8443: connect: connection refused" start-of-body= Mar 18 09:06:51.550802 master-0 kubenswrapper[28766]: I0318 09:06:51.550770 28766 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-bd9677648-tq84g" podUID="e3d66c24-e87e-489f-8474-277b2add6768" containerName="console" probeResult="failure" output="Get \"https://10.128.0.96:8443/health\": dial tcp 10.128.0.96:8443: connect: connection refused" Mar 18 09:06:52.309639 master-0 kubenswrapper[28766]: I0318 09:06:52.309580 28766 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 18 09:06:52.541150 master-0 kubenswrapper[28766]: I0318 09:06:52.541099 28766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 09:06:52.548265 master-0 kubenswrapper[28766]: I0318 09:06:52.548204 28766 kubelet.go:2542] 
"SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 09:06:52.571283 master-0 kubenswrapper[28766]: I0318 09:06:52.571104 28766 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-master-0" oldPodUID="d5f502b117c7c8479f7f20848a50fec0" podUID="e4a5e566-08fd-4ec0-ad78-a31d21226d61" Mar 18 09:06:53.334216 master-0 kubenswrapper[28766]: I0318 09:06:53.334097 28766 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="06cc0345-c4e3-479a-b13a-9ab6e35ad397" Mar 18 09:06:53.334216 master-0 kubenswrapper[28766]: I0318 09:06:53.334161 28766 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="06cc0345-c4e3-479a-b13a-9ab6e35ad397" Mar 18 09:06:53.337776 master-0 kubenswrapper[28766]: I0318 09:06:53.337694 28766 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-master-0" oldPodUID="d5f502b117c7c8479f7f20848a50fec0" podUID="e4a5e566-08fd-4ec0-ad78-a31d21226d61" Mar 18 09:06:53.338153 master-0 kubenswrapper[28766]: I0318 09:06:53.338096 28766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 09:06:55.991436 master-0 kubenswrapper[28766]: I0318 09:06:55.991353 28766 patch_prober.go:28] interesting pod/console-5644577ff9-fncm4 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.100:8443/health\": dial tcp 10.128.0.100:8443: connect: connection refused" start-of-body= Mar 18 09:06:55.991436 master-0 kubenswrapper[28766]: I0318 09:06:55.991434 28766 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-5644577ff9-fncm4" podUID="adbe8207-26d0-4d0e-aacc-5f321184b53c" 
containerName="console" probeResult="failure" output="Get \"https://10.128.0.100:8443/health\": dial tcp 10.128.0.100:8443: connect: connection refused" Mar 18 09:07:01.551015 master-0 kubenswrapper[28766]: I0318 09:07:01.550954 28766 patch_prober.go:28] interesting pod/console-bd9677648-tq84g container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.96:8443/health\": dial tcp 10.128.0.96:8443: connect: connection refused" start-of-body= Mar 18 09:07:01.551649 master-0 kubenswrapper[28766]: I0318 09:07:01.551054 28766 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-bd9677648-tq84g" podUID="e3d66c24-e87e-489f-8474-277b2add6768" containerName="console" probeResult="failure" output="Get \"https://10.128.0.96:8443/health\": dial tcp 10.128.0.96:8443: connect: connection refused" Mar 18 09:07:01.874529 master-0 kubenswrapper[28766]: I0318 09:07:01.874297 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Mar 18 09:07:02.252069 master-0 kubenswrapper[28766]: I0318 09:07:02.251973 28766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"federate-client-certs" Mar 18 09:07:03.113169 master-0 kubenswrapper[28766]: I0318 09:07:03.113068 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"openshift-service-ca.crt" Mar 18 09:07:03.572550 master-0 kubenswrapper[28766]: I0318 09:07:03.572435 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Mar 18 09:07:03.594772 master-0 kubenswrapper[28766]: I0318 09:07:03.594712 28766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-operator-tls" Mar 18 09:07:03.665044 master-0 kubenswrapper[28766]: I0318 09:07:03.664971 28766 reflector.go:368] Caches populated for *v1.Secret 
from object-"openshift-monitoring"/"cluster-monitoring-operator-tls" Mar 18 09:07:03.702576 master-0 kubenswrapper[28766]: I0318 09:07:03.702516 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Mar 18 09:07:03.716943 master-0 kubenswrapper[28766]: I0318 09:07:03.716894 28766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Mar 18 09:07:03.750114 master-0 kubenswrapper[28766]: I0318 09:07:03.750022 28766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-autoscaler-operator-dockercfg-2zcks" Mar 18 09:07:03.753171 master-0 kubenswrapper[28766]: I0318 09:07:03.753129 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Mar 18 09:07:03.926987 master-0 kubenswrapper[28766]: I0318 09:07:03.926817 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Mar 18 09:07:04.013741 master-0 kubenswrapper[28766]: I0318 09:07:04.013684 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemeter-client-serving-certs-ca-bundle" Mar 18 09:07:04.164763 master-0 kubenswrapper[28766]: I0318 09:07:04.164693 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Mar 18 09:07:04.181610 master-0 kubenswrapper[28766]: I0318 09:07:04.181486 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Mar 18 09:07:04.558324 master-0 kubenswrapper[28766]: I0318 09:07:04.558259 28766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Mar 18 09:07:04.947571 master-0 kubenswrapper[28766]: I0318 
09:07:04.947432 28766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Mar 18 09:07:04.970308 master-0 kubenswrapper[28766]: I0318 09:07:04.970255 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Mar 18 09:07:05.030780 master-0 kubenswrapper[28766]: I0318 09:07:05.030161 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"service-ca-bundle" Mar 18 09:07:05.034293 master-0 kubenswrapper[28766]: I0318 09:07:05.034245 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Mar 18 09:07:05.093177 master-0 kubenswrapper[28766]: I0318 09:07:05.093118 28766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-kube-rbac-proxy-config" Mar 18 09:07:05.169699 master-0 kubenswrapper[28766]: I0318 09:07:05.169656 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-root-ca.crt" Mar 18 09:07:05.191941 master-0 kubenswrapper[28766]: I0318 09:07:05.191829 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Mar 18 09:07:05.304314 master-0 kubenswrapper[28766]: I0318 09:07:05.304241 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-storage-operator"/"kube-root-ca.crt" Mar 18 09:07:05.373122 master-0 kubenswrapper[28766]: I0318 09:07:05.373064 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Mar 18 09:07:05.388709 master-0 kubenswrapper[28766]: I0318 09:07:05.388649 28766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-kube-rbac-proxy-config" Mar 18 09:07:05.401641 master-0 
kubenswrapper[28766]: I0318 09:07:05.401598 28766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-9s29d" Mar 18 09:07:05.498921 master-0 kubenswrapper[28766]: I0318 09:07:05.498868 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Mar 18 09:07:05.574561 master-0 kubenswrapper[28766]: I0318 09:07:05.574411 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"kube-root-ca.crt" Mar 18 09:07:05.577022 master-0 kubenswrapper[28766]: I0318 09:07:05.576891 28766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Mar 18 09:07:05.692418 master-0 kubenswrapper[28766]: I0318 09:07:05.692368 28766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-dockercfg-vc9fv" Mar 18 09:07:05.837880 master-0 kubenswrapper[28766]: I0318 09:07:05.837728 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Mar 18 09:07:05.908645 master-0 kubenswrapper[28766]: I0318 09:07:05.908569 28766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-dockercfg-9s5l6" Mar 18 09:07:05.990457 master-0 kubenswrapper[28766]: I0318 09:07:05.990372 28766 patch_prober.go:28] interesting pod/console-5644577ff9-fncm4 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.100:8443/health\": dial tcp 10.128.0.100:8443: connect: connection refused" start-of-body= Mar 18 09:07:05.990707 master-0 kubenswrapper[28766]: I0318 09:07:05.990462 28766 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-5644577ff9-fncm4" podUID="adbe8207-26d0-4d0e-aacc-5f321184b53c" 
containerName="console" probeResult="failure" output="Get \"https://10.128.0.100:8443/health\": dial tcp 10.128.0.100:8443: connect: connection refused" Mar 18 09:07:06.028286 master-0 kubenswrapper[28766]: I0318 09:07:06.028200 28766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-dockercfg-gpcfv" Mar 18 09:07:06.181133 master-0 kubenswrapper[28766]: I0318 09:07:06.180422 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Mar 18 09:07:06.239257 master-0 kubenswrapper[28766]: I0318 09:07:06.239175 28766 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Mar 18 09:07:06.246092 master-0 kubenswrapper[28766]: I0318 09:07:06.245928 28766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-master-0","openshift-console/console-5d57b58fd4-tcq7b"] Mar 18 09:07:06.246092 master-0 kubenswrapper[28766]: I0318 09:07:06.246002 28766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-master-0"] Mar 18 09:07:06.254623 master-0 kubenswrapper[28766]: I0318 09:07:06.254402 28766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 18 09:07:06.254623 master-0 kubenswrapper[28766]: I0318 09:07:06.254495 28766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 18 09:07:06.317805 master-0 kubenswrapper[28766]: I0318 09:07:06.317321 28766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-master-0" podStartSLOduration=14.317289283 podStartE2EDuration="14.317289283s" podCreationTimestamp="2026-03-18 09:06:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-03-18 09:07:06.312240163 +0000 UTC m=+179.326498839" watchObservedRunningTime="2026-03-18 09:07:06.317289283 +0000 UTC m=+179.331547989" Mar 18 09:07:06.366764 master-0 kubenswrapper[28766]: I0318 09:07:06.366707 28766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Mar 18 09:07:06.371459 master-0 kubenswrapper[28766]: I0318 09:07:06.371395 28766 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Mar 18 09:07:06.380121 master-0 kubenswrapper[28766]: I0318 09:07:06.380073 28766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Mar 18 09:07:06.605538 master-0 kubenswrapper[28766]: I0318 09:07:06.605385 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Mar 18 09:07:06.708938 master-0 kubenswrapper[28766]: I0318 09:07:06.708828 28766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-tls" Mar 18 09:07:06.902745 master-0 kubenswrapper[28766]: I0318 09:07:06.902550 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Mar 18 09:07:06.940169 master-0 kubenswrapper[28766]: I0318 09:07:06.940062 28766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Mar 18 09:07:06.958793 master-0 kubenswrapper[28766]: I0318 09:07:06.958713 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Mar 18 09:07:07.028401 master-0 kubenswrapper[28766]: I0318 09:07:07.028289 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Mar 18 09:07:07.038696 master-0 kubenswrapper[28766]: I0318 09:07:07.038636 28766 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"openshift-service-ca.crt" Mar 18 09:07:07.219653 master-0 kubenswrapper[28766]: I0318 09:07:07.219450 28766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Mar 18 09:07:07.247889 master-0 kubenswrapper[28766]: I0318 09:07:07.247798 28766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9c577244-74c7-4a1c-8fec-0a89bd7e3ed1" path="/var/lib/kubelet/pods/9c577244-74c7-4a1c-8fec-0a89bd7e3ed1/volumes" Mar 18 09:07:07.258408 master-0 kubenswrapper[28766]: I0318 09:07:07.258350 28766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Mar 18 09:07:07.308169 master-0 kubenswrapper[28766]: I0318 09:07:07.308099 28766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-node-tuning-operator"/"node-tuning-operator-tls" Mar 18 09:07:07.315440 master-0 kubenswrapper[28766]: I0318 09:07:07.315373 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"baremetal-kube-rbac-proxy" Mar 18 09:07:07.330417 master-0 kubenswrapper[28766]: I0318 09:07:07.330356 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Mar 18 09:07:07.366305 master-0 kubenswrapper[28766]: I0318 09:07:07.366213 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Mar 18 09:07:07.456506 master-0 kubenswrapper[28766]: I0318 09:07:07.456420 28766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-node-tuning-operator"/"performance-addon-operator-webhook-cert" Mar 18 09:07:07.467439 master-0 kubenswrapper[28766]: I0318 09:07:07.467345 28766 scope.go:117] "RemoveContainer" 
containerID="20f67081f1a83df8fa8825fe68b2011f445e7f6dd6a012bd23cbd198b1272dee" Mar 18 09:07:07.517972 master-0 kubenswrapper[28766]: I0318 09:07:07.517887 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Mar 18 09:07:07.522906 master-0 kubenswrapper[28766]: I0318 09:07:07.522808 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Mar 18 09:07:07.565248 master-0 kubenswrapper[28766]: I0318 09:07:07.565150 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Mar 18 09:07:07.594365 master-0 kubenswrapper[28766]: I0318 09:07:07.594298 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-storage-operator"/"openshift-service-ca.crt" Mar 18 09:07:07.631164 master-0 kubenswrapper[28766]: I0318 09:07:07.631075 28766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Mar 18 09:07:07.646983 master-0 kubenswrapper[28766]: I0318 09:07:07.646921 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Mar 18 09:07:07.687910 master-0 kubenswrapper[28766]: I0318 09:07:07.687827 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-olm-operator"/"openshift-service-ca.crt" Mar 18 09:07:07.744665 master-0 kubenswrapper[28766]: I0318 09:07:07.744558 28766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Mar 18 09:07:07.756105 master-0 kubenswrapper[28766]: I0318 09:07:07.756024 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Mar 18 09:07:07.783829 master-0 kubenswrapper[28766]: I0318 09:07:07.783775 28766 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-dockercfg-kmxfz" Mar 18 09:07:07.803496 master-0 kubenswrapper[28766]: I0318 09:07:07.803429 28766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-tls" Mar 18 09:07:07.968394 master-0 kubenswrapper[28766]: I0318 09:07:07.968329 28766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Mar 18 09:07:08.036167 master-0 kubenswrapper[28766]: I0318 09:07:08.036005 28766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-dockercfg-2wdmv" Mar 18 09:07:08.039666 master-0 kubenswrapper[28766]: I0318 09:07:08.039628 28766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Mar 18 09:07:08.104298 master-0 kubenswrapper[28766]: I0318 09:07:08.104236 28766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Mar 18 09:07:08.143486 master-0 kubenswrapper[28766]: I0318 09:07:08.143390 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Mar 18 09:07:08.238064 master-0 kubenswrapper[28766]: I0318 09:07:08.237973 28766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-insights"/"operator-dockercfg-zr4v5" Mar 18 09:07:08.240803 master-0 kubenswrapper[28766]: I0318 09:07:08.240753 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Mar 18 09:07:08.264338 master-0 kubenswrapper[28766]: I0318 09:07:08.264274 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"kube-rbac-proxy" Mar 18 09:07:08.315811 master-0 kubenswrapper[28766]: I0318 09:07:08.315664 28766 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Mar 18 09:07:08.354427 master-0 kubenswrapper[28766]: I0318 09:07:08.354363 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Mar 18 09:07:08.462507 master-0 kubenswrapper[28766]: I0318 09:07:08.462435 28766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Mar 18 09:07:08.510756 master-0 kubenswrapper[28766]: I0318 09:07:08.510677 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Mar 18 09:07:08.515258 master-0 kubenswrapper[28766]: I0318 09:07:08.515201 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Mar 18 09:07:08.525748 master-0 kubenswrapper[28766]: I0318 09:07:08.525682 28766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-autoscaler-operator-cert" Mar 18 09:07:08.670714 master-0 kubenswrapper[28766]: I0318 09:07:08.670480 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Mar 18 09:07:08.690932 master-0 kubenswrapper[28766]: I0318 09:07:08.690827 28766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Mar 18 09:07:08.782608 master-0 kubenswrapper[28766]: I0318 09:07:08.782508 28766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Mar 18 09:07:08.858490 master-0 kubenswrapper[28766]: I0318 09:07:08.858416 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-server-audit-profiles" Mar 18 
09:07:08.891262 master-0 kubenswrapper[28766]: I0318 09:07:08.891188 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Mar 18 09:07:09.019274 master-0 kubenswrapper[28766]: I0318 09:07:09.019209 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Mar 18 09:07:09.040432 master-0 kubenswrapper[28766]: I0318 09:07:09.040362 28766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"monitoring-plugin-cert" Mar 18 09:07:09.081033 master-0 kubenswrapper[28766]: I0318 09:07:09.080936 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Mar 18 09:07:09.099110 master-0 kubenswrapper[28766]: I0318 09:07:09.099047 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"trusted-ca" Mar 18 09:07:09.124589 master-0 kubenswrapper[28766]: I0318 09:07:09.124533 28766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client-tls" Mar 18 09:07:09.156092 master-0 kubenswrapper[28766]: I0318 09:07:09.156017 28766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-jzd99" Mar 18 09:07:09.165785 master-0 kubenswrapper[28766]: I0318 09:07:09.165716 28766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"default-dockercfg-prkn7" Mar 18 09:07:09.250224 master-0 kubenswrapper[28766]: I0318 09:07:09.250185 28766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-admission-webhook-tls" Mar 18 09:07:09.265627 master-0 kubenswrapper[28766]: I0318 09:07:09.265579 28766 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-jdt5h" Mar 18 09:07:09.294148 master-0 kubenswrapper[28766]: I0318 09:07:09.294017 28766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Mar 18 09:07:09.365102 master-0 kubenswrapper[28766]: I0318 09:07:09.365049 28766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-as91djiheslg2" Mar 18 09:07:09.368405 master-0 kubenswrapper[28766]: I0318 09:07:09.368308 28766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-tls" Mar 18 09:07:09.369418 master-0 kubenswrapper[28766]: I0318 09:07:09.369378 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Mar 18 09:07:09.371256 master-0 kubenswrapper[28766]: I0318 09:07:09.371231 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-state-metrics-custom-resource-state-configmap" Mar 18 09:07:09.386616 master-0 kubenswrapper[28766]: I0318 09:07:09.386543 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Mar 18 09:07:09.401513 master-0 kubenswrapper[28766]: I0318 09:07:09.401441 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Mar 18 09:07:09.429933 master-0 kubenswrapper[28766]: I0318 09:07:09.429831 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Mar 18 09:07:09.433089 master-0 kubenswrapper[28766]: I0318 09:07:09.433067 28766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-operator-tls" Mar 18 09:07:09.440503 master-0 kubenswrapper[28766]: 
I0318 09:07:09.440458 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Mar 18 09:07:09.511878 master-0 kubenswrapper[28766]: I0318 09:07:09.510823 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Mar 18 09:07:09.575882 master-0 kubenswrapper[28766]: I0318 09:07:09.575743 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"catalogd-trusted-ca-bundle" Mar 18 09:07:09.600569 master-0 kubenswrapper[28766]: I0318 09:07:09.600513 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Mar 18 09:07:09.706083 master-0 kubenswrapper[28766]: I0318 09:07:09.706009 28766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-222ht" Mar 18 09:07:09.708174 master-0 kubenswrapper[28766]: I0318 09:07:09.708133 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Mar 18 09:07:09.733283 master-0 kubenswrapper[28766]: I0318 09:07:09.733196 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Mar 18 09:07:09.746278 master-0 kubenswrapper[28766]: I0318 09:07:09.746217 28766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Mar 18 09:07:09.791839 master-0 kubenswrapper[28766]: I0318 09:07:09.791769 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Mar 18 09:07:09.831640 master-0 kubenswrapper[28766]: I0318 09:07:09.831512 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Mar 18 09:07:09.847663 
master-0 kubenswrapper[28766]: I0318 09:07:09.847591 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Mar 18 09:07:09.862344 master-0 kubenswrapper[28766]: I0318 09:07:09.862252 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Mar 18 09:07:09.870758 master-0 kubenswrapper[28766]: I0318 09:07:09.870677 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Mar 18 09:07:09.885203 master-0 kubenswrapper[28766]: I0318 09:07:09.885127 28766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-k5mpr" Mar 18 09:07:09.955616 master-0 kubenswrapper[28766]: I0318 09:07:09.955544 28766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Mar 18 09:07:09.961321 master-0 kubenswrapper[28766]: I0318 09:07:09.961255 28766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-insights"/"openshift-insights-serving-cert" Mar 18 09:07:09.993670 master-0 kubenswrapper[28766]: I0318 09:07:09.993593 28766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-catalogd"/"catalogserver-cert" Mar 18 09:07:10.016499 master-0 kubenswrapper[28766]: I0318 09:07:10.016413 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy-cluster-autoscaler-operator" Mar 18 09:07:10.048607 master-0 kubenswrapper[28766]: I0318 09:07:10.048526 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Mar 18 09:07:10.062595 master-0 kubenswrapper[28766]: I0318 09:07:10.062534 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"kube-root-ca.crt" Mar 18 
09:07:10.071247 master-0 kubenswrapper[28766]: I0318 09:07:10.071173 28766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-tls" Mar 18 09:07:10.219216 master-0 kubenswrapper[28766]: I0318 09:07:10.215988 28766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Mar 18 09:07:10.250978 master-0 kubenswrapper[28766]: I0318 09:07:10.250899 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Mar 18 09:07:10.282507 master-0 kubenswrapper[28766]: I0318 09:07:10.282445 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Mar 18 09:07:10.295569 master-0 kubenswrapper[28766]: I0318 09:07:10.295281 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Mar 18 09:07:10.312570 master-0 kubenswrapper[28766]: I0318 09:07:10.312516 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Mar 18 09:07:10.321442 master-0 kubenswrapper[28766]: I0318 09:07:10.321373 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"openshift-service-ca.crt" Mar 18 09:07:10.365830 master-0 kubenswrapper[28766]: I0318 09:07:10.365757 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Mar 18 09:07:10.473450 master-0 kubenswrapper[28766]: I0318 09:07:10.473277 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Mar 18 09:07:10.517961 master-0 kubenswrapper[28766]: I0318 09:07:10.517899 28766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Mar 18 09:07:10.518432 master-0 
kubenswrapper[28766]: I0318 09:07:10.518373 28766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Mar 18 09:07:10.681250 master-0 kubenswrapper[28766]: I0318 09:07:10.681164 28766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Mar 18 09:07:10.740335 master-0 kubenswrapper[28766]: I0318 09:07:10.740259 28766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-kube-rbac-proxy-config" Mar 18 09:07:10.770094 master-0 kubenswrapper[28766]: I0318 09:07:10.770031 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"kube-root-ca.crt" Mar 18 09:07:10.798727 master-0 kubenswrapper[28766]: I0318 09:07:10.798655 28766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-6n58x" Mar 18 09:07:10.820334 master-0 kubenswrapper[28766]: I0318 09:07:10.820267 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"trusted-ca-bundle" Mar 18 09:07:10.823523 master-0 kubenswrapper[28766]: I0318 09:07:10.823477 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Mar 18 09:07:10.953882 master-0 kubenswrapper[28766]: I0318 09:07:10.953776 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Mar 18 09:07:11.009133 master-0 kubenswrapper[28766]: I0318 09:07:11.008965 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Mar 18 09:07:11.020650 master-0 kubenswrapper[28766]: I0318 09:07:11.020582 28766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-74fh5" Mar 18 09:07:11.033987 master-0 kubenswrapper[28766]: I0318 09:07:11.033925 28766 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"operator-controller-trusted-ca-bundle" Mar 18 09:07:11.034784 master-0 kubenswrapper[28766]: I0318 09:07:11.034724 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Mar 18 09:07:11.047349 master-0 kubenswrapper[28766]: I0318 09:07:11.047195 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Mar 18 09:07:11.063918 master-0 kubenswrapper[28766]: I0318 09:07:11.063831 28766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Mar 18 09:07:11.067306 master-0 kubenswrapper[28766]: I0318 09:07:11.067231 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"cco-trusted-ca" Mar 18 09:07:11.130057 master-0 kubenswrapper[28766]: I0318 09:07:11.129975 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Mar 18 09:07:11.163794 master-0 kubenswrapper[28766]: I0318 09:07:11.163726 28766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Mar 18 09:07:11.189845 master-0 kubenswrapper[28766]: I0318 09:07:11.189755 28766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-59m7s" Mar 18 09:07:11.217122 master-0 kubenswrapper[28766]: I0318 09:07:11.217063 28766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Mar 18 09:07:11.274539 master-0 kubenswrapper[28766]: I0318 09:07:11.274373 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemeter-trusted-ca-bundle-8i12ta5c71j38" Mar 18 09:07:11.310092 master-0 kubenswrapper[28766]: I0318 09:07:11.310032 
28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Mar 18 09:07:11.337136 master-0 kubenswrapper[28766]: I0318 09:07:11.337083 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Mar 18 09:07:11.353540 master-0 kubenswrapper[28766]: I0318 09:07:11.353460 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Mar 18 09:07:11.394773 master-0 kubenswrapper[28766]: I0318 09:07:11.394663 28766 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Mar 18 09:07:11.480291 master-0 kubenswrapper[28766]: I0318 09:07:11.480235 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemetry-config" Mar 18 09:07:11.550466 master-0 kubenswrapper[28766]: I0318 09:07:11.550257 28766 patch_prober.go:28] interesting pod/console-bd9677648-tq84g container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.96:8443/health\": dial tcp 10.128.0.96:8443: connect: connection refused" start-of-body= Mar 18 09:07:11.550466 master-0 kubenswrapper[28766]: I0318 09:07:11.550353 28766 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-bd9677648-tq84g" podUID="e3d66c24-e87e-489f-8474-277b2add6768" containerName="console" probeResult="failure" output="Get \"https://10.128.0.96:8443/health\": dial tcp 10.128.0.96:8443: connect: connection refused" Mar 18 09:07:11.600962 master-0 kubenswrapper[28766]: I0318 09:07:11.599470 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Mar 18 09:07:11.625097 master-0 kubenswrapper[28766]: I0318 09:07:11.625035 28766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-kube-rbac-proxy-config" Mar 
18 09:07:11.788442 master-0 kubenswrapper[28766]: I0318 09:07:11.788375 28766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-tls" Mar 18 09:07:11.838502 master-0 kubenswrapper[28766]: I0318 09:07:11.838361 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Mar 18 09:07:11.864748 master-0 kubenswrapper[28766]: I0318 09:07:11.864682 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Mar 18 09:07:11.908924 master-0 kubenswrapper[28766]: I0318 09:07:11.908825 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Mar 18 09:07:11.926285 master-0 kubenswrapper[28766]: I0318 09:07:11.926138 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Mar 18 09:07:11.927548 master-0 kubenswrapper[28766]: I0318 09:07:11.926517 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Mar 18 09:07:11.955814 master-0 kubenswrapper[28766]: I0318 09:07:11.955742 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Mar 18 09:07:11.999274 master-0 kubenswrapper[28766]: I0318 09:07:11.999201 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Mar 18 09:07:12.000930 master-0 kubenswrapper[28766]: I0318 09:07:12.000893 28766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-controller-manager-operator"/"cluster-cloud-controller-manager-dockercfg-68m6c" Mar 18 09:07:12.014903 master-0 kubenswrapper[28766]: I0318 09:07:12.013213 28766 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Mar 18 09:07:12.092771 master-0 kubenswrapper[28766]: I0318 09:07:12.092256 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"openshift-service-ca.crt" Mar 18 09:07:12.148929 master-0 kubenswrapper[28766]: I0318 09:07:12.148815 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Mar 18 09:07:12.219407 master-0 kubenswrapper[28766]: I0318 09:07:12.219327 28766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Mar 18 09:07:12.242828 master-0 kubenswrapper[28766]: I0318 09:07:12.242714 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Mar 18 09:07:12.254120 master-0 kubenswrapper[28766]: I0318 09:07:12.254024 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Mar 18 09:07:12.287604 master-0 kubenswrapper[28766]: I0318 09:07:12.287538 28766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Mar 18 09:07:12.331410 master-0 kubenswrapper[28766]: I0318 09:07:12.331339 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"openshift-service-ca.crt" Mar 18 09:07:12.331410 master-0 kubenswrapper[28766]: I0318 09:07:12.331382 28766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Mar 18 09:07:12.347445 master-0 kubenswrapper[28766]: I0318 09:07:12.347270 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"openshift-service-ca.crt" Mar 18 09:07:12.357645 master-0 kubenswrapper[28766]: I0318 09:07:12.357582 28766 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Mar 18 09:07:12.386672 master-0 kubenswrapper[28766]: I0318 09:07:12.386583 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Mar 18 09:07:12.396610 master-0 kubenswrapper[28766]: I0318 09:07:12.396544 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Mar 18 09:07:12.437079 master-0 kubenswrapper[28766]: I0318 09:07:12.436961 28766 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Mar 18 09:07:12.499887 master-0 kubenswrapper[28766]: I0318 09:07:12.499736 28766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-dcx6f" Mar 18 09:07:12.598488 master-0 kubenswrapper[28766]: I0318 09:07:12.598287 28766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-qpbvp" Mar 18 09:07:12.600453 master-0 kubenswrapper[28766]: I0318 09:07:12.600353 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Mar 18 09:07:12.785721 master-0 kubenswrapper[28766]: I0318 09:07:12.785664 28766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Mar 18 09:07:12.858007 master-0 kubenswrapper[28766]: I0318 09:07:12.857792 28766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Mar 18 09:07:12.993341 master-0 kubenswrapper[28766]: I0318 09:07:12.993245 28766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Mar 18 09:07:13.026365 master-0 kubenswrapper[28766]: I0318 09:07:13.026306 28766 reflector.go:368] Caches populated 
for *v1.ConfigMap from object-"openshift-machine-api"/"cluster-baremetal-operator-images" Mar 18 09:07:13.267698 master-0 kubenswrapper[28766]: I0318 09:07:13.267615 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Mar 18 09:07:13.303147 master-0 kubenswrapper[28766]: I0318 09:07:13.302741 28766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-storage-operator"/"cluster-storage-operator-dockercfg-jr5t6" Mar 18 09:07:13.452982 master-0 kubenswrapper[28766]: I0318 09:07:13.452926 28766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-credential-operator"/"cloud-credential-operator-serving-cert" Mar 18 09:07:13.453243 master-0 kubenswrapper[28766]: I0318 09:07:13.453149 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"kube-root-ca.crt" Mar 18 09:07:13.529604 master-0 kubenswrapper[28766]: I0318 09:07:13.529538 28766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Mar 18 09:07:13.586687 master-0 kubenswrapper[28766]: I0318 09:07:13.586635 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Mar 18 09:07:13.599003 master-0 kubenswrapper[28766]: I0318 09:07:13.598952 28766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-dtxm4" Mar 18 09:07:13.738474 master-0 kubenswrapper[28766]: I0318 09:07:13.738419 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Mar 18 09:07:13.755523 master-0 kubenswrapper[28766]: I0318 09:07:13.755473 28766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Mar 18 09:07:13.770145 master-0 kubenswrapper[28766]: I0318 09:07:13.770100 
28766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Mar 18 09:07:13.781925 master-0 kubenswrapper[28766]: I0318 09:07:13.781744 28766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Mar 18 09:07:13.800724 master-0 kubenswrapper[28766]: I0318 09:07:13.800663 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Mar 18 09:07:13.832114 master-0 kubenswrapper[28766]: I0318 09:07:13.832021 28766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Mar 18 09:07:13.928392 master-0 kubenswrapper[28766]: I0318 09:07:13.928305 28766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-2w2dp" Mar 18 09:07:13.966507 master-0 kubenswrapper[28766]: I0318 09:07:13.966441 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Mar 18 09:07:13.988737 master-0 kubenswrapper[28766]: I0318 09:07:13.988639 28766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-storage-operator"/"cluster-storage-operator-serving-cert" Mar 18 09:07:14.021346 master-0 kubenswrapper[28766]: I0318 09:07:14.021255 28766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Mar 18 09:07:14.025297 master-0 kubenswrapper[28766]: I0318 09:07:14.025248 28766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Mar 18 09:07:14.031392 master-0 kubenswrapper[28766]: I0318 09:07:14.030715 28766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Mar 18 09:07:14.051735 master-0 kubenswrapper[28766]: I0318 09:07:14.051154 28766 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Mar 18 09:07:14.202568 master-0 kubenswrapper[28766]: I0318 09:07:14.202515 28766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Mar 18 09:07:14.311085 master-0 kubenswrapper[28766]: I0318 09:07:14.310950 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Mar 18 09:07:14.335499 master-0 kubenswrapper[28766]: I0318 09:07:14.335457 28766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Mar 18 09:07:14.385647 master-0 kubenswrapper[28766]: I0318 09:07:14.385563 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Mar 18 09:07:14.388635 master-0 kubenswrapper[28766]: I0318 09:07:14.388577 28766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-olm-operator"/"cluster-olm-operator-serving-cert" Mar 18 09:07:14.396649 master-0 kubenswrapper[28766]: I0318 09:07:14.396562 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Mar 18 09:07:14.401414 master-0 kubenswrapper[28766]: I0318 09:07:14.401350 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Mar 18 09:07:14.440759 master-0 kubenswrapper[28766]: I0318 09:07:14.440673 28766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Mar 18 09:07:14.449534 master-0 kubenswrapper[28766]: I0318 09:07:14.449488 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Mar 18 09:07:14.550701 master-0 kubenswrapper[28766]: I0318 09:07:14.550592 
28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Mar 18 09:07:14.575283 master-0 kubenswrapper[28766]: I0318 09:07:14.575130 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Mar 18 09:07:14.582970 master-0 kubenswrapper[28766]: I0318 09:07:14.582917 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Mar 18 09:07:14.638768 master-0 kubenswrapper[28766]: I0318 09:07:14.638664 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Mar 18 09:07:14.726348 master-0 kubenswrapper[28766]: I0318 09:07:14.726275 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"openshift-service-ca.crt" Mar 18 09:07:14.812751 master-0 kubenswrapper[28766]: I0318 09:07:14.812693 28766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-svhdx" Mar 18 09:07:14.816028 master-0 kubenswrapper[28766]: I0318 09:07:14.815966 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Mar 18 09:07:14.883912 master-0 kubenswrapper[28766]: I0318 09:07:14.883677 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kubelet-serving-ca-bundle" Mar 18 09:07:14.945641 master-0 kubenswrapper[28766]: I0318 09:07:14.945571 28766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Mar 18 09:07:14.989893 master-0 kubenswrapper[28766]: I0318 09:07:14.989790 28766 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"] Mar 18 09:07:14.990205 master-0 kubenswrapper[28766]: I0318 09:07:14.990129 28766 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" podUID="85632c1cec8974aa874834e4cfff4c77" containerName="startup-monitor" containerID="cri-o://1b8e5bc18d4a185d0983957e7565a83ea0b52d5da432104795a45ec28523f2cb" gracePeriod=5 Mar 18 09:07:15.044814 master-0 kubenswrapper[28766]: I0318 09:07:15.044747 28766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-6tztw" Mar 18 09:07:15.059443 master-0 kubenswrapper[28766]: I0318 09:07:15.059385 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Mar 18 09:07:15.182218 master-0 kubenswrapper[28766]: I0318 09:07:15.181903 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Mar 18 09:07:15.277963 master-0 kubenswrapper[28766]: I0318 09:07:15.277908 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"kube-root-ca.crt" Mar 18 09:07:15.396357 master-0 kubenswrapper[28766]: I0318 09:07:15.396301 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Mar 18 09:07:15.398694 master-0 kubenswrapper[28766]: I0318 09:07:15.398662 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Mar 18 09:07:15.406494 master-0 kubenswrapper[28766]: I0318 09:07:15.406458 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Mar 18 09:07:15.408564 master-0 kubenswrapper[28766]: I0318 09:07:15.408536 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Mar 18 09:07:15.415408 master-0 kubenswrapper[28766]: I0318 09:07:15.415370 28766 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-images" Mar 18 09:07:15.430107 master-0 kubenswrapper[28766]: I0318 09:07:15.430064 28766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Mar 18 09:07:15.457024 master-0 kubenswrapper[28766]: I0318 09:07:15.456904 28766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Mar 18 09:07:15.473998 master-0 kubenswrapper[28766]: I0318 09:07:15.473930 28766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Mar 18 09:07:15.631680 master-0 kubenswrapper[28766]: I0318 09:07:15.631622 28766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-rtlhv" Mar 18 09:07:15.642480 master-0 kubenswrapper[28766]: I0318 09:07:15.642432 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Mar 18 09:07:15.711475 master-0 kubenswrapper[28766]: I0318 09:07:15.711292 28766 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Mar 18 09:07:15.817402 master-0 kubenswrapper[28766]: I0318 09:07:15.817320 28766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Mar 18 09:07:15.939656 master-0 kubenswrapper[28766]: I0318 09:07:15.939594 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Mar 18 09:07:15.991108 master-0 kubenswrapper[28766]: I0318 09:07:15.991037 28766 patch_prober.go:28] interesting pod/console-5644577ff9-fncm4 container/console namespace/openshift-console: Startup probe status=failure output="Get 
\"https://10.128.0.100:8443/health\": dial tcp 10.128.0.100:8443: connect: connection refused" start-of-body= Mar 18 09:07:15.991503 master-0 kubenswrapper[28766]: I0318 09:07:15.991136 28766 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-5644577ff9-fncm4" podUID="adbe8207-26d0-4d0e-aacc-5f321184b53c" containerName="console" probeResult="failure" output="Get \"https://10.128.0.100:8443/health\": dial tcp 10.128.0.100:8443: connect: connection refused" Mar 18 09:07:16.050819 master-0 kubenswrapper[28766]: I0318 09:07:16.050734 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Mar 18 09:07:16.060546 master-0 kubenswrapper[28766]: I0318 09:07:16.060484 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Mar 18 09:07:16.061970 master-0 kubenswrapper[28766]: I0318 09:07:16.061938 28766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-webhook-server-cert" Mar 18 09:07:16.082316 master-0 kubenswrapper[28766]: I0318 09:07:16.082273 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-olm-operator"/"kube-root-ca.crt" Mar 18 09:07:16.093393 master-0 kubenswrapper[28766]: I0318 09:07:16.093314 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Mar 18 09:07:16.353901 master-0 kubenswrapper[28766]: I0318 09:07:16.353744 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"whereabouts-flatfile-config" Mar 18 09:07:16.384541 master-0 kubenswrapper[28766]: I0318 09:07:16.384466 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Mar 18 09:07:16.441803 master-0 kubenswrapper[28766]: I0318 09:07:16.439410 28766 reflector.go:368] 
Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt"
Mar 18 09:07:16.450451 master-0 kubenswrapper[28766]: I0318 09:07:16.450385 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides"
Mar 18 09:07:16.528381 master-0 kubenswrapper[28766]: I0318 09:07:16.528314 28766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert"
Mar 18 09:07:16.571481 master-0 kubenswrapper[28766]: I0318 09:07:16.571411 28766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-z6dpv"
Mar 18 09:07:16.823328 master-0 kubenswrapper[28766]: I0318 09:07:16.823266 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config"
Mar 18 09:07:16.837162 master-0 kubenswrapper[28766]: I0318 09:07:16.837120 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt"
Mar 18 09:07:16.845403 master-0 kubenswrapper[28766]: I0318 09:07:16.845353 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt"
Mar 18 09:07:16.904044 master-0 kubenswrapper[28766]: I0318 09:07:16.903987 28766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls"
Mar 18 09:07:16.985029 master-0 kubenswrapper[28766]: I0318 09:07:16.984932 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images"
Mar 18 09:07:17.077744 master-0 kubenswrapper[28766]: I0318 09:07:17.077557 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config"
Mar 18 09:07:17.190609 master-0 kubenswrapper[28766]: I0318 09:07:17.190540 28766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-console/networking-console-plugin-7c6b76c555-476ck"]
Mar 18 09:07:17.190890 master-0 kubenswrapper[28766]: E0318 09:07:17.190871 28766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c577244-74c7-4a1c-8fec-0a89bd7e3ed1" containerName="console"
Mar 18 09:07:17.190890 master-0 kubenswrapper[28766]: I0318 09:07:17.190889 28766 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c577244-74c7-4a1c-8fec-0a89bd7e3ed1" containerName="console"
Mar 18 09:07:17.190983 master-0 kubenswrapper[28766]: E0318 09:07:17.190914 28766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="85632c1cec8974aa874834e4cfff4c77" containerName="startup-monitor"
Mar 18 09:07:17.190983 master-0 kubenswrapper[28766]: I0318 09:07:17.190928 28766 state_mem.go:107] "Deleted CPUSet assignment" podUID="85632c1cec8974aa874834e4cfff4c77" containerName="startup-monitor"
Mar 18 09:07:17.190983 master-0 kubenswrapper[28766]: E0318 09:07:17.190956 28766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5d596ea-c73d-4619-b3a5-fd52d3bebedd" containerName="installer"
Mar 18 09:07:17.190983 master-0 kubenswrapper[28766]: I0318 09:07:17.190965 28766 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5d596ea-c73d-4619-b3a5-fd52d3bebedd" containerName="installer"
Mar 18 09:07:17.191098 master-0 kubenswrapper[28766]: I0318 09:07:17.191085 28766 memory_manager.go:354] "RemoveStaleState removing state" podUID="9c577244-74c7-4a1c-8fec-0a89bd7e3ed1" containerName="console"
Mar 18 09:07:17.191130 master-0 kubenswrapper[28766]: I0318 09:07:17.191119 28766 memory_manager.go:354] "RemoveStaleState removing state" podUID="b5d596ea-c73d-4619-b3a5-fd52d3bebedd" containerName="installer"
Mar 18 09:07:17.191130 master-0 kubenswrapper[28766]: I0318 09:07:17.191129 28766 memory_manager.go:354] "RemoveStaleState removing state" podUID="85632c1cec8974aa874834e4cfff4c77" containerName="startup-monitor"
Mar 18 09:07:17.191637 master-0 kubenswrapper[28766]: I0318 09:07:17.191606 28766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-7c6b76c555-476ck"
Mar 18 09:07:17.197663 master-0 kubenswrapper[28766]: I0318 09:07:17.197623 28766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert"
Mar 18 09:07:17.198702 master-0 kubenswrapper[28766]: I0318 09:07:17.198659 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin"
Mar 18 09:07:17.210406 master-0 kubenswrapper[28766]: I0318 09:07:17.210353 28766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-network-console/networking-console-plugin-7c6b76c555-476ck"]
Mar 18 09:07:17.265908 master-0 kubenswrapper[28766]: I0318 09:07:17.265782 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt"
Mar 18 09:07:17.300443 master-0 kubenswrapper[28766]: I0318 09:07:17.300390 28766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client-dockercfg-xs8t8"
Mar 18 09:07:17.317328 master-0 kubenswrapper[28766]: I0318 09:07:17.317146 28766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client"
Mar 18 09:07:17.346724 master-0 kubenswrapper[28766]: I0318 09:07:17.346610 28766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs"
Mar 18 09:07:17.347450 master-0 kubenswrapper[28766]: I0318 09:07:17.347367 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/a00bebfb-2c54-4888-9200-a5b96420fd37-nginx-conf\") pod \"networking-console-plugin-7c6b76c555-476ck\" (UID: \"a00bebfb-2c54-4888-9200-a5b96420fd37\") " pod="openshift-network-console/networking-console-plugin-7c6b76c555-476ck"
Mar 18 09:07:17.347573 master-0 kubenswrapper[28766]: I0318 09:07:17.347461 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/a00bebfb-2c54-4888-9200-a5b96420fd37-networking-console-plugin-cert\") pod \"networking-console-plugin-7c6b76c555-476ck\" (UID: \"a00bebfb-2c54-4888-9200-a5b96420fd37\") " pod="openshift-network-console/networking-console-plugin-7c6b76c555-476ck"
Mar 18 09:07:17.348058 master-0 kubenswrapper[28766]: I0318 09:07:17.348033 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt"
Mar 18 09:07:17.359495 master-0 kubenswrapper[28766]: I0318 09:07:17.359453 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt"
Mar 18 09:07:17.449340 master-0 kubenswrapper[28766]: I0318 09:07:17.449116 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/a00bebfb-2c54-4888-9200-a5b96420fd37-nginx-conf\") pod \"networking-console-plugin-7c6b76c555-476ck\" (UID: \"a00bebfb-2c54-4888-9200-a5b96420fd37\") " pod="openshift-network-console/networking-console-plugin-7c6b76c555-476ck"
Mar 18 09:07:17.450150 master-0 kubenswrapper[28766]: I0318 09:07:17.449384 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/a00bebfb-2c54-4888-9200-a5b96420fd37-networking-console-plugin-cert\") pod \"networking-console-plugin-7c6b76c555-476ck\" (UID: \"a00bebfb-2c54-4888-9200-a5b96420fd37\") " pod="openshift-network-console/networking-console-plugin-7c6b76c555-476ck"
Mar 18 09:07:17.450150 master-0 kubenswrapper[28766]: E0318 09:07:17.449593 28766 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: secret "networking-console-plugin-cert" not found
Mar 18 09:07:17.450150 master-0 kubenswrapper[28766]: E0318 09:07:17.449718 28766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a00bebfb-2c54-4888-9200-a5b96420fd37-networking-console-plugin-cert podName:a00bebfb-2c54-4888-9200-a5b96420fd37 nodeName:}" failed. No retries permitted until 2026-03-18 09:07:17.949687347 +0000 UTC m=+190.963946043 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/a00bebfb-2c54-4888-9200-a5b96420fd37-networking-console-plugin-cert") pod "networking-console-plugin-7c6b76c555-476ck" (UID: "a00bebfb-2c54-4888-9200-a5b96420fd37") : secret "networking-console-plugin-cert" not found
Mar 18 09:07:17.450150 master-0 kubenswrapper[28766]: I0318 09:07:17.450065 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/a00bebfb-2c54-4888-9200-a5b96420fd37-nginx-conf\") pod \"networking-console-plugin-7c6b76c555-476ck\" (UID: \"a00bebfb-2c54-4888-9200-a5b96420fd37\") " pod="openshift-network-console/networking-console-plugin-7c6b76c555-476ck"
Mar 18 09:07:17.474539 master-0 kubenswrapper[28766]: I0318 09:07:17.474487 28766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-bpz6r"
Mar 18 09:07:17.546245 master-0 kubenswrapper[28766]: I0318 09:07:17.546092 28766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-client-certs"
Mar 18 09:07:17.620543 master-0 kubenswrapper[28766]: I0318 09:07:17.620416 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca"
Mar 18 09:07:17.653821 master-0 kubenswrapper[28766]: I0318 09:07:17.653776 28766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-s7cph"
Mar 18 09:07:17.676473 master-0 kubenswrapper[28766]: I0318 09:07:17.676442 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt"
Mar 18 09:07:17.764495 master-0 kubenswrapper[28766]: I0318 09:07:17.764337 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Mar 18 09:07:17.819314 master-0 kubenswrapper[28766]: I0318 09:07:17.819252 28766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client-kube-rbac-proxy-config"
Mar 18 09:07:17.819592 master-0 kubenswrapper[28766]: I0318 09:07:17.819344 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt"
Mar 18 09:07:17.917573 master-0 kubenswrapper[28766]: I0318 09:07:17.917439 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt"
Mar 18 09:07:17.921777 master-0 kubenswrapper[28766]: I0318 09:07:17.921742 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-client-ca"
Mar 18 09:07:17.956485 master-0 kubenswrapper[28766]: I0318 09:07:17.956413 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/a00bebfb-2c54-4888-9200-a5b96420fd37-networking-console-plugin-cert\") pod \"networking-console-plugin-7c6b76c555-476ck\" (UID: \"a00bebfb-2c54-4888-9200-a5b96420fd37\") " pod="openshift-network-console/networking-console-plugin-7c6b76c555-476ck"
Mar 18 09:07:17.957137 master-0 kubenswrapper[28766]: E0318 09:07:17.957105 28766 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: secret "networking-console-plugin-cert" not found
Mar 18 09:07:17.957210 master-0 kubenswrapper[28766]: E0318 09:07:17.957164 28766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a00bebfb-2c54-4888-9200-a5b96420fd37-networking-console-plugin-cert podName:a00bebfb-2c54-4888-9200-a5b96420fd37 nodeName:}" failed. No retries permitted until 2026-03-18 09:07:18.957146303 +0000 UTC m=+191.971404969 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/a00bebfb-2c54-4888-9200-a5b96420fd37-networking-console-plugin-cert") pod "networking-console-plugin-7c6b76c555-476ck" (UID: "a00bebfb-2c54-4888-9200-a5b96420fd37") : secret "networking-console-plugin-cert" not found
Mar 18 09:07:18.190629 master-0 kubenswrapper[28766]: I0318 09:07:18.190510 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"kube-root-ca.crt"
Mar 18 09:07:18.461203 master-0 kubenswrapper[28766]: I0318 09:07:18.461089 28766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config"
Mar 18 09:07:18.709182 master-0 kubenswrapper[28766]: I0318 09:07:18.709121 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt"
Mar 18 09:07:18.773373 master-0 kubenswrapper[28766]: I0318 09:07:18.773308 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt"
Mar 18 09:07:18.787789 master-0 kubenswrapper[28766]: I0318 09:07:18.787737 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt"
Mar 18 09:07:18.984048 master-0 kubenswrapper[28766]: I0318 09:07:18.982576 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/a00bebfb-2c54-4888-9200-a5b96420fd37-networking-console-plugin-cert\") pod \"networking-console-plugin-7c6b76c555-476ck\" (UID: \"a00bebfb-2c54-4888-9200-a5b96420fd37\") " pod="openshift-network-console/networking-console-plugin-7c6b76c555-476ck"
Mar 18 09:07:18.984048 master-0 kubenswrapper[28766]: E0318 09:07:18.982903 28766 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: secret "networking-console-plugin-cert" not found
Mar 18 09:07:18.984048 master-0 kubenswrapper[28766]: E0318 09:07:18.982989 28766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a00bebfb-2c54-4888-9200-a5b96420fd37-networking-console-plugin-cert podName:a00bebfb-2c54-4888-9200-a5b96420fd37 nodeName:}" failed. No retries permitted until 2026-03-18 09:07:20.982961474 +0000 UTC m=+193.997220170 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/a00bebfb-2c54-4888-9200-a5b96420fd37-networking-console-plugin-cert") pod "networking-console-plugin-7c6b76c555-476ck" (UID: "a00bebfb-2c54-4888-9200-a5b96420fd37") : secret "networking-console-plugin-cert" not found
Mar 18 09:07:20.567308 master-0 kubenswrapper[28766]: I0318 09:07:20.567256 28766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-master-0_85632c1cec8974aa874834e4cfff4c77/startup-monitor/0.log"
Mar 18 09:07:20.567940 master-0 kubenswrapper[28766]: I0318 09:07:20.567336 28766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 18 09:07:20.583764 master-0 kubenswrapper[28766]: I0318 09:07:20.583713 28766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-master-0_85632c1cec8974aa874834e4cfff4c77/startup-monitor/0.log"
Mar 18 09:07:20.584016 master-0 kubenswrapper[28766]: I0318 09:07:20.583777 28766 generic.go:334] "Generic (PLEG): container finished" podID="85632c1cec8974aa874834e4cfff4c77" containerID="1b8e5bc18d4a185d0983957e7565a83ea0b52d5da432104795a45ec28523f2cb" exitCode=137
Mar 18 09:07:20.584016 master-0 kubenswrapper[28766]: I0318 09:07:20.583834 28766 scope.go:117] "RemoveContainer" containerID="1b8e5bc18d4a185d0983957e7565a83ea0b52d5da432104795a45ec28523f2cb"
Mar 18 09:07:20.584016 master-0 kubenswrapper[28766]: I0318 09:07:20.583867 28766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 18 09:07:20.602971 master-0 kubenswrapper[28766]: I0318 09:07:20.602927 28766 scope.go:117] "RemoveContainer" containerID="1b8e5bc18d4a185d0983957e7565a83ea0b52d5da432104795a45ec28523f2cb"
Mar 18 09:07:20.603478 master-0 kubenswrapper[28766]: E0318 09:07:20.603441 28766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1b8e5bc18d4a185d0983957e7565a83ea0b52d5da432104795a45ec28523f2cb\": container with ID starting with 1b8e5bc18d4a185d0983957e7565a83ea0b52d5da432104795a45ec28523f2cb not found: ID does not exist" containerID="1b8e5bc18d4a185d0983957e7565a83ea0b52d5da432104795a45ec28523f2cb"
Mar 18 09:07:20.603544 master-0 kubenswrapper[28766]: I0318 09:07:20.603495 28766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1b8e5bc18d4a185d0983957e7565a83ea0b52d5da432104795a45ec28523f2cb"} err="failed to get container status \"1b8e5bc18d4a185d0983957e7565a83ea0b52d5da432104795a45ec28523f2cb\": rpc error: code = NotFound desc = could not find container \"1b8e5bc18d4a185d0983957e7565a83ea0b52d5da432104795a45ec28523f2cb\": container with ID starting with 1b8e5bc18d4a185d0983957e7565a83ea0b52d5da432104795a45ec28523f2cb not found: ID does not exist"
Mar 18 09:07:20.710997 master-0 kubenswrapper[28766]: I0318 09:07:20.710819 28766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/85632c1cec8974aa874834e4cfff4c77-resource-dir\") pod \"85632c1cec8974aa874834e4cfff4c77\" (UID: \"85632c1cec8974aa874834e4cfff4c77\") "
Mar 18 09:07:20.711219 master-0 kubenswrapper[28766]: I0318 09:07:20.711017 28766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/85632c1cec8974aa874834e4cfff4c77-manifests\") pod \"85632c1cec8974aa874834e4cfff4c77\" (UID: \"85632c1cec8974aa874834e4cfff4c77\") "
Mar 18 09:07:20.711219 master-0 kubenswrapper[28766]: I0318 09:07:20.710940 28766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/85632c1cec8974aa874834e4cfff4c77-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "85632c1cec8974aa874834e4cfff4c77" (UID: "85632c1cec8974aa874834e4cfff4c77"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 18 09:07:20.711320 master-0 kubenswrapper[28766]: I0318 09:07:20.711234 28766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/85632c1cec8974aa874834e4cfff4c77-manifests" (OuterVolumeSpecName: "manifests") pod "85632c1cec8974aa874834e4cfff4c77" (UID: "85632c1cec8974aa874834e4cfff4c77"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 18 09:07:20.711320 master-0 kubenswrapper[28766]: I0318 09:07:20.711269 28766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/85632c1cec8974aa874834e4cfff4c77-var-lock\") pod \"85632c1cec8974aa874834e4cfff4c77\" (UID: \"85632c1cec8974aa874834e4cfff4c77\") "
Mar 18 09:07:20.711417 master-0 kubenswrapper[28766]: I0318 09:07:20.711301 28766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/85632c1cec8974aa874834e4cfff4c77-var-lock" (OuterVolumeSpecName: "var-lock") pod "85632c1cec8974aa874834e4cfff4c77" (UID: "85632c1cec8974aa874834e4cfff4c77"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 18 09:07:20.711417 master-0 kubenswrapper[28766]: I0318 09:07:20.711390 28766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/85632c1cec8974aa874834e4cfff4c77-var-log\") pod \"85632c1cec8974aa874834e4cfff4c77\" (UID: \"85632c1cec8974aa874834e4cfff4c77\") "
Mar 18 09:07:20.711507 master-0 kubenswrapper[28766]: I0318 09:07:20.711471 28766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/85632c1cec8974aa874834e4cfff4c77-pod-resource-dir\") pod \"85632c1cec8974aa874834e4cfff4c77\" (UID: \"85632c1cec8974aa874834e4cfff4c77\") "
Mar 18 09:07:20.711507 master-0 kubenswrapper[28766]: I0318 09:07:20.711480 28766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/85632c1cec8974aa874834e4cfff4c77-var-log" (OuterVolumeSpecName: "var-log") pod "85632c1cec8974aa874834e4cfff4c77" (UID: "85632c1cec8974aa874834e4cfff4c77"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 18 09:07:20.712148 master-0 kubenswrapper[28766]: I0318 09:07:20.712116 28766 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/85632c1cec8974aa874834e4cfff4c77-var-lock\") on node \"master-0\" DevicePath \"\""
Mar 18 09:07:20.712148 master-0 kubenswrapper[28766]: I0318 09:07:20.712143 28766 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/85632c1cec8974aa874834e4cfff4c77-var-log\") on node \"master-0\" DevicePath \"\""
Mar 18 09:07:20.712264 master-0 kubenswrapper[28766]: I0318 09:07:20.712155 28766 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/85632c1cec8974aa874834e4cfff4c77-resource-dir\") on node \"master-0\" DevicePath \"\""
Mar 18 09:07:20.712264 master-0 kubenswrapper[28766]: I0318 09:07:20.712169 28766 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/85632c1cec8974aa874834e4cfff4c77-manifests\") on node \"master-0\" DevicePath \"\""
Mar 18 09:07:20.718843 master-0 kubenswrapper[28766]: I0318 09:07:20.718806 28766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/85632c1cec8974aa874834e4cfff4c77-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "85632c1cec8974aa874834e4cfff4c77" (UID: "85632c1cec8974aa874834e4cfff4c77"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 18 09:07:20.813586 master-0 kubenswrapper[28766]: I0318 09:07:20.813459 28766 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/85632c1cec8974aa874834e4cfff4c77-pod-resource-dir\") on node \"master-0\" DevicePath \"\""
Mar 18 09:07:21.017257 master-0 kubenswrapper[28766]: E0318 09:07:21.017172 28766 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: secret "networking-console-plugin-cert" not found
Mar 18 09:07:21.017518 master-0 kubenswrapper[28766]: E0318 09:07:21.017301 28766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a00bebfb-2c54-4888-9200-a5b96420fd37-networking-console-plugin-cert podName:a00bebfb-2c54-4888-9200-a5b96420fd37 nodeName:}" failed. No retries permitted until 2026-03-18 09:07:25.017272732 +0000 UTC m=+198.031531428 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/a00bebfb-2c54-4888-9200-a5b96420fd37-networking-console-plugin-cert") pod "networking-console-plugin-7c6b76c555-476ck" (UID: "a00bebfb-2c54-4888-9200-a5b96420fd37") : secret "networking-console-plugin-cert" not found
Mar 18 09:07:21.017924 master-0 kubenswrapper[28766]: I0318 09:07:21.017003 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/a00bebfb-2c54-4888-9200-a5b96420fd37-networking-console-plugin-cert\") pod \"networking-console-plugin-7c6b76c555-476ck\" (UID: \"a00bebfb-2c54-4888-9200-a5b96420fd37\") " pod="openshift-network-console/networking-console-plugin-7c6b76c555-476ck"
Mar 18 09:07:21.244143 master-0 kubenswrapper[28766]: I0318 09:07:21.244063 28766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="85632c1cec8974aa874834e4cfff4c77" path="/var/lib/kubelet/pods/85632c1cec8974aa874834e4cfff4c77/volumes"
Mar 18 09:07:21.554368 master-0 kubenswrapper[28766]: I0318 09:07:21.554236 28766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-bd9677648-tq84g"
Mar 18 09:07:21.557771 master-0 kubenswrapper[28766]: I0318 09:07:21.557723 28766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-bd9677648-tq84g"
Mar 18 09:07:25.096263 master-0 kubenswrapper[28766]: I0318 09:07:25.096141 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/a00bebfb-2c54-4888-9200-a5b96420fd37-networking-console-plugin-cert\") pod \"networking-console-plugin-7c6b76c555-476ck\" (UID: \"a00bebfb-2c54-4888-9200-a5b96420fd37\") " pod="openshift-network-console/networking-console-plugin-7c6b76c555-476ck"
Mar 18 09:07:25.097522 master-0 kubenswrapper[28766]: E0318 09:07:25.096485 28766 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: secret "networking-console-plugin-cert" not found
Mar 18 09:07:25.097522 master-0 kubenswrapper[28766]: E0318 09:07:25.096636 28766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a00bebfb-2c54-4888-9200-a5b96420fd37-networking-console-plugin-cert podName:a00bebfb-2c54-4888-9200-a5b96420fd37 nodeName:}" failed. No retries permitted until 2026-03-18 09:07:33.096603314 +0000 UTC m=+206.110862010 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/a00bebfb-2c54-4888-9200-a5b96420fd37-networking-console-plugin-cert") pod "networking-console-plugin-7c6b76c555-476ck" (UID: "a00bebfb-2c54-4888-9200-a5b96420fd37") : secret "networking-console-plugin-cert" not found
Mar 18 09:07:25.997445 master-0 kubenswrapper[28766]: I0318 09:07:25.997314 28766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-5644577ff9-fncm4"
Mar 18 09:07:26.007417 master-0 kubenswrapper[28766]: I0318 09:07:26.007332 28766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-5644577ff9-fncm4"
Mar 18 09:07:26.113265 master-0 kubenswrapper[28766]: I0318 09:07:26.113190 28766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-bd9677648-tq84g"]
Mar 18 09:07:33.131375 master-0 kubenswrapper[28766]: I0318 09:07:33.131287 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/a00bebfb-2c54-4888-9200-a5b96420fd37-networking-console-plugin-cert\") pod \"networking-console-plugin-7c6b76c555-476ck\" (UID: \"a00bebfb-2c54-4888-9200-a5b96420fd37\") " pod="openshift-network-console/networking-console-plugin-7c6b76c555-476ck"
Mar 18 09:07:33.132803 master-0 kubenswrapper[28766]: E0318 09:07:33.131558 28766 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: secret "networking-console-plugin-cert" not found
Mar 18 09:07:33.132803 master-0 kubenswrapper[28766]: E0318 09:07:33.131626 28766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a00bebfb-2c54-4888-9200-a5b96420fd37-networking-console-plugin-cert podName:a00bebfb-2c54-4888-9200-a5b96420fd37 nodeName:}" failed. No retries permitted until 2026-03-18 09:07:49.131603217 +0000 UTC m=+222.145861893 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/a00bebfb-2c54-4888-9200-a5b96420fd37-networking-console-plugin-cert") pod "networking-console-plugin-7c6b76c555-476ck" (UID: "a00bebfb-2c54-4888-9200-a5b96420fd37") : secret "networking-console-plugin-cert" not found
Mar 18 09:07:33.367180 master-0 kubenswrapper[28766]: I0318 09:07:33.367093 28766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert"
Mar 18 09:07:36.306461 master-0 kubenswrapper[28766]: I0318 09:07:36.306327 28766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls"
Mar 18 09:07:38.739424 master-0 kubenswrapper[28766]: I0318 09:07:38.739133 28766 generic.go:334] "Generic (PLEG): container finished" podID="34f2ec5e-cb68-415b-a9f2-5b7f10fa9bbe" containerID="6e92c769d9c45cb0821669a8b7574a372860e2d7111a0a59b3e08fac2596304e" exitCode=0
Mar 18 09:07:38.739424 master-0 kubenswrapper[28766]: I0318 09:07:38.739215 28766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-89ccd998f-bcwsv" event={"ID":"34f2ec5e-cb68-415b-a9f2-5b7f10fa9bbe","Type":"ContainerDied","Data":"6e92c769d9c45cb0821669a8b7574a372860e2d7111a0a59b3e08fac2596304e"}
Mar 18 09:07:38.739424 master-0 kubenswrapper[28766]: I0318 09:07:38.739322 28766 scope.go:117] "RemoveContainer" containerID="75d1410d48296cb4f2446dcf35dcfdb58ad3083bc984cecb00db26ae1fc3d758"
Mar 18 09:07:38.740687 master-0 kubenswrapper[28766]: I0318 09:07:38.739979 28766 scope.go:117] "RemoveContainer" containerID="6e92c769d9c45cb0821669a8b7574a372860e2d7111a0a59b3e08fac2596304e"
Mar 18 09:07:39.752691 master-0 kubenswrapper[28766]: I0318 09:07:39.752590 28766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-89ccd998f-bcwsv" event={"ID":"34f2ec5e-cb68-415b-a9f2-5b7f10fa9bbe","Type":"ContainerStarted","Data":"825a79da6d5decf07fc4ce04e605c5027b9a1ec58b61f34e20a5f55a83b94a4a"}
Mar 18 09:07:39.755118 master-0 kubenswrapper[28766]: I0318 09:07:39.755026 28766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-89ccd998f-bcwsv"
Mar 18 09:07:39.762100 master-0 kubenswrapper[28766]: I0318 09:07:39.762041 28766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-89ccd998f-bcwsv"
Mar 18 09:07:40.197410 master-0 kubenswrapper[28766]: I0318 09:07:40.197201 28766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert"
Mar 18 09:07:43.200925 master-0 kubenswrapper[28766]: I0318 09:07:43.200826 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt"
Mar 18 09:07:43.676287 master-0 kubenswrapper[28766]: I0318 09:07:43.676220 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Mar 18 09:07:49.227268 master-0 kubenswrapper[28766]: I0318 09:07:49.227162 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/a00bebfb-2c54-4888-9200-a5b96420fd37-networking-console-plugin-cert\") pod \"networking-console-plugin-7c6b76c555-476ck\" (UID: \"a00bebfb-2c54-4888-9200-a5b96420fd37\") " pod="openshift-network-console/networking-console-plugin-7c6b76c555-476ck"
Mar 18 09:07:49.228279 master-0 kubenswrapper[28766]: E0318 09:07:49.227394 28766 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: secret "networking-console-plugin-cert" not found
Mar 18 09:07:49.228279 master-0 kubenswrapper[28766]: E0318 09:07:49.227494 28766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a00bebfb-2c54-4888-9200-a5b96420fd37-networking-console-plugin-cert podName:a00bebfb-2c54-4888-9200-a5b96420fd37 nodeName:}" failed. No retries permitted until 2026-03-18 09:08:21.227471576 +0000 UTC m=+254.241730242 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/a00bebfb-2c54-4888-9200-a5b96420fd37-networking-console-plugin-cert") pod "networking-console-plugin-7c6b76c555-476ck" (UID: "a00bebfb-2c54-4888-9200-a5b96420fd37") : secret "networking-console-plugin-cert" not found
Mar 18 09:07:51.180377 master-0 kubenswrapper[28766]: I0318 09:07:51.180287 28766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-bd9677648-tq84g" podUID="e3d66c24-e87e-489f-8474-277b2add6768" containerName="console" containerID="cri-o://4970abeeaa8b1ae3a4db6508e783a24b87b1e4132fa771ab0840ed593098fb55" gracePeriod=15
Mar 18 09:07:51.666566 master-0 kubenswrapper[28766]: I0318 09:07:51.665692 28766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-bd9677648-tq84g_e3d66c24-e87e-489f-8474-277b2add6768/console/0.log"
Mar 18 09:07:51.666566 master-0 kubenswrapper[28766]: I0318 09:07:51.665836 28766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-bd9677648-tq84g"
Mar 18 09:07:51.769049 master-0 kubenswrapper[28766]: I0318 09:07:51.768818 28766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert"
Mar 18 09:07:51.774231 master-0 kubenswrapper[28766]: I0318 09:07:51.774172 28766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/e3d66c24-e87e-489f-8474-277b2add6768-service-ca\") pod \"e3d66c24-e87e-489f-8474-277b2add6768\" (UID: \"e3d66c24-e87e-489f-8474-277b2add6768\") "
Mar 18 09:07:51.774451 master-0 kubenswrapper[28766]: I0318 09:07:51.774406 28766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v8bqn\" (UniqueName: \"kubernetes.io/projected/e3d66c24-e87e-489f-8474-277b2add6768-kube-api-access-v8bqn\") pod \"e3d66c24-e87e-489f-8474-277b2add6768\" (UID: \"e3d66c24-e87e-489f-8474-277b2add6768\") "
Mar 18 09:07:51.774501 master-0 kubenswrapper[28766]: I0318 09:07:51.774471 28766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/e3d66c24-e87e-489f-8474-277b2add6768-console-serving-cert\") pod \"e3d66c24-e87e-489f-8474-277b2add6768\" (UID: \"e3d66c24-e87e-489f-8474-277b2add6768\") "
Mar 18 09:07:51.774548 master-0 kubenswrapper[28766]: I0318 09:07:51.774523 28766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e3d66c24-e87e-489f-8474-277b2add6768-trusted-ca-bundle\") pod \"e3d66c24-e87e-489f-8474-277b2add6768\" (UID: \"e3d66c24-e87e-489f-8474-277b2add6768\") "
Mar 18 09:07:51.774616 master-0 kubenswrapper[28766]: I0318 09:07:51.774586 28766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/e3d66c24-e87e-489f-8474-277b2add6768-console-oauth-config\") pod \"e3d66c24-e87e-489f-8474-277b2add6768\" (UID: \"e3d66c24-e87e-489f-8474-277b2add6768\") "
Mar 18 09:07:51.774670 master-0 kubenswrapper[28766]: I0318 09:07:51.774628 28766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e3d66c24-e87e-489f-8474-277b2add6768-service-ca" (OuterVolumeSpecName: "service-ca") pod "e3d66c24-e87e-489f-8474-277b2add6768" (UID: "e3d66c24-e87e-489f-8474-277b2add6768"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 18 09:07:51.774802 master-0 kubenswrapper[28766]: I0318 09:07:51.774770 28766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/e3d66c24-e87e-489f-8474-277b2add6768-console-config\") pod \"e3d66c24-e87e-489f-8474-277b2add6768\" (UID: \"e3d66c24-e87e-489f-8474-277b2add6768\") "
Mar 18 09:07:51.774909 master-0 kubenswrapper[28766]: I0318 09:07:51.774842 28766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/e3d66c24-e87e-489f-8474-277b2add6768-oauth-serving-cert\") pod \"e3d66c24-e87e-489f-8474-277b2add6768\" (UID: \"e3d66c24-e87e-489f-8474-277b2add6768\") "
Mar 18 09:07:51.775425 master-0 kubenswrapper[28766]: I0318 09:07:51.775392 28766 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/e3d66c24-e87e-489f-8474-277b2add6768-service-ca\") on node \"master-0\" DevicePath \"\""
Mar 18 09:07:51.775709 master-0 kubenswrapper[28766]: I0318 09:07:51.775685 28766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e3d66c24-e87e-489f-8474-277b2add6768-console-config" (OuterVolumeSpecName: "console-config") pod "e3d66c24-e87e-489f-8474-277b2add6768" (UID: "e3d66c24-e87e-489f-8474-277b2add6768"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 18 09:07:51.775813 master-0 kubenswrapper[28766]: I0318 09:07:51.775756 28766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e3d66c24-e87e-489f-8474-277b2add6768-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "e3d66c24-e87e-489f-8474-277b2add6768" (UID: "e3d66c24-e87e-489f-8474-277b2add6768"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 18 09:07:51.776549 master-0 kubenswrapper[28766]: I0318 09:07:51.776466 28766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e3d66c24-e87e-489f-8474-277b2add6768-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "e3d66c24-e87e-489f-8474-277b2add6768" (UID: "e3d66c24-e87e-489f-8474-277b2add6768"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 18 09:07:51.778773 master-0 kubenswrapper[28766]: I0318 09:07:51.778736 28766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e3d66c24-e87e-489f-8474-277b2add6768-kube-api-access-v8bqn" (OuterVolumeSpecName: "kube-api-access-v8bqn") pod "e3d66c24-e87e-489f-8474-277b2add6768" (UID: "e3d66c24-e87e-489f-8474-277b2add6768"). InnerVolumeSpecName "kube-api-access-v8bqn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 18 09:07:51.778951 master-0 kubenswrapper[28766]: I0318 09:07:51.778886 28766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3d66c24-e87e-489f-8474-277b2add6768-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "e3d66c24-e87e-489f-8474-277b2add6768" (UID: "e3d66c24-e87e-489f-8474-277b2add6768"). InnerVolumeSpecName "console-oauth-config".
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 09:07:51.784965 master-0 kubenswrapper[28766]: I0318 09:07:51.784816 28766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3d66c24-e87e-489f-8474-277b2add6768-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "e3d66c24-e87e-489f-8474-277b2add6768" (UID: "e3d66c24-e87e-489f-8474-277b2add6768"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 09:07:51.854110 master-0 kubenswrapper[28766]: I0318 09:07:51.854032 28766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-bd9677648-tq84g_e3d66c24-e87e-489f-8474-277b2add6768/console/0.log" Mar 18 09:07:51.854461 master-0 kubenswrapper[28766]: I0318 09:07:51.854151 28766 generic.go:334] "Generic (PLEG): container finished" podID="e3d66c24-e87e-489f-8474-277b2add6768" containerID="4970abeeaa8b1ae3a4db6508e783a24b87b1e4132fa771ab0840ed593098fb55" exitCode=2 Mar 18 09:07:51.854461 master-0 kubenswrapper[28766]: I0318 09:07:51.854200 28766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-bd9677648-tq84g" event={"ID":"e3d66c24-e87e-489f-8474-277b2add6768","Type":"ContainerDied","Data":"4970abeeaa8b1ae3a4db6508e783a24b87b1e4132fa771ab0840ed593098fb55"} Mar 18 09:07:51.854461 master-0 kubenswrapper[28766]: I0318 09:07:51.854259 28766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-bd9677648-tq84g" event={"ID":"e3d66c24-e87e-489f-8474-277b2add6768","Type":"ContainerDied","Data":"7b41a8fe7360de01c7561668069c56aa5f4182c550f22c465ed5af9e52db53c5"} Mar 18 09:07:51.854461 master-0 kubenswrapper[28766]: I0318 09:07:51.854284 28766 scope.go:117] "RemoveContainer" containerID="4970abeeaa8b1ae3a4db6508e783a24b87b1e4132fa771ab0840ed593098fb55" Mar 18 09:07:51.854461 master-0 kubenswrapper[28766]: I0318 09:07:51.854444 28766 util.go:48] "No ready sandbox for pod 
can be found. Need to start a new one" pod="openshift-console/console-bd9677648-tq84g" Mar 18 09:07:51.874525 master-0 kubenswrapper[28766]: I0318 09:07:51.874471 28766 scope.go:117] "RemoveContainer" containerID="4970abeeaa8b1ae3a4db6508e783a24b87b1e4132fa771ab0840ed593098fb55" Mar 18 09:07:51.874984 master-0 kubenswrapper[28766]: E0318 09:07:51.874912 28766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4970abeeaa8b1ae3a4db6508e783a24b87b1e4132fa771ab0840ed593098fb55\": container with ID starting with 4970abeeaa8b1ae3a4db6508e783a24b87b1e4132fa771ab0840ed593098fb55 not found: ID does not exist" containerID="4970abeeaa8b1ae3a4db6508e783a24b87b1e4132fa771ab0840ed593098fb55" Mar 18 09:07:51.875120 master-0 kubenswrapper[28766]: I0318 09:07:51.875000 28766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4970abeeaa8b1ae3a4db6508e783a24b87b1e4132fa771ab0840ed593098fb55"} err="failed to get container status \"4970abeeaa8b1ae3a4db6508e783a24b87b1e4132fa771ab0840ed593098fb55\": rpc error: code = NotFound desc = could not find container \"4970abeeaa8b1ae3a4db6508e783a24b87b1e4132fa771ab0840ed593098fb55\": container with ID starting with 4970abeeaa8b1ae3a4db6508e783a24b87b1e4132fa771ab0840ed593098fb55 not found: ID does not exist" Mar 18 09:07:51.876673 master-0 kubenswrapper[28766]: I0318 09:07:51.876636 28766 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/e3d66c24-e87e-489f-8474-277b2add6768-console-config\") on node \"master-0\" DevicePath \"\"" Mar 18 09:07:51.876673 master-0 kubenswrapper[28766]: I0318 09:07:51.876666 28766 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/e3d66c24-e87e-489f-8474-277b2add6768-oauth-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 18 09:07:51.876673 master-0 
kubenswrapper[28766]: I0318 09:07:51.876677 28766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v8bqn\" (UniqueName: \"kubernetes.io/projected/e3d66c24-e87e-489f-8474-277b2add6768-kube-api-access-v8bqn\") on node \"master-0\" DevicePath \"\"" Mar 18 09:07:51.876905 master-0 kubenswrapper[28766]: I0318 09:07:51.876687 28766 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/e3d66c24-e87e-489f-8474-277b2add6768-console-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 18 09:07:51.876905 master-0 kubenswrapper[28766]: I0318 09:07:51.876698 28766 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e3d66c24-e87e-489f-8474-277b2add6768-trusted-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 18 09:07:51.876905 master-0 kubenswrapper[28766]: I0318 09:07:51.876707 28766 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/e3d66c24-e87e-489f-8474-277b2add6768-console-oauth-config\") on node \"master-0\" DevicePath \"\"" Mar 18 09:07:51.936889 master-0 kubenswrapper[28766]: I0318 09:07:51.935961 28766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-bd9677648-tq84g"] Mar 18 09:07:51.940883 master-0 kubenswrapper[28766]: I0318 09:07:51.940785 28766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-bd9677648-tq84g"] Mar 18 09:07:52.551005 master-0 kubenswrapper[28766]: I0318 09:07:52.550833 28766 patch_prober.go:28] interesting pod/console-bd9677648-tq84g container/console namespace/openshift-console: Readiness probe status=failure output="Get \"https://10.128.0.96:8443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 18 09:07:52.551927 master-0 kubenswrapper[28766]: I0318 09:07:52.551029 28766 
prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/console-bd9677648-tq84g" podUID="e3d66c24-e87e-489f-8474-277b2add6768" containerName="console" probeResult="failure" output="Get \"https://10.128.0.96:8443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 18 09:07:53.257412 master-0 kubenswrapper[28766]: I0318 09:07:53.257189 28766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e3d66c24-e87e-489f-8474-277b2add6768" path="/var/lib/kubelet/pods/e3d66c24-e87e-489f-8474-277b2add6768/volumes" Mar 18 09:07:55.517229 master-0 kubenswrapper[28766]: I0318 09:07:55.517152 28766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Mar 18 09:08:00.055240 master-0 kubenswrapper[28766]: I0318 09:08:00.055108 28766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/thanos-querier-59d6555497-hckn8"] Mar 18 09:08:00.055822 master-0 kubenswrapper[28766]: E0318 09:08:00.055662 28766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3d66c24-e87e-489f-8474-277b2add6768" containerName="console" Mar 18 09:08:00.055822 master-0 kubenswrapper[28766]: I0318 09:08:00.055683 28766 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3d66c24-e87e-489f-8474-277b2add6768" containerName="console" Mar 18 09:08:00.055994 master-0 kubenswrapper[28766]: I0318 09:08:00.055933 28766 memory_manager.go:354] "RemoveStaleState removing state" podUID="e3d66c24-e87e-489f-8474-277b2add6768" containerName="console" Mar 18 09:08:00.058414 master-0 kubenswrapper[28766]: I0318 09:08:00.058340 28766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/thanos-querier-59d6555497-hckn8" Mar 18 09:08:00.061265 master-0 kubenswrapper[28766]: I0318 09:08:00.061231 28766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-rules" Mar 18 09:08:00.061591 master-0 kubenswrapper[28766]: I0318 09:08:00.061574 28766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-metrics" Mar 18 09:08:00.061965 master-0 kubenswrapper[28766]: I0318 09:08:00.061897 28766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-web" Mar 18 09:08:00.062262 master-0 kubenswrapper[28766]: I0318 09:08:00.062243 28766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-grpc-tls-7lno6poivo43o" Mar 18 09:08:00.062601 master-0 kubenswrapper[28766]: I0318 09:08:00.062584 28766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy" Mar 18 09:08:00.063149 master-0 kubenswrapper[28766]: I0318 09:08:00.063098 28766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-tls" Mar 18 09:08:00.085150 master-0 kubenswrapper[28766]: I0318 09:08:00.085098 28766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/thanos-querier-59d6555497-hckn8"] Mar 18 09:08:00.107637 master-0 kubenswrapper[28766]: I0318 09:08:00.107546 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/bcb0b60f-5cc3-4b4b-b209-3d89f2f349ef-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-59d6555497-hckn8\" (UID: \"bcb0b60f-5cc3-4b4b-b209-3d89f2f349ef\") " pod="openshift-monitoring/thanos-querier-59d6555497-hckn8" Mar 18 
09:08:00.107898 master-0 kubenswrapper[28766]: I0318 09:08:00.107680 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/bcb0b60f-5cc3-4b4b-b209-3d89f2f349ef-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-59d6555497-hckn8\" (UID: \"bcb0b60f-5cc3-4b4b-b209-3d89f2f349ef\") " pod="openshift-monitoring/thanos-querier-59d6555497-hckn8" Mar 18 09:08:00.107898 master-0 kubenswrapper[28766]: I0318 09:08:00.107740 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/bcb0b60f-5cc3-4b4b-b209-3d89f2f349ef-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-59d6555497-hckn8\" (UID: \"bcb0b60f-5cc3-4b4b-b209-3d89f2f349ef\") " pod="openshift-monitoring/thanos-querier-59d6555497-hckn8" Mar 18 09:08:00.107898 master-0 kubenswrapper[28766]: I0318 09:08:00.107792 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/bcb0b60f-5cc3-4b4b-b209-3d89f2f349ef-secret-grpc-tls\") pod \"thanos-querier-59d6555497-hckn8\" (UID: \"bcb0b60f-5cc3-4b4b-b209-3d89f2f349ef\") " pod="openshift-monitoring/thanos-querier-59d6555497-hckn8" Mar 18 09:08:00.108040 master-0 kubenswrapper[28766]: I0318 09:08:00.107845 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/bcb0b60f-5cc3-4b4b-b209-3d89f2f349ef-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-59d6555497-hckn8\" (UID: \"bcb0b60f-5cc3-4b4b-b209-3d89f2f349ef\") " pod="openshift-monitoring/thanos-querier-59d6555497-hckn8" Mar 18 09:08:00.108133 master-0 kubenswrapper[28766]: I0318 09:08:00.108093 28766 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/bcb0b60f-5cc3-4b4b-b209-3d89f2f349ef-secret-thanos-querier-tls\") pod \"thanos-querier-59d6555497-hckn8\" (UID: \"bcb0b60f-5cc3-4b4b-b209-3d89f2f349ef\") " pod="openshift-monitoring/thanos-querier-59d6555497-hckn8" Mar 18 09:08:00.108230 master-0 kubenswrapper[28766]: I0318 09:08:00.108199 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/bcb0b60f-5cc3-4b4b-b209-3d89f2f349ef-metrics-client-ca\") pod \"thanos-querier-59d6555497-hckn8\" (UID: \"bcb0b60f-5cc3-4b4b-b209-3d89f2f349ef\") " pod="openshift-monitoring/thanos-querier-59d6555497-hckn8" Mar 18 09:08:00.108585 master-0 kubenswrapper[28766]: I0318 09:08:00.108517 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vms2j\" (UniqueName: \"kubernetes.io/projected/bcb0b60f-5cc3-4b4b-b209-3d89f2f349ef-kube-api-access-vms2j\") pod \"thanos-querier-59d6555497-hckn8\" (UID: \"bcb0b60f-5cc3-4b4b-b209-3d89f2f349ef\") " pod="openshift-monitoring/thanos-querier-59d6555497-hckn8" Mar 18 09:08:00.211014 master-0 kubenswrapper[28766]: I0318 09:08:00.210929 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/bcb0b60f-5cc3-4b4b-b209-3d89f2f349ef-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-59d6555497-hckn8\" (UID: \"bcb0b60f-5cc3-4b4b-b209-3d89f2f349ef\") " pod="openshift-monitoring/thanos-querier-59d6555497-hckn8" Mar 18 09:08:00.211300 master-0 kubenswrapper[28766]: I0318 09:08:00.211029 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: 
\"kubernetes.io/secret/bcb0b60f-5cc3-4b4b-b209-3d89f2f349ef-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-59d6555497-hckn8\" (UID: \"bcb0b60f-5cc3-4b4b-b209-3d89f2f349ef\") " pod="openshift-monitoring/thanos-querier-59d6555497-hckn8" Mar 18 09:08:00.211300 master-0 kubenswrapper[28766]: I0318 09:08:00.211060 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/bcb0b60f-5cc3-4b4b-b209-3d89f2f349ef-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-59d6555497-hckn8\" (UID: \"bcb0b60f-5cc3-4b4b-b209-3d89f2f349ef\") " pod="openshift-monitoring/thanos-querier-59d6555497-hckn8" Mar 18 09:08:00.214132 master-0 kubenswrapper[28766]: I0318 09:08:00.211590 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/bcb0b60f-5cc3-4b4b-b209-3d89f2f349ef-secret-grpc-tls\") pod \"thanos-querier-59d6555497-hckn8\" (UID: \"bcb0b60f-5cc3-4b4b-b209-3d89f2f349ef\") " pod="openshift-monitoring/thanos-querier-59d6555497-hckn8" Mar 18 09:08:00.214132 master-0 kubenswrapper[28766]: I0318 09:08:00.211695 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/bcb0b60f-5cc3-4b4b-b209-3d89f2f349ef-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-59d6555497-hckn8\" (UID: \"bcb0b60f-5cc3-4b4b-b209-3d89f2f349ef\") " pod="openshift-monitoring/thanos-querier-59d6555497-hckn8" Mar 18 09:08:00.214132 master-0 kubenswrapper[28766]: I0318 09:08:00.211946 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/bcb0b60f-5cc3-4b4b-b209-3d89f2f349ef-secret-thanos-querier-tls\") pod \"thanos-querier-59d6555497-hckn8\" (UID: \"bcb0b60f-5cc3-4b4b-b209-3d89f2f349ef\") " 
pod="openshift-monitoring/thanos-querier-59d6555497-hckn8" Mar 18 09:08:00.214132 master-0 kubenswrapper[28766]: I0318 09:08:00.212003 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/bcb0b60f-5cc3-4b4b-b209-3d89f2f349ef-metrics-client-ca\") pod \"thanos-querier-59d6555497-hckn8\" (UID: \"bcb0b60f-5cc3-4b4b-b209-3d89f2f349ef\") " pod="openshift-monitoring/thanos-querier-59d6555497-hckn8" Mar 18 09:08:00.214132 master-0 kubenswrapper[28766]: I0318 09:08:00.212128 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vms2j\" (UniqueName: \"kubernetes.io/projected/bcb0b60f-5cc3-4b4b-b209-3d89f2f349ef-kube-api-access-vms2j\") pod \"thanos-querier-59d6555497-hckn8\" (UID: \"bcb0b60f-5cc3-4b4b-b209-3d89f2f349ef\") " pod="openshift-monitoring/thanos-querier-59d6555497-hckn8" Mar 18 09:08:00.214132 master-0 kubenswrapper[28766]: I0318 09:08:00.213637 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/bcb0b60f-5cc3-4b4b-b209-3d89f2f349ef-metrics-client-ca\") pod \"thanos-querier-59d6555497-hckn8\" (UID: \"bcb0b60f-5cc3-4b4b-b209-3d89f2f349ef\") " pod="openshift-monitoring/thanos-querier-59d6555497-hckn8" Mar 18 09:08:00.217093 master-0 kubenswrapper[28766]: I0318 09:08:00.217020 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/bcb0b60f-5cc3-4b4b-b209-3d89f2f349ef-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-59d6555497-hckn8\" (UID: \"bcb0b60f-5cc3-4b4b-b209-3d89f2f349ef\") " pod="openshift-monitoring/thanos-querier-59d6555497-hckn8" Mar 18 09:08:00.217516 master-0 kubenswrapper[28766]: I0318 09:08:00.217477 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy\" 
(UniqueName: \"kubernetes.io/secret/bcb0b60f-5cc3-4b4b-b209-3d89f2f349ef-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-59d6555497-hckn8\" (UID: \"bcb0b60f-5cc3-4b4b-b209-3d89f2f349ef\") " pod="openshift-monitoring/thanos-querier-59d6555497-hckn8" Mar 18 09:08:00.217816 master-0 kubenswrapper[28766]: I0318 09:08:00.217747 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/bcb0b60f-5cc3-4b4b-b209-3d89f2f349ef-secret-thanos-querier-tls\") pod \"thanos-querier-59d6555497-hckn8\" (UID: \"bcb0b60f-5cc3-4b4b-b209-3d89f2f349ef\") " pod="openshift-monitoring/thanos-querier-59d6555497-hckn8" Mar 18 09:08:00.218518 master-0 kubenswrapper[28766]: I0318 09:08:00.218471 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/bcb0b60f-5cc3-4b4b-b209-3d89f2f349ef-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-59d6555497-hckn8\" (UID: \"bcb0b60f-5cc3-4b4b-b209-3d89f2f349ef\") " pod="openshift-monitoring/thanos-querier-59d6555497-hckn8" Mar 18 09:08:00.218638 master-0 kubenswrapper[28766]: I0318 09:08:00.218587 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/bcb0b60f-5cc3-4b4b-b209-3d89f2f349ef-secret-grpc-tls\") pod \"thanos-querier-59d6555497-hckn8\" (UID: \"bcb0b60f-5cc3-4b4b-b209-3d89f2f349ef\") " pod="openshift-monitoring/thanos-querier-59d6555497-hckn8" Mar 18 09:08:00.220403 master-0 kubenswrapper[28766]: I0318 09:08:00.220357 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/bcb0b60f-5cc3-4b4b-b209-3d89f2f349ef-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-59d6555497-hckn8\" (UID: \"bcb0b60f-5cc3-4b4b-b209-3d89f2f349ef\") " 
pod="openshift-monitoring/thanos-querier-59d6555497-hckn8" Mar 18 09:08:00.232999 master-0 kubenswrapper[28766]: I0318 09:08:00.232909 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vms2j\" (UniqueName: \"kubernetes.io/projected/bcb0b60f-5cc3-4b4b-b209-3d89f2f349ef-kube-api-access-vms2j\") pod \"thanos-querier-59d6555497-hckn8\" (UID: \"bcb0b60f-5cc3-4b4b-b209-3d89f2f349ef\") " pod="openshift-monitoring/thanos-querier-59d6555497-hckn8" Mar 18 09:08:00.384709 master-0 kubenswrapper[28766]: I0318 09:08:00.384535 28766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/thanos-querier-59d6555497-hckn8" Mar 18 09:08:00.970658 master-0 kubenswrapper[28766]: I0318 09:08:00.970549 28766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/thanos-querier-59d6555497-hckn8"] Mar 18 09:08:00.973534 master-0 kubenswrapper[28766]: W0318 09:08:00.973475 28766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbcb0b60f_5cc3_4b4b_b209_3d89f2f349ef.slice/crio-5d94598d6033054eec58bfb09096b94d0281bf2ec11a3f6a9de33f071df0b9a2 WatchSource:0}: Error finding container 5d94598d6033054eec58bfb09096b94d0281bf2ec11a3f6a9de33f071df0b9a2: Status 404 returned error can't find the container with id 5d94598d6033054eec58bfb09096b94d0281bf2ec11a3f6a9de33f071df0b9a2 Mar 18 09:08:01.949968 master-0 kubenswrapper[28766]: I0318 09:08:01.949894 28766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-59d6555497-hckn8" event={"ID":"bcb0b60f-5cc3-4b4b-b209-3d89f2f349ef","Type":"ContainerStarted","Data":"5d94598d6033054eec58bfb09096b94d0281bf2ec11a3f6a9de33f071df0b9a2"} Mar 18 09:08:02.791414 master-0 kubenswrapper[28766]: I0318 09:08:02.788884 28766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/metrics-server-547c985987-bff72"] Mar 18 09:08:02.791414 
master-0 kubenswrapper[28766]: I0318 09:08:02.790530 28766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/metrics-server-547c985987-bff72" Mar 18 09:08:02.794044 master-0 kubenswrapper[28766]: I0318 09:08:02.793981 28766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-monitoring/metrics-server-59f88c66c8-z4c2f"] Mar 18 09:08:02.794321 master-0 kubenswrapper[28766]: I0318 09:08:02.794249 28766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-monitoring/metrics-server-59f88c66c8-z4c2f" podUID="5320a1da-262a-4b1b-93b4-1df9d4c26eec" containerName="metrics-server" containerID="cri-o://8da1b208d66e950e641af5f888552a342bf881708d91891a3c2cad7c27648319" gracePeriod=170 Mar 18 09:08:02.797518 master-0 kubenswrapper[28766]: I0318 09:08:02.797149 28766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-aacc5bvpcf3e5" Mar 18 09:08:02.821456 master-0 kubenswrapper[28766]: I0318 09:08:02.819224 28766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/metrics-server-547c985987-bff72"] Mar 18 09:08:02.879574 master-0 kubenswrapper[28766]: I0318 09:08:02.878757 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/a67829d2-585d-4140-aaa7-c7551bb714d3-metrics-server-audit-profiles\") pod \"metrics-server-547c985987-bff72\" (UID: \"a67829d2-585d-4140-aaa7-c7551bb714d3\") " pod="openshift-monitoring/metrics-server-547c985987-bff72" Mar 18 09:08:02.879574 master-0 kubenswrapper[28766]: I0318 09:08:02.878869 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/a67829d2-585d-4140-aaa7-c7551bb714d3-secret-metrics-client-certs\") pod \"metrics-server-547c985987-bff72\" (UID: 
\"a67829d2-585d-4140-aaa7-c7551bb714d3\") " pod="openshift-monitoring/metrics-server-547c985987-bff72" Mar 18 09:08:02.879574 master-0 kubenswrapper[28766]: I0318 09:08:02.878947 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a67829d2-585d-4140-aaa7-c7551bb714d3-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-547c985987-bff72\" (UID: \"a67829d2-585d-4140-aaa7-c7551bb714d3\") " pod="openshift-monitoring/metrics-server-547c985987-bff72" Mar 18 09:08:02.879574 master-0 kubenswrapper[28766]: I0318 09:08:02.879178 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r8nh4\" (UniqueName: \"kubernetes.io/projected/a67829d2-585d-4140-aaa7-c7551bb714d3-kube-api-access-r8nh4\") pod \"metrics-server-547c985987-bff72\" (UID: \"a67829d2-585d-4140-aaa7-c7551bb714d3\") " pod="openshift-monitoring/metrics-server-547c985987-bff72" Mar 18 09:08:02.879574 master-0 kubenswrapper[28766]: I0318 09:08:02.879312 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/a67829d2-585d-4140-aaa7-c7551bb714d3-audit-log\") pod \"metrics-server-547c985987-bff72\" (UID: \"a67829d2-585d-4140-aaa7-c7551bb714d3\") " pod="openshift-monitoring/metrics-server-547c985987-bff72" Mar 18 09:08:02.879574 master-0 kubenswrapper[28766]: I0318 09:08:02.879387 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a67829d2-585d-4140-aaa7-c7551bb714d3-client-ca-bundle\") pod \"metrics-server-547c985987-bff72\" (UID: \"a67829d2-585d-4140-aaa7-c7551bb714d3\") " pod="openshift-monitoring/metrics-server-547c985987-bff72" Mar 18 09:08:02.879574 master-0 kubenswrapper[28766]: I0318 09:08:02.879546 28766 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/a67829d2-585d-4140-aaa7-c7551bb714d3-secret-metrics-server-tls\") pod \"metrics-server-547c985987-bff72\" (UID: \"a67829d2-585d-4140-aaa7-c7551bb714d3\") " pod="openshift-monitoring/metrics-server-547c985987-bff72" Mar 18 09:08:02.983444 master-0 kubenswrapper[28766]: I0318 09:08:02.983308 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/a67829d2-585d-4140-aaa7-c7551bb714d3-secret-metrics-server-tls\") pod \"metrics-server-547c985987-bff72\" (UID: \"a67829d2-585d-4140-aaa7-c7551bb714d3\") " pod="openshift-monitoring/metrics-server-547c985987-bff72" Mar 18 09:08:02.984322 master-0 kubenswrapper[28766]: I0318 09:08:02.984295 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/a67829d2-585d-4140-aaa7-c7551bb714d3-metrics-server-audit-profiles\") pod \"metrics-server-547c985987-bff72\" (UID: \"a67829d2-585d-4140-aaa7-c7551bb714d3\") " pod="openshift-monitoring/metrics-server-547c985987-bff72" Mar 18 09:08:02.984469 master-0 kubenswrapper[28766]: I0318 09:08:02.984445 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/a67829d2-585d-4140-aaa7-c7551bb714d3-secret-metrics-client-certs\") pod \"metrics-server-547c985987-bff72\" (UID: \"a67829d2-585d-4140-aaa7-c7551bb714d3\") " pod="openshift-monitoring/metrics-server-547c985987-bff72" Mar 18 09:08:02.984629 master-0 kubenswrapper[28766]: I0318 09:08:02.984613 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/a67829d2-585d-4140-aaa7-c7551bb714d3-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-547c985987-bff72\" (UID: \"a67829d2-585d-4140-aaa7-c7551bb714d3\") " pod="openshift-monitoring/metrics-server-547c985987-bff72" Mar 18 09:08:02.984746 master-0 kubenswrapper[28766]: I0318 09:08:02.984732 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r8nh4\" (UniqueName: \"kubernetes.io/projected/a67829d2-585d-4140-aaa7-c7551bb714d3-kube-api-access-r8nh4\") pod \"metrics-server-547c985987-bff72\" (UID: \"a67829d2-585d-4140-aaa7-c7551bb714d3\") " pod="openshift-monitoring/metrics-server-547c985987-bff72" Mar 18 09:08:02.984871 master-0 kubenswrapper[28766]: I0318 09:08:02.984840 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/a67829d2-585d-4140-aaa7-c7551bb714d3-audit-log\") pod \"metrics-server-547c985987-bff72\" (UID: \"a67829d2-585d-4140-aaa7-c7551bb714d3\") " pod="openshift-monitoring/metrics-server-547c985987-bff72" Mar 18 09:08:02.984972 master-0 kubenswrapper[28766]: I0318 09:08:02.984959 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a67829d2-585d-4140-aaa7-c7551bb714d3-client-ca-bundle\") pod \"metrics-server-547c985987-bff72\" (UID: \"a67829d2-585d-4140-aaa7-c7551bb714d3\") " pod="openshift-monitoring/metrics-server-547c985987-bff72" Mar 18 09:08:02.986718 master-0 kubenswrapper[28766]: I0318 09:08:02.985457 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/a67829d2-585d-4140-aaa7-c7551bb714d3-audit-log\") pod \"metrics-server-547c985987-bff72\" (UID: \"a67829d2-585d-4140-aaa7-c7551bb714d3\") " pod="openshift-monitoring/metrics-server-547c985987-bff72" Mar 18 09:08:02.986718 master-0 kubenswrapper[28766]: I0318 09:08:02.985619 28766 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a67829d2-585d-4140-aaa7-c7551bb714d3-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-547c985987-bff72\" (UID: \"a67829d2-585d-4140-aaa7-c7551bb714d3\") " pod="openshift-monitoring/metrics-server-547c985987-bff72" Mar 18 09:08:02.986718 master-0 kubenswrapper[28766]: I0318 09:08:02.986080 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/a67829d2-585d-4140-aaa7-c7551bb714d3-metrics-server-audit-profiles\") pod \"metrics-server-547c985987-bff72\" (UID: \"a67829d2-585d-4140-aaa7-c7551bb714d3\") " pod="openshift-monitoring/metrics-server-547c985987-bff72" Mar 18 09:08:02.988604 master-0 kubenswrapper[28766]: I0318 09:08:02.988549 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/a67829d2-585d-4140-aaa7-c7551bb714d3-secret-metrics-server-tls\") pod \"metrics-server-547c985987-bff72\" (UID: \"a67829d2-585d-4140-aaa7-c7551bb714d3\") " pod="openshift-monitoring/metrics-server-547c985987-bff72" Mar 18 09:08:02.991343 master-0 kubenswrapper[28766]: I0318 09:08:02.991318 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a67829d2-585d-4140-aaa7-c7551bb714d3-client-ca-bundle\") pod \"metrics-server-547c985987-bff72\" (UID: \"a67829d2-585d-4140-aaa7-c7551bb714d3\") " pod="openshift-monitoring/metrics-server-547c985987-bff72" Mar 18 09:08:02.991631 master-0 kubenswrapper[28766]: I0318 09:08:02.991607 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/a67829d2-585d-4140-aaa7-c7551bb714d3-secret-metrics-client-certs\") pod \"metrics-server-547c985987-bff72\" (UID: 
\"a67829d2-585d-4140-aaa7-c7551bb714d3\") " pod="openshift-monitoring/metrics-server-547c985987-bff72" Mar 18 09:08:03.001516 master-0 kubenswrapper[28766]: I0318 09:08:03.001456 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r8nh4\" (UniqueName: \"kubernetes.io/projected/a67829d2-585d-4140-aaa7-c7551bb714d3-kube-api-access-r8nh4\") pod \"metrics-server-547c985987-bff72\" (UID: \"a67829d2-585d-4140-aaa7-c7551bb714d3\") " pod="openshift-monitoring/metrics-server-547c985987-bff72" Mar 18 09:08:03.118387 master-0 kubenswrapper[28766]: I0318 09:08:03.118235 28766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/metrics-server-547c985987-bff72" Mar 18 09:08:03.542189 master-0 kubenswrapper[28766]: I0318 09:08:03.542145 28766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/metrics-server-547c985987-bff72"] Mar 18 09:08:04.078568 master-0 kubenswrapper[28766]: W0318 09:08:04.078491 28766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda67829d2_585d_4140_aaa7_c7551bb714d3.slice/crio-c87b505805d22117ee3f7b549c13d032a415575feb93aa86e584961d8234cb71 WatchSource:0}: Error finding container c87b505805d22117ee3f7b549c13d032a415575feb93aa86e584961d8234cb71: Status 404 returned error can't find the container with id c87b505805d22117ee3f7b549c13d032a415575feb93aa86e584961d8234cb71 Mar 18 09:08:04.977067 master-0 kubenswrapper[28766]: I0318 09:08:04.976930 28766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-59d6555497-hckn8" event={"ID":"bcb0b60f-5cc3-4b4b-b209-3d89f2f349ef","Type":"ContainerStarted","Data":"c95b590c455f8ffd9345fcf6c293dc7d5d67d25accfcb468230553e05a80a327"} Mar 18 09:08:04.977067 master-0 kubenswrapper[28766]: I0318 09:08:04.977003 28766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-monitoring/thanos-querier-59d6555497-hckn8" event={"ID":"bcb0b60f-5cc3-4b4b-b209-3d89f2f349ef","Type":"ContainerStarted","Data":"b5daa5550d24f5813b6b6bf307b1658b1d014070e0b144e9292104b0e8350aa2"} Mar 18 09:08:04.977067 master-0 kubenswrapper[28766]: I0318 09:08:04.977024 28766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-59d6555497-hckn8" event={"ID":"bcb0b60f-5cc3-4b4b-b209-3d89f2f349ef","Type":"ContainerStarted","Data":"6bfdf801db9b0ea34497ad86fcad672b87fcaa0e3c880b273ea652c01e7b289d"} Mar 18 09:08:04.978813 master-0 kubenswrapper[28766]: I0318 09:08:04.978773 28766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-547c985987-bff72" event={"ID":"a67829d2-585d-4140-aaa7-c7551bb714d3","Type":"ContainerStarted","Data":"735bf27f0d37ebb2e234f25b9c34ffbd9869f326c6fba10d8a2ca05b8900c23d"} Mar 18 09:08:04.978813 master-0 kubenswrapper[28766]: I0318 09:08:04.978808 28766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-547c985987-bff72" event={"ID":"a67829d2-585d-4140-aaa7-c7551bb714d3","Type":"ContainerStarted","Data":"c87b505805d22117ee3f7b549c13d032a415575feb93aa86e584961d8234cb71"} Mar 18 09:08:05.001566 master-0 kubenswrapper[28766]: I0318 09:08:05.001481 28766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/metrics-server-547c985987-bff72" podStartSLOduration=3.00146294 podStartE2EDuration="3.00146294s" podCreationTimestamp="2026-03-18 09:08:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 09:08:04.997418795 +0000 UTC m=+238.011677471" watchObservedRunningTime="2026-03-18 09:08:05.00146294 +0000 UTC m=+238.015721606" Mar 18 09:08:07.003659 master-0 kubenswrapper[28766]: I0318 09:08:07.003548 28766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-monitoring/thanos-querier-59d6555497-hckn8" event={"ID":"bcb0b60f-5cc3-4b4b-b209-3d89f2f349ef","Type":"ContainerStarted","Data":"78897f5561bf43fa65d5f172a06fb41d5716148188b9e09609ec58fc5505ddc0"} Mar 18 09:08:07.003659 master-0 kubenswrapper[28766]: I0318 09:08:07.003651 28766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-59d6555497-hckn8" event={"ID":"bcb0b60f-5cc3-4b4b-b209-3d89f2f349ef","Type":"ContainerStarted","Data":"8f7d71178bf11a7a6fa60051e2cc80f3c81e0e3c83f828cc2f0062c0b3b34c74"} Mar 18 09:08:07.003659 master-0 kubenswrapper[28766]: I0318 09:08:07.003686 28766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-59d6555497-hckn8" event={"ID":"bcb0b60f-5cc3-4b4b-b209-3d89f2f349ef","Type":"ContainerStarted","Data":"1de8d717272f4ade55974b8d71c0de691b9ed20176ed2db7896ab0328ca82ac7"} Mar 18 09:08:07.004739 master-0 kubenswrapper[28766]: I0318 09:08:07.004018 28766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/thanos-querier-59d6555497-hckn8" Mar 18 09:08:10.393994 master-0 kubenswrapper[28766]: I0318 09:08:10.393763 28766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/thanos-querier-59d6555497-hckn8" Mar 18 09:08:10.435580 master-0 kubenswrapper[28766]: I0318 09:08:10.435464 28766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/thanos-querier-59d6555497-hckn8" podStartSLOduration=5.587604736 podStartE2EDuration="10.435444853s" podCreationTimestamp="2026-03-18 09:08:00 +0000 UTC" firstStartedPulling="2026-03-18 09:08:00.976777199 +0000 UTC m=+233.991035865" lastFinishedPulling="2026-03-18 09:08:05.824617316 +0000 UTC m=+238.838875982" observedRunningTime="2026-03-18 09:08:07.039502324 +0000 UTC m=+240.053761050" watchObservedRunningTime="2026-03-18 09:08:10.435444853 +0000 UTC m=+243.449703539" Mar 18 09:08:21.232844 master-0 
kubenswrapper[28766]: I0318 09:08:21.232745 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/a00bebfb-2c54-4888-9200-a5b96420fd37-networking-console-plugin-cert\") pod \"networking-console-plugin-7c6b76c555-476ck\" (UID: \"a00bebfb-2c54-4888-9200-a5b96420fd37\") " pod="openshift-network-console/networking-console-plugin-7c6b76c555-476ck" Mar 18 09:08:21.234202 master-0 kubenswrapper[28766]: E0318 09:08:21.233093 28766 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: secret "networking-console-plugin-cert" not found Mar 18 09:08:21.234202 master-0 kubenswrapper[28766]: E0318 09:08:21.233259 28766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a00bebfb-2c54-4888-9200-a5b96420fd37-networking-console-plugin-cert podName:a00bebfb-2c54-4888-9200-a5b96420fd37 nodeName:}" failed. No retries permitted until 2026-03-18 09:09:25.233220392 +0000 UTC m=+318.247479098 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/a00bebfb-2c54-4888-9200-a5b96420fd37-networking-console-plugin-cert") pod "networking-console-plugin-7c6b76c555-476ck" (UID: "a00bebfb-2c54-4888-9200-a5b96420fd37") : secret "networking-console-plugin-cert" not found Mar 18 09:08:23.119762 master-0 kubenswrapper[28766]: I0318 09:08:23.119645 28766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/metrics-server-547c985987-bff72" Mar 18 09:08:23.119762 master-0 kubenswrapper[28766]: I0318 09:08:23.119735 28766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-monitoring/metrics-server-547c985987-bff72" Mar 18 09:08:24.697509 master-0 kubenswrapper[28766]: I0318 09:08:24.697427 28766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-k8s-0"] Mar 18 09:08:24.700396 master-0 kubenswrapper[28766]: I0318 09:08:24.700349 28766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0" Mar 18 09:08:24.704327 master-0 kubenswrapper[28766]: I0318 09:08:24.704290 28766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-tls-assets-0" Mar 18 09:08:24.704502 master-0 kubenswrapper[28766]: I0318 09:08:24.704473 28766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s" Mar 18 09:08:24.709420 master-0 kubenswrapper[28766]: I0318 09:08:24.706586 28766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-kube-rbac-proxy-web" Mar 18 09:08:24.709420 master-0 kubenswrapper[28766]: I0318 09:08:24.707049 28766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-grpc-tls-bqd97mbshi5i6" Mar 18 09:08:24.709420 master-0 kubenswrapper[28766]: I0318 09:08:24.707043 28766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-thanos-sidecar-tls" Mar 18 09:08:24.709420 master-0 kubenswrapper[28766]: I0318 09:08:24.707102 28766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-thanos-prometheus-http-client-file" Mar 18 09:08:24.709420 master-0 kubenswrapper[28766]: I0318 09:08:24.707608 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"serving-certs-ca-bundle" Mar 18 09:08:24.709420 master-0 kubenswrapper[28766]: I0318 09:08:24.708705 28766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-web-config" Mar 18 09:08:24.709420 master-0 kubenswrapper[28766]: I0318 09:08:24.708735 28766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-rbac-proxy" Mar 18 09:08:24.709420 master-0 kubenswrapper[28766]: I0318 09:08:24.708914 28766 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-monitoring"/"prometheus-k8s-tls" Mar 18 09:08:24.719251 master-0 kubenswrapper[28766]: I0318 09:08:24.718961 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"prometheus-k8s-rulefiles-0" Mar 18 09:08:24.720973 master-0 kubenswrapper[28766]: I0318 09:08:24.720596 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"prometheus-trusted-ca-bundle" Mar 18 09:08:24.772425 master-0 kubenswrapper[28766]: I0318 09:08:24.769601 28766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-k8s-0"] Mar 18 09:08:24.810011 master-0 kubenswrapper[28766]: I0318 09:08:24.808751 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/b778f3f5-3686-49f7-aa43-93a9d9d2d963-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"b778f3f5-3686-49f7-aa43-93a9d9d2d963\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 09:08:24.810011 master-0 kubenswrapper[28766]: I0318 09:08:24.808836 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/b778f3f5-3686-49f7-aa43-93a9d9d2d963-config\") pod \"prometheus-k8s-0\" (UID: \"b778f3f5-3686-49f7-aa43-93a9d9d2d963\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 09:08:24.810011 master-0 kubenswrapper[28766]: I0318 09:08:24.808889 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/b778f3f5-3686-49f7-aa43-93a9d9d2d963-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"b778f3f5-3686-49f7-aa43-93a9d9d2d963\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 09:08:24.810011 master-0 kubenswrapper[28766]: I0318 09:08:24.808941 28766 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/b778f3f5-3686-49f7-aa43-93a9d9d2d963-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"b778f3f5-3686-49f7-aa43-93a9d9d2d963\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 09:08:24.810011 master-0 kubenswrapper[28766]: I0318 09:08:24.808972 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/b778f3f5-3686-49f7-aa43-93a9d9d2d963-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"b778f3f5-3686-49f7-aa43-93a9d9d2d963\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 09:08:24.810011 master-0 kubenswrapper[28766]: I0318 09:08:24.808999 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/b778f3f5-3686-49f7-aa43-93a9d9d2d963-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"b778f3f5-3686-49f7-aa43-93a9d9d2d963\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 09:08:24.810011 master-0 kubenswrapper[28766]: I0318 09:08:24.809022 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b778f3f5-3686-49f7-aa43-93a9d9d2d963-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"b778f3f5-3686-49f7-aa43-93a9d9d2d963\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 09:08:24.810011 master-0 kubenswrapper[28766]: I0318 09:08:24.809059 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/b778f3f5-3686-49f7-aa43-93a9d9d2d963-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: 
\"b778f3f5-3686-49f7-aa43-93a9d9d2d963\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 09:08:24.810011 master-0 kubenswrapper[28766]: I0318 09:08:24.809086 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/b778f3f5-3686-49f7-aa43-93a9d9d2d963-config-out\") pod \"prometheus-k8s-0\" (UID: \"b778f3f5-3686-49f7-aa43-93a9d9d2d963\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 09:08:24.810011 master-0 kubenswrapper[28766]: I0318 09:08:24.809116 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/b778f3f5-3686-49f7-aa43-93a9d9d2d963-web-config\") pod \"prometheus-k8s-0\" (UID: \"b778f3f5-3686-49f7-aa43-93a9d9d2d963\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 09:08:24.810011 master-0 kubenswrapper[28766]: I0318 09:08:24.809145 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b778f3f5-3686-49f7-aa43-93a9d9d2d963-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"b778f3f5-3686-49f7-aa43-93a9d9d2d963\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 09:08:24.810011 master-0 kubenswrapper[28766]: I0318 09:08:24.809185 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xmjtw\" (UniqueName: \"kubernetes.io/projected/b778f3f5-3686-49f7-aa43-93a9d9d2d963-kube-api-access-xmjtw\") pod \"prometheus-k8s-0\" (UID: \"b778f3f5-3686-49f7-aa43-93a9d9d2d963\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 09:08:24.810011 master-0 kubenswrapper[28766]: I0318 09:08:24.809212 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: 
\"kubernetes.io/configmap/b778f3f5-3686-49f7-aa43-93a9d9d2d963-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"b778f3f5-3686-49f7-aa43-93a9d9d2d963\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 09:08:24.810011 master-0 kubenswrapper[28766]: I0318 09:08:24.809236 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/b778f3f5-3686-49f7-aa43-93a9d9d2d963-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"b778f3f5-3686-49f7-aa43-93a9d9d2d963\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 09:08:24.810011 master-0 kubenswrapper[28766]: I0318 09:08:24.809257 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/b778f3f5-3686-49f7-aa43-93a9d9d2d963-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"b778f3f5-3686-49f7-aa43-93a9d9d2d963\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 09:08:24.810011 master-0 kubenswrapper[28766]: I0318 09:08:24.809285 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/b778f3f5-3686-49f7-aa43-93a9d9d2d963-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"b778f3f5-3686-49f7-aa43-93a9d9d2d963\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 09:08:24.810011 master-0 kubenswrapper[28766]: I0318 09:08:24.809315 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b778f3f5-3686-49f7-aa43-93a9d9d2d963-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"b778f3f5-3686-49f7-aa43-93a9d9d2d963\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 09:08:24.810011 master-0 
kubenswrapper[28766]: I0318 09:08:24.809338 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/b778f3f5-3686-49f7-aa43-93a9d9d2d963-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"b778f3f5-3686-49f7-aa43-93a9d9d2d963\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 09:08:24.910807 master-0 kubenswrapper[28766]: I0318 09:08:24.910757 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/b778f3f5-3686-49f7-aa43-93a9d9d2d963-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"b778f3f5-3686-49f7-aa43-93a9d9d2d963\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 09:08:24.911111 master-0 kubenswrapper[28766]: I0318 09:08:24.911096 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/b778f3f5-3686-49f7-aa43-93a9d9d2d963-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"b778f3f5-3686-49f7-aa43-93a9d9d2d963\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 09:08:24.911236 master-0 kubenswrapper[28766]: I0318 09:08:24.911224 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/b778f3f5-3686-49f7-aa43-93a9d9d2d963-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"b778f3f5-3686-49f7-aa43-93a9d9d2d963\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 09:08:24.912171 master-0 kubenswrapper[28766]: I0318 09:08:24.912112 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/b778f3f5-3686-49f7-aa43-93a9d9d2d963-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"b778f3f5-3686-49f7-aa43-93a9d9d2d963\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 
09:08:24.912333 master-0 kubenswrapper[28766]: I0318 09:08:24.912302 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/b778f3f5-3686-49f7-aa43-93a9d9d2d963-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"b778f3f5-3686-49f7-aa43-93a9d9d2d963\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 09:08:24.912562 master-0 kubenswrapper[28766]: I0318 09:08:24.912546 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b778f3f5-3686-49f7-aa43-93a9d9d2d963-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"b778f3f5-3686-49f7-aa43-93a9d9d2d963\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 09:08:24.914006 master-0 kubenswrapper[28766]: I0318 09:08:24.913989 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/b778f3f5-3686-49f7-aa43-93a9d9d2d963-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"b778f3f5-3686-49f7-aa43-93a9d9d2d963\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 09:08:24.914139 master-0 kubenswrapper[28766]: I0318 09:08:24.914125 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/b778f3f5-3686-49f7-aa43-93a9d9d2d963-config-out\") pod \"prometheus-k8s-0\" (UID: \"b778f3f5-3686-49f7-aa43-93a9d9d2d963\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 09:08:24.914247 master-0 kubenswrapper[28766]: I0318 09:08:24.914234 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/b778f3f5-3686-49f7-aa43-93a9d9d2d963-web-config\") pod \"prometheus-k8s-0\" (UID: \"b778f3f5-3686-49f7-aa43-93a9d9d2d963\") " 
pod="openshift-monitoring/prometheus-k8s-0" Mar 18 09:08:24.914380 master-0 kubenswrapper[28766]: I0318 09:08:24.914367 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b778f3f5-3686-49f7-aa43-93a9d9d2d963-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"b778f3f5-3686-49f7-aa43-93a9d9d2d963\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 09:08:24.916198 master-0 kubenswrapper[28766]: I0318 09:08:24.915819 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xmjtw\" (UniqueName: \"kubernetes.io/projected/b778f3f5-3686-49f7-aa43-93a9d9d2d963-kube-api-access-xmjtw\") pod \"prometheus-k8s-0\" (UID: \"b778f3f5-3686-49f7-aa43-93a9d9d2d963\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 09:08:24.916365 master-0 kubenswrapper[28766]: I0318 09:08:24.916348 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/b778f3f5-3686-49f7-aa43-93a9d9d2d963-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"b778f3f5-3686-49f7-aa43-93a9d9d2d963\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 09:08:24.925618 master-0 kubenswrapper[28766]: I0318 09:08:24.914484 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/b778f3f5-3686-49f7-aa43-93a9d9d2d963-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"b778f3f5-3686-49f7-aa43-93a9d9d2d963\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 09:08:24.926543 master-0 kubenswrapper[28766]: I0318 09:08:24.914778 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/b778f3f5-3686-49f7-aa43-93a9d9d2d963-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: 
\"b778f3f5-3686-49f7-aa43-93a9d9d2d963\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 09:08:24.926820 master-0 kubenswrapper[28766]: I0318 09:08:24.915027 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b778f3f5-3686-49f7-aa43-93a9d9d2d963-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"b778f3f5-3686-49f7-aa43-93a9d9d2d963\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 09:08:24.926959 master-0 kubenswrapper[28766]: I0318 09:08:24.915743 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/b778f3f5-3686-49f7-aa43-93a9d9d2d963-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"b778f3f5-3686-49f7-aa43-93a9d9d2d963\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 09:08:24.927076 master-0 kubenswrapper[28766]: E0318 09:08:24.912500 28766 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s-thanos-sidecar-tls: secret "prometheus-k8s-thanos-sidecar-tls" not found Mar 18 09:08:24.927247 master-0 kubenswrapper[28766]: I0318 09:08:24.921562 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/b778f3f5-3686-49f7-aa43-93a9d9d2d963-config-out\") pod \"prometheus-k8s-0\" (UID: \"b778f3f5-3686-49f7-aa43-93a9d9d2d963\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 09:08:24.927325 master-0 kubenswrapper[28766]: I0318 09:08:24.921884 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/b778f3f5-3686-49f7-aa43-93a9d9d2d963-web-config\") pod \"prometheus-k8s-0\" (UID: \"b778f3f5-3686-49f7-aa43-93a9d9d2d963\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 09:08:24.927531 master-0 kubenswrapper[28766]: I0318 09:08:24.925460 28766 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/b778f3f5-3686-49f7-aa43-93a9d9d2d963-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"b778f3f5-3686-49f7-aa43-93a9d9d2d963\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 09:08:24.927608 master-0 kubenswrapper[28766]: I0318 09:08:24.913923 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b778f3f5-3686-49f7-aa43-93a9d9d2d963-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"b778f3f5-3686-49f7-aa43-93a9d9d2d963\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 09:08:24.927608 master-0 kubenswrapper[28766]: I0318 09:08:24.925907 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/b778f3f5-3686-49f7-aa43-93a9d9d2d963-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"b778f3f5-3686-49f7-aa43-93a9d9d2d963\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 09:08:24.927608 master-0 kubenswrapper[28766]: E0318 09:08:24.927409 28766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b778f3f5-3686-49f7-aa43-93a9d9d2d963-secret-prometheus-k8s-thanos-sidecar-tls podName:b778f3f5-3686-49f7-aa43-93a9d9d2d963 nodeName:}" failed. No retries permitted until 2026-03-18 09:08:25.427196297 +0000 UTC m=+258.441454963 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "secret-prometheus-k8s-thanos-sidecar-tls" (UniqueName: "kubernetes.io/secret/b778f3f5-3686-49f7-aa43-93a9d9d2d963-secret-prometheus-k8s-thanos-sidecar-tls") pod "prometheus-k8s-0" (UID: "b778f3f5-3686-49f7-aa43-93a9d9d2d963") : secret "prometheus-k8s-thanos-sidecar-tls" not found Mar 18 09:08:24.927765 master-0 kubenswrapper[28766]: I0318 09:08:24.927619 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/b778f3f5-3686-49f7-aa43-93a9d9d2d963-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"b778f3f5-3686-49f7-aa43-93a9d9d2d963\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 09:08:24.927765 master-0 kubenswrapper[28766]: I0318 09:08:24.927679 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/b778f3f5-3686-49f7-aa43-93a9d9d2d963-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"b778f3f5-3686-49f7-aa43-93a9d9d2d963\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 09:08:24.928086 master-0 kubenswrapper[28766]: I0318 09:08:24.928052 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b778f3f5-3686-49f7-aa43-93a9d9d2d963-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"b778f3f5-3686-49f7-aa43-93a9d9d2d963\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 09:08:24.928086 master-0 kubenswrapper[28766]: I0318 09:08:24.928082 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/b778f3f5-3686-49f7-aa43-93a9d9d2d963-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"b778f3f5-3686-49f7-aa43-93a9d9d2d963\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 09:08:24.928195 master-0 
kubenswrapper[28766]: I0318 09:08:24.928110 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/b778f3f5-3686-49f7-aa43-93a9d9d2d963-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"b778f3f5-3686-49f7-aa43-93a9d9d2d963\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 09:08:24.928195 master-0 kubenswrapper[28766]: I0318 09:08:24.928166 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/b778f3f5-3686-49f7-aa43-93a9d9d2d963-config\") pod \"prometheus-k8s-0\" (UID: \"b778f3f5-3686-49f7-aa43-93a9d9d2d963\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 09:08:24.928312 master-0 kubenswrapper[28766]: E0318 09:08:24.928296 28766 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s-tls: secret "prometheus-k8s-tls" not found Mar 18 09:08:24.928436 master-0 kubenswrapper[28766]: E0318 09:08:24.928424 28766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b778f3f5-3686-49f7-aa43-93a9d9d2d963-secret-prometheus-k8s-tls podName:b778f3f5-3686-49f7-aa43-93a9d9d2d963 nodeName:}" failed. No retries permitted until 2026-03-18 09:08:25.428408728 +0000 UTC m=+258.442667394 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "secret-prometheus-k8s-tls" (UniqueName: "kubernetes.io/secret/b778f3f5-3686-49f7-aa43-93a9d9d2d963-secret-prometheus-k8s-tls") pod "prometheus-k8s-0" (UID: "b778f3f5-3686-49f7-aa43-93a9d9d2d963") : secret "prometheus-k8s-tls" not found Mar 18 09:08:24.931214 master-0 kubenswrapper[28766]: I0318 09:08:24.929264 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b778f3f5-3686-49f7-aa43-93a9d9d2d963-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"b778f3f5-3686-49f7-aa43-93a9d9d2d963\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 09:08:24.931214 master-0 kubenswrapper[28766]: I0318 09:08:24.930761 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/b778f3f5-3686-49f7-aa43-93a9d9d2d963-config\") pod \"prometheus-k8s-0\" (UID: \"b778f3f5-3686-49f7-aa43-93a9d9d2d963\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 09:08:24.932153 master-0 kubenswrapper[28766]: I0318 09:08:24.932127 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/b778f3f5-3686-49f7-aa43-93a9d9d2d963-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"b778f3f5-3686-49f7-aa43-93a9d9d2d963\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 09:08:24.932235 master-0 kubenswrapper[28766]: I0318 09:08:24.932173 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/b778f3f5-3686-49f7-aa43-93a9d9d2d963-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"b778f3f5-3686-49f7-aa43-93a9d9d2d963\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 09:08:24.932744 master-0 kubenswrapper[28766]: I0318 09:08:24.932724 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/b778f3f5-3686-49f7-aa43-93a9d9d2d963-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"b778f3f5-3686-49f7-aa43-93a9d9d2d963\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 09:08:24.935759 master-0 kubenswrapper[28766]: I0318 09:08:24.935730 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/b778f3f5-3686-49f7-aa43-93a9d9d2d963-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"b778f3f5-3686-49f7-aa43-93a9d9d2d963\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 09:08:24.935973 master-0 kubenswrapper[28766]: I0318 09:08:24.935955 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xmjtw\" (UniqueName: \"kubernetes.io/projected/b778f3f5-3686-49f7-aa43-93a9d9d2d963-kube-api-access-xmjtw\") pod \"prometheus-k8s-0\" (UID: \"b778f3f5-3686-49f7-aa43-93a9d9d2d963\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 09:08:25.435614 master-0 kubenswrapper[28766]: I0318 09:08:25.435531 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/b778f3f5-3686-49f7-aa43-93a9d9d2d963-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"b778f3f5-3686-49f7-aa43-93a9d9d2d963\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 09:08:25.435921 master-0 kubenswrapper[28766]: I0318 09:08:25.435715 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/b778f3f5-3686-49f7-aa43-93a9d9d2d963-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"b778f3f5-3686-49f7-aa43-93a9d9d2d963\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 09:08:25.435988 master-0 kubenswrapper[28766]: 
E0318 09:08:25.435946 28766 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s-thanos-sidecar-tls: secret "prometheus-k8s-thanos-sidecar-tls" not found Mar 18 09:08:25.436033 master-0 kubenswrapper[28766]: E0318 09:08:25.436024 28766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b778f3f5-3686-49f7-aa43-93a9d9d2d963-secret-prometheus-k8s-thanos-sidecar-tls podName:b778f3f5-3686-49f7-aa43-93a9d9d2d963 nodeName:}" failed. No retries permitted until 2026-03-18 09:08:26.436000127 +0000 UTC m=+259.450258833 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "secret-prometheus-k8s-thanos-sidecar-tls" (UniqueName: "kubernetes.io/secret/b778f3f5-3686-49f7-aa43-93a9d9d2d963-secret-prometheus-k8s-thanos-sidecar-tls") pod "prometheus-k8s-0" (UID: "b778f3f5-3686-49f7-aa43-93a9d9d2d963") : secret "prometheus-k8s-thanos-sidecar-tls" not found Mar 18 09:08:25.436214 master-0 kubenswrapper[28766]: E0318 09:08:25.436194 28766 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s-tls: secret "prometheus-k8s-tls" not found Mar 18 09:08:25.436377 master-0 kubenswrapper[28766]: E0318 09:08:25.436361 28766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b778f3f5-3686-49f7-aa43-93a9d9d2d963-secret-prometheus-k8s-tls podName:b778f3f5-3686-49f7-aa43-93a9d9d2d963 nodeName:}" failed. No retries permitted until 2026-03-18 09:08:26.436308505 +0000 UTC m=+259.450567171 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "secret-prometheus-k8s-tls" (UniqueName: "kubernetes.io/secret/b778f3f5-3686-49f7-aa43-93a9d9d2d963-secret-prometheus-k8s-tls") pod "prometheus-k8s-0" (UID: "b778f3f5-3686-49f7-aa43-93a9d9d2d963") : secret "prometheus-k8s-tls" not found Mar 18 09:08:26.452644 master-0 kubenswrapper[28766]: I0318 09:08:26.452563 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/b778f3f5-3686-49f7-aa43-93a9d9d2d963-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"b778f3f5-3686-49f7-aa43-93a9d9d2d963\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 09:08:26.453334 master-0 kubenswrapper[28766]: I0318 09:08:26.452724 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/b778f3f5-3686-49f7-aa43-93a9d9d2d963-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"b778f3f5-3686-49f7-aa43-93a9d9d2d963\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 09:08:26.453334 master-0 kubenswrapper[28766]: E0318 09:08:26.452960 28766 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s-tls: secret "prometheus-k8s-tls" not found Mar 18 09:08:26.453334 master-0 kubenswrapper[28766]: E0318 09:08:26.452965 28766 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s-thanos-sidecar-tls: secret "prometheus-k8s-thanos-sidecar-tls" not found Mar 18 09:08:26.453334 master-0 kubenswrapper[28766]: E0318 09:08:26.453048 28766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b778f3f5-3686-49f7-aa43-93a9d9d2d963-secret-prometheus-k8s-tls podName:b778f3f5-3686-49f7-aa43-93a9d9d2d963 nodeName:}" failed. No retries permitted until 2026-03-18 09:08:28.453024262 +0000 UTC m=+261.467282938 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "secret-prometheus-k8s-tls" (UniqueName: "kubernetes.io/secret/b778f3f5-3686-49f7-aa43-93a9d9d2d963-secret-prometheus-k8s-tls") pod "prometheus-k8s-0" (UID: "b778f3f5-3686-49f7-aa43-93a9d9d2d963") : secret "prometheus-k8s-tls" not found Mar 18 09:08:26.453334 master-0 kubenswrapper[28766]: E0318 09:08:26.453141 28766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b778f3f5-3686-49f7-aa43-93a9d9d2d963-secret-prometheus-k8s-thanos-sidecar-tls podName:b778f3f5-3686-49f7-aa43-93a9d9d2d963 nodeName:}" failed. No retries permitted until 2026-03-18 09:08:28.453089324 +0000 UTC m=+261.467348030 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "secret-prometheus-k8s-thanos-sidecar-tls" (UniqueName: "kubernetes.io/secret/b778f3f5-3686-49f7-aa43-93a9d9d2d963-secret-prometheus-k8s-thanos-sidecar-tls") pod "prometheus-k8s-0" (UID: "b778f3f5-3686-49f7-aa43-93a9d9d2d963") : secret "prometheus-k8s-thanos-sidecar-tls" not found Mar 18 09:08:28.534328 master-0 kubenswrapper[28766]: I0318 09:08:28.534241 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/b778f3f5-3686-49f7-aa43-93a9d9d2d963-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"b778f3f5-3686-49f7-aa43-93a9d9d2d963\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 09:08:28.535285 master-0 kubenswrapper[28766]: I0318 09:08:28.534417 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/b778f3f5-3686-49f7-aa43-93a9d9d2d963-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"b778f3f5-3686-49f7-aa43-93a9d9d2d963\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 09:08:28.535285 master-0 kubenswrapper[28766]: E0318 09:08:28.534487 28766 secret.go:189] Couldn't get secret 
openshift-monitoring/prometheus-k8s-tls: secret "prometheus-k8s-tls" not found Mar 18 09:08:28.535285 master-0 kubenswrapper[28766]: E0318 09:08:28.534627 28766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b778f3f5-3686-49f7-aa43-93a9d9d2d963-secret-prometheus-k8s-tls podName:b778f3f5-3686-49f7-aa43-93a9d9d2d963 nodeName:}" failed. No retries permitted until 2026-03-18 09:08:32.534598639 +0000 UTC m=+265.548857305 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "secret-prometheus-k8s-tls" (UniqueName: "kubernetes.io/secret/b778f3f5-3686-49f7-aa43-93a9d9d2d963-secret-prometheus-k8s-tls") pod "prometheus-k8s-0" (UID: "b778f3f5-3686-49f7-aa43-93a9d9d2d963") : secret "prometheus-k8s-tls" not found Mar 18 09:08:28.535285 master-0 kubenswrapper[28766]: E0318 09:08:28.534676 28766 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s-thanos-sidecar-tls: secret "prometheus-k8s-thanos-sidecar-tls" not found Mar 18 09:08:28.535285 master-0 kubenswrapper[28766]: E0318 09:08:28.534752 28766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b778f3f5-3686-49f7-aa43-93a9d9d2d963-secret-prometheus-k8s-thanos-sidecar-tls podName:b778f3f5-3686-49f7-aa43-93a9d9d2d963 nodeName:}" failed. No retries permitted until 2026-03-18 09:08:32.534731332 +0000 UTC m=+265.548990008 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "secret-prometheus-k8s-thanos-sidecar-tls" (UniqueName: "kubernetes.io/secret/b778f3f5-3686-49f7-aa43-93a9d9d2d963-secret-prometheus-k8s-thanos-sidecar-tls") pod "prometheus-k8s-0" (UID: "b778f3f5-3686-49f7-aa43-93a9d9d2d963") : secret "prometheus-k8s-thanos-sidecar-tls" not found Mar 18 09:08:29.625924 master-0 kubenswrapper[28766]: I0318 09:08:29.625801 28766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/alertmanager-main-0"] Mar 18 09:08:29.629679 master-0 kubenswrapper[28766]: I0318 09:08:29.629618 28766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/alertmanager-main-0" Mar 18 09:08:29.635268 master-0 kubenswrapper[28766]: I0318 09:08:29.635198 28766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-generated" Mar 18 09:08:29.635268 master-0 kubenswrapper[28766]: I0318 09:08:29.635208 28766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-tls" Mar 18 09:08:29.635784 master-0 kubenswrapper[28766]: I0318 09:08:29.635208 28766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-tls-assets-0" Mar 18 09:08:29.635784 master-0 kubenswrapper[28766]: I0318 09:08:29.635379 28766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy" Mar 18 09:08:29.635784 master-0 kubenswrapper[28766]: I0318 09:08:29.635270 28766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy-web" Mar 18 09:08:29.638275 master-0 kubenswrapper[28766]: I0318 09:08:29.638241 28766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-web-config" Mar 18 09:08:29.638718 master-0 kubenswrapper[28766]: I0318 09:08:29.638691 28766 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy-metric" Mar 18 09:08:29.644331 master-0 kubenswrapper[28766]: I0318 09:08:29.644284 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"alertmanager-trusted-ca-bundle" Mar 18 09:08:29.654912 master-0 kubenswrapper[28766]: I0318 09:08:29.654795 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/aaac568a-d210-428c-aef8-a9615d21e86e-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"aaac568a-d210-428c-aef8-a9615d21e86e\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 09:08:29.655160 master-0 kubenswrapper[28766]: I0318 09:08:29.654935 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/aaac568a-d210-428c-aef8-a9615d21e86e-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"aaac568a-d210-428c-aef8-a9615d21e86e\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 09:08:29.655160 master-0 kubenswrapper[28766]: I0318 09:08:29.655030 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/aaac568a-d210-428c-aef8-a9615d21e86e-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"aaac568a-d210-428c-aef8-a9615d21e86e\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 09:08:29.655160 master-0 kubenswrapper[28766]: I0318 09:08:29.655135 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/aaac568a-d210-428c-aef8-a9615d21e86e-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: 
\"aaac568a-d210-428c-aef8-a9615d21e86e\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 09:08:29.655315 master-0 kubenswrapper[28766]: I0318 09:08:29.655177 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/aaac568a-d210-428c-aef8-a9615d21e86e-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"aaac568a-d210-428c-aef8-a9615d21e86e\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 09:08:29.655315 master-0 kubenswrapper[28766]: I0318 09:08:29.655258 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/aaac568a-d210-428c-aef8-a9615d21e86e-config-out\") pod \"alertmanager-main-0\" (UID: \"aaac568a-d210-428c-aef8-a9615d21e86e\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 09:08:29.655409 master-0 kubenswrapper[28766]: I0318 09:08:29.655323 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/aaac568a-d210-428c-aef8-a9615d21e86e-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"aaac568a-d210-428c-aef8-a9615d21e86e\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 09:08:29.655409 master-0 kubenswrapper[28766]: I0318 09:08:29.655380 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/aaac568a-d210-428c-aef8-a9615d21e86e-web-config\") pod \"alertmanager-main-0\" (UID: \"aaac568a-d210-428c-aef8-a9615d21e86e\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 09:08:29.655551 master-0 kubenswrapper[28766]: I0318 09:08:29.655509 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/aaac568a-d210-428c-aef8-a9615d21e86e-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"aaac568a-d210-428c-aef8-a9615d21e86e\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 09:08:29.655841 master-0 kubenswrapper[28766]: I0318 09:08:29.655788 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/aaac568a-d210-428c-aef8-a9615d21e86e-config-volume\") pod \"alertmanager-main-0\" (UID: \"aaac568a-d210-428c-aef8-a9615d21e86e\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 09:08:29.656388 master-0 kubenswrapper[28766]: I0318 09:08:29.655900 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/aaac568a-d210-428c-aef8-a9615d21e86e-tls-assets\") pod \"alertmanager-main-0\" (UID: \"aaac568a-d210-428c-aef8-a9615d21e86e\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 09:08:29.656388 master-0 kubenswrapper[28766]: I0318 09:08:29.655980 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9lzpt\" (UniqueName: \"kubernetes.io/projected/aaac568a-d210-428c-aef8-a9615d21e86e-kube-api-access-9lzpt\") pod \"alertmanager-main-0\" (UID: \"aaac568a-d210-428c-aef8-a9615d21e86e\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 09:08:29.664618 master-0 kubenswrapper[28766]: I0318 09:08:29.664542 28766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/alertmanager-main-0"] Mar 18 09:08:29.760512 master-0 kubenswrapper[28766]: I0318 09:08:29.759146 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/aaac568a-d210-428c-aef8-a9615d21e86e-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: 
\"aaac568a-d210-428c-aef8-a9615d21e86e\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 09:08:29.760512 master-0 kubenswrapper[28766]: I0318 09:08:29.760120 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/aaac568a-d210-428c-aef8-a9615d21e86e-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"aaac568a-d210-428c-aef8-a9615d21e86e\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 09:08:29.760512 master-0 kubenswrapper[28766]: I0318 09:08:29.760266 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/aaac568a-d210-428c-aef8-a9615d21e86e-config-volume\") pod \"alertmanager-main-0\" (UID: \"aaac568a-d210-428c-aef8-a9615d21e86e\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 09:08:29.760512 master-0 kubenswrapper[28766]: I0318 09:08:29.760287 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/aaac568a-d210-428c-aef8-a9615d21e86e-tls-assets\") pod \"alertmanager-main-0\" (UID: \"aaac568a-d210-428c-aef8-a9615d21e86e\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 09:08:29.760878 master-0 kubenswrapper[28766]: I0318 09:08:29.760706 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9lzpt\" (UniqueName: \"kubernetes.io/projected/aaac568a-d210-428c-aef8-a9615d21e86e-kube-api-access-9lzpt\") pod \"alertmanager-main-0\" (UID: \"aaac568a-d210-428c-aef8-a9615d21e86e\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 09:08:29.760878 master-0 kubenswrapper[28766]: I0318 09:08:29.760744 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/aaac568a-d210-428c-aef8-a9615d21e86e-secret-alertmanager-kube-rbac-proxy-web\") pod 
\"alertmanager-main-0\" (UID: \"aaac568a-d210-428c-aef8-a9615d21e86e\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 09:08:29.760878 master-0 kubenswrapper[28766]: I0318 09:08:29.760767 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/aaac568a-d210-428c-aef8-a9615d21e86e-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"aaac568a-d210-428c-aef8-a9615d21e86e\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 09:08:29.760878 master-0 kubenswrapper[28766]: I0318 09:08:29.760801 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/aaac568a-d210-428c-aef8-a9615d21e86e-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"aaac568a-d210-428c-aef8-a9615d21e86e\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 09:08:29.760878 master-0 kubenswrapper[28766]: I0318 09:08:29.760823 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/aaac568a-d210-428c-aef8-a9615d21e86e-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"aaac568a-d210-428c-aef8-a9615d21e86e\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 09:08:29.761044 master-0 kubenswrapper[28766]: I0318 09:08:29.760842 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/aaac568a-d210-428c-aef8-a9615d21e86e-config-out\") pod \"alertmanager-main-0\" (UID: \"aaac568a-d210-428c-aef8-a9615d21e86e\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 09:08:29.761044 master-0 kubenswrapper[28766]: I0318 09:08:29.760934 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: 
\"kubernetes.io/secret/aaac568a-d210-428c-aef8-a9615d21e86e-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"aaac568a-d210-428c-aef8-a9615d21e86e\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 09:08:29.761044 master-0 kubenswrapper[28766]: I0318 09:08:29.760984 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/aaac568a-d210-428c-aef8-a9615d21e86e-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"aaac568a-d210-428c-aef8-a9615d21e86e\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 09:08:29.761044 master-0 kubenswrapper[28766]: I0318 09:08:29.761005 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/aaac568a-d210-428c-aef8-a9615d21e86e-web-config\") pod \"alertmanager-main-0\" (UID: \"aaac568a-d210-428c-aef8-a9615d21e86e\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 09:08:29.762097 master-0 kubenswrapper[28766]: I0318 09:08:29.762045 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/aaac568a-d210-428c-aef8-a9615d21e86e-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"aaac568a-d210-428c-aef8-a9615d21e86e\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 09:08:29.764478 master-0 kubenswrapper[28766]: I0318 09:08:29.764134 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/aaac568a-d210-428c-aef8-a9615d21e86e-config-volume\") pod \"alertmanager-main-0\" (UID: \"aaac568a-d210-428c-aef8-a9615d21e86e\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 09:08:29.764478 master-0 kubenswrapper[28766]: E0318 09:08:29.764222 28766 secret.go:189] Couldn't get secret 
openshift-monitoring/alertmanager-main-tls: secret "alertmanager-main-tls" not found Mar 18 09:08:29.764478 master-0 kubenswrapper[28766]: E0318 09:08:29.764278 28766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/aaac568a-d210-428c-aef8-a9615d21e86e-secret-alertmanager-main-tls podName:aaac568a-d210-428c-aef8-a9615d21e86e nodeName:}" failed. No retries permitted until 2026-03-18 09:08:30.264261708 +0000 UTC m=+263.278520384 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "secret-alertmanager-main-tls" (UniqueName: "kubernetes.io/secret/aaac568a-d210-428c-aef8-a9615d21e86e-secret-alertmanager-main-tls") pod "alertmanager-main-0" (UID: "aaac568a-d210-428c-aef8-a9615d21e86e") : secret "alertmanager-main-tls" not found Mar 18 09:08:29.766387 master-0 kubenswrapper[28766]: I0318 09:08:29.766348 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/aaac568a-d210-428c-aef8-a9615d21e86e-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"aaac568a-d210-428c-aef8-a9615d21e86e\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 09:08:29.766534 master-0 kubenswrapper[28766]: I0318 09:08:29.766505 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/aaac568a-d210-428c-aef8-a9615d21e86e-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"aaac568a-d210-428c-aef8-a9615d21e86e\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 09:08:29.767383 master-0 kubenswrapper[28766]: I0318 09:08:29.766975 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/aaac568a-d210-428c-aef8-a9615d21e86e-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"aaac568a-d210-428c-aef8-a9615d21e86e\") " 
pod="openshift-monitoring/alertmanager-main-0" Mar 18 09:08:29.767383 master-0 kubenswrapper[28766]: I0318 09:08:29.766989 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/aaac568a-d210-428c-aef8-a9615d21e86e-tls-assets\") pod \"alertmanager-main-0\" (UID: \"aaac568a-d210-428c-aef8-a9615d21e86e\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 09:08:29.769808 master-0 kubenswrapper[28766]: I0318 09:08:29.768486 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/aaac568a-d210-428c-aef8-a9615d21e86e-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"aaac568a-d210-428c-aef8-a9615d21e86e\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 09:08:29.769808 master-0 kubenswrapper[28766]: I0318 09:08:29.769051 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/aaac568a-d210-428c-aef8-a9615d21e86e-web-config\") pod \"alertmanager-main-0\" (UID: \"aaac568a-d210-428c-aef8-a9615d21e86e\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 09:08:29.770877 master-0 kubenswrapper[28766]: I0318 09:08:29.770828 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/aaac568a-d210-428c-aef8-a9615d21e86e-config-out\") pod \"alertmanager-main-0\" (UID: \"aaac568a-d210-428c-aef8-a9615d21e86e\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 09:08:29.792875 master-0 kubenswrapper[28766]: I0318 09:08:29.791016 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9lzpt\" (UniqueName: \"kubernetes.io/projected/aaac568a-d210-428c-aef8-a9615d21e86e-kube-api-access-9lzpt\") pod \"alertmanager-main-0\" (UID: \"aaac568a-d210-428c-aef8-a9615d21e86e\") " 
pod="openshift-monitoring/alertmanager-main-0" Mar 18 09:08:30.273742 master-0 kubenswrapper[28766]: I0318 09:08:30.272188 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/aaac568a-d210-428c-aef8-a9615d21e86e-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"aaac568a-d210-428c-aef8-a9615d21e86e\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 09:08:30.281875 master-0 kubenswrapper[28766]: I0318 09:08:30.277227 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/aaac568a-d210-428c-aef8-a9615d21e86e-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"aaac568a-d210-428c-aef8-a9615d21e86e\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 09:08:30.553117 master-0 kubenswrapper[28766]: I0318 09:08:30.552916 28766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/alertmanager-main-0" Mar 18 09:08:31.029502 master-0 kubenswrapper[28766]: I0318 09:08:31.029424 28766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/alertmanager-main-0"] Mar 18 09:08:31.240242 master-0 kubenswrapper[28766]: I0318 09:08:31.240170 28766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"aaac568a-d210-428c-aef8-a9615d21e86e","Type":"ContainerStarted","Data":"ae7e678fa52204ca33574d289dbc030257c95ee8f9b9b16971188a9fe8188448"} Mar 18 09:08:31.240242 master-0 kubenswrapper[28766]: I0318 09:08:31.240237 28766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"aaac568a-d210-428c-aef8-a9615d21e86e","Type":"ContainerStarted","Data":"330e682070e592ba18eaa2b0c0f5066a45a299a3b1f0566354ba7b7187fc0ff6"} Mar 18 09:08:32.249090 master-0 kubenswrapper[28766]: I0318 09:08:32.249010 28766 generic.go:334] "Generic (PLEG): container finished" podID="aaac568a-d210-428c-aef8-a9615d21e86e" containerID="ae7e678fa52204ca33574d289dbc030257c95ee8f9b9b16971188a9fe8188448" exitCode=0 Mar 18 09:08:32.250260 master-0 kubenswrapper[28766]: I0318 09:08:32.249076 28766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"aaac568a-d210-428c-aef8-a9615d21e86e","Type":"ContainerDied","Data":"ae7e678fa52204ca33574d289dbc030257c95ee8f9b9b16971188a9fe8188448"} Mar 18 09:08:32.615571 master-0 kubenswrapper[28766]: I0318 09:08:32.615264 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/b778f3f5-3686-49f7-aa43-93a9d9d2d963-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"b778f3f5-3686-49f7-aa43-93a9d9d2d963\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 09:08:32.615571 master-0 
kubenswrapper[28766]: I0318 09:08:32.615396 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/b778f3f5-3686-49f7-aa43-93a9d9d2d963-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"b778f3f5-3686-49f7-aa43-93a9d9d2d963\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 09:08:32.624933 master-0 kubenswrapper[28766]: I0318 09:08:32.619914 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/b778f3f5-3686-49f7-aa43-93a9d9d2d963-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"b778f3f5-3686-49f7-aa43-93a9d9d2d963\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 09:08:32.624933 master-0 kubenswrapper[28766]: I0318 09:08:32.619964 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/b778f3f5-3686-49f7-aa43-93a9d9d2d963-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"b778f3f5-3686-49f7-aa43-93a9d9d2d963\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 09:08:32.826311 master-0 kubenswrapper[28766]: I0318 09:08:32.826195 28766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0" Mar 18 09:08:33.289929 master-0 kubenswrapper[28766]: I0318 09:08:33.289781 28766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-k8s-0"] Mar 18 09:08:33.300362 master-0 kubenswrapper[28766]: W0318 09:08:33.300268 28766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb778f3f5_3686_49f7_aa43_93a9d9d2d963.slice/crio-57fa2a636fb67eb9ab9df654b1e9e5c8b9eb00cb935c242a8ec84342e57881d2 WatchSource:0}: Error finding container 57fa2a636fb67eb9ab9df654b1e9e5c8b9eb00cb935c242a8ec84342e57881d2: Status 404 returned error can't find the container with id 57fa2a636fb67eb9ab9df654b1e9e5c8b9eb00cb935c242a8ec84342e57881d2 Mar 18 09:08:33.662200 master-0 kubenswrapper[28766]: I0318 09:08:33.661516 28766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-5d98475dc4-pxrzb"] Mar 18 09:08:33.670325 master-0 kubenswrapper[28766]: I0318 09:08:33.663934 28766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-5d98475dc4-pxrzb" Mar 18 09:08:33.675359 master-0 kubenswrapper[28766]: I0318 09:08:33.674430 28766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-5d98475dc4-pxrzb"] Mar 18 09:08:33.748896 master-0 kubenswrapper[28766]: I0318 09:08:33.748811 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/2cc1dd11-2b02-4e44-87da-192703ee51c4-console-config\") pod \"console-5d98475dc4-pxrzb\" (UID: \"2cc1dd11-2b02-4e44-87da-192703ee51c4\") " pod="openshift-console/console-5d98475dc4-pxrzb" Mar 18 09:08:33.749231 master-0 kubenswrapper[28766]: I0318 09:08:33.749158 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/2cc1dd11-2b02-4e44-87da-192703ee51c4-oauth-serving-cert\") pod \"console-5d98475dc4-pxrzb\" (UID: \"2cc1dd11-2b02-4e44-87da-192703ee51c4\") " pod="openshift-console/console-5d98475dc4-pxrzb" Mar 18 09:08:33.749401 master-0 kubenswrapper[28766]: I0318 09:08:33.749372 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/2cc1dd11-2b02-4e44-87da-192703ee51c4-console-serving-cert\") pod \"console-5d98475dc4-pxrzb\" (UID: \"2cc1dd11-2b02-4e44-87da-192703ee51c4\") " pod="openshift-console/console-5d98475dc4-pxrzb" Mar 18 09:08:33.749582 master-0 kubenswrapper[28766]: I0318 09:08:33.749556 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2cc1dd11-2b02-4e44-87da-192703ee51c4-trusted-ca-bundle\") pod \"console-5d98475dc4-pxrzb\" (UID: \"2cc1dd11-2b02-4e44-87da-192703ee51c4\") " pod="openshift-console/console-5d98475dc4-pxrzb" Mar 18 09:08:33.749890 master-0 
kubenswrapper[28766]: I0318 09:08:33.749806 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/2cc1dd11-2b02-4e44-87da-192703ee51c4-console-oauth-config\") pod \"console-5d98475dc4-pxrzb\" (UID: \"2cc1dd11-2b02-4e44-87da-192703ee51c4\") " pod="openshift-console/console-5d98475dc4-pxrzb" Mar 18 09:08:33.749946 master-0 kubenswrapper[28766]: I0318 09:08:33.749899 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-927j7\" (UniqueName: \"kubernetes.io/projected/2cc1dd11-2b02-4e44-87da-192703ee51c4-kube-api-access-927j7\") pod \"console-5d98475dc4-pxrzb\" (UID: \"2cc1dd11-2b02-4e44-87da-192703ee51c4\") " pod="openshift-console/console-5d98475dc4-pxrzb" Mar 18 09:08:33.749988 master-0 kubenswrapper[28766]: I0318 09:08:33.749967 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/2cc1dd11-2b02-4e44-87da-192703ee51c4-service-ca\") pod \"console-5d98475dc4-pxrzb\" (UID: \"2cc1dd11-2b02-4e44-87da-192703ee51c4\") " pod="openshift-console/console-5d98475dc4-pxrzb" Mar 18 09:08:33.853146 master-0 kubenswrapper[28766]: I0318 09:08:33.853004 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/2cc1dd11-2b02-4e44-87da-192703ee51c4-console-serving-cert\") pod \"console-5d98475dc4-pxrzb\" (UID: \"2cc1dd11-2b02-4e44-87da-192703ee51c4\") " pod="openshift-console/console-5d98475dc4-pxrzb" Mar 18 09:08:33.853146 master-0 kubenswrapper[28766]: I0318 09:08:33.853104 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2cc1dd11-2b02-4e44-87da-192703ee51c4-trusted-ca-bundle\") pod \"console-5d98475dc4-pxrzb\" (UID: 
\"2cc1dd11-2b02-4e44-87da-192703ee51c4\") " pod="openshift-console/console-5d98475dc4-pxrzb" Mar 18 09:08:33.853795 master-0 kubenswrapper[28766]: I0318 09:08:33.853263 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/2cc1dd11-2b02-4e44-87da-192703ee51c4-console-oauth-config\") pod \"console-5d98475dc4-pxrzb\" (UID: \"2cc1dd11-2b02-4e44-87da-192703ee51c4\") " pod="openshift-console/console-5d98475dc4-pxrzb" Mar 18 09:08:33.853795 master-0 kubenswrapper[28766]: I0318 09:08:33.853376 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-927j7\" (UniqueName: \"kubernetes.io/projected/2cc1dd11-2b02-4e44-87da-192703ee51c4-kube-api-access-927j7\") pod \"console-5d98475dc4-pxrzb\" (UID: \"2cc1dd11-2b02-4e44-87da-192703ee51c4\") " pod="openshift-console/console-5d98475dc4-pxrzb" Mar 18 09:08:33.853795 master-0 kubenswrapper[28766]: I0318 09:08:33.853416 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/2cc1dd11-2b02-4e44-87da-192703ee51c4-service-ca\") pod \"console-5d98475dc4-pxrzb\" (UID: \"2cc1dd11-2b02-4e44-87da-192703ee51c4\") " pod="openshift-console/console-5d98475dc4-pxrzb" Mar 18 09:08:33.853795 master-0 kubenswrapper[28766]: I0318 09:08:33.853468 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/2cc1dd11-2b02-4e44-87da-192703ee51c4-console-config\") pod \"console-5d98475dc4-pxrzb\" (UID: \"2cc1dd11-2b02-4e44-87da-192703ee51c4\") " pod="openshift-console/console-5d98475dc4-pxrzb" Mar 18 09:08:33.853795 master-0 kubenswrapper[28766]: I0318 09:08:33.853512 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/2cc1dd11-2b02-4e44-87da-192703ee51c4-oauth-serving-cert\") 
pod \"console-5d98475dc4-pxrzb\" (UID: \"2cc1dd11-2b02-4e44-87da-192703ee51c4\") " pod="openshift-console/console-5d98475dc4-pxrzb" Mar 18 09:08:33.854505 master-0 kubenswrapper[28766]: I0318 09:08:33.854479 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/2cc1dd11-2b02-4e44-87da-192703ee51c4-oauth-serving-cert\") pod \"console-5d98475dc4-pxrzb\" (UID: \"2cc1dd11-2b02-4e44-87da-192703ee51c4\") " pod="openshift-console/console-5d98475dc4-pxrzb" Mar 18 09:08:33.855354 master-0 kubenswrapper[28766]: I0318 09:08:33.855325 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2cc1dd11-2b02-4e44-87da-192703ee51c4-trusted-ca-bundle\") pod \"console-5d98475dc4-pxrzb\" (UID: \"2cc1dd11-2b02-4e44-87da-192703ee51c4\") " pod="openshift-console/console-5d98475dc4-pxrzb" Mar 18 09:08:33.856477 master-0 kubenswrapper[28766]: I0318 09:08:33.856436 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/2cc1dd11-2b02-4e44-87da-192703ee51c4-service-ca\") pod \"console-5d98475dc4-pxrzb\" (UID: \"2cc1dd11-2b02-4e44-87da-192703ee51c4\") " pod="openshift-console/console-5d98475dc4-pxrzb" Mar 18 09:08:33.857294 master-0 kubenswrapper[28766]: I0318 09:08:33.856868 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/2cc1dd11-2b02-4e44-87da-192703ee51c4-console-config\") pod \"console-5d98475dc4-pxrzb\" (UID: \"2cc1dd11-2b02-4e44-87da-192703ee51c4\") " pod="openshift-console/console-5d98475dc4-pxrzb" Mar 18 09:08:33.858704 master-0 kubenswrapper[28766]: I0318 09:08:33.858643 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/2cc1dd11-2b02-4e44-87da-192703ee51c4-console-serving-cert\") pod 
\"console-5d98475dc4-pxrzb\" (UID: \"2cc1dd11-2b02-4e44-87da-192703ee51c4\") " pod="openshift-console/console-5d98475dc4-pxrzb" Mar 18 09:08:33.863670 master-0 kubenswrapper[28766]: I0318 09:08:33.863617 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/2cc1dd11-2b02-4e44-87da-192703ee51c4-console-oauth-config\") pod \"console-5d98475dc4-pxrzb\" (UID: \"2cc1dd11-2b02-4e44-87da-192703ee51c4\") " pod="openshift-console/console-5d98475dc4-pxrzb" Mar 18 09:08:33.871687 master-0 kubenswrapper[28766]: I0318 09:08:33.871625 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-927j7\" (UniqueName: \"kubernetes.io/projected/2cc1dd11-2b02-4e44-87da-192703ee51c4-kube-api-access-927j7\") pod \"console-5d98475dc4-pxrzb\" (UID: \"2cc1dd11-2b02-4e44-87da-192703ee51c4\") " pod="openshift-console/console-5d98475dc4-pxrzb" Mar 18 09:08:34.054730 master-0 kubenswrapper[28766]: I0318 09:08:34.054115 28766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-5d98475dc4-pxrzb" Mar 18 09:08:34.266908 master-0 kubenswrapper[28766]: I0318 09:08:34.266819 28766 generic.go:334] "Generic (PLEG): container finished" podID="b778f3f5-3686-49f7-aa43-93a9d9d2d963" containerID="6b7b0ae3b5c360b6a44bf9a29d25674305cfc1f63a0e601ee5f5b50ebddaec44" exitCode=0 Mar 18 09:08:34.266908 master-0 kubenswrapper[28766]: I0318 09:08:34.266903 28766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"b778f3f5-3686-49f7-aa43-93a9d9d2d963","Type":"ContainerDied","Data":"6b7b0ae3b5c360b6a44bf9a29d25674305cfc1f63a0e601ee5f5b50ebddaec44"} Mar 18 09:08:34.267266 master-0 kubenswrapper[28766]: I0318 09:08:34.266939 28766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"b778f3f5-3686-49f7-aa43-93a9d9d2d963","Type":"ContainerStarted","Data":"57fa2a636fb67eb9ab9df654b1e9e5c8b9eb00cb935c242a8ec84342e57881d2"} Mar 18 09:08:34.478424 master-0 kubenswrapper[28766]: I0318 09:08:34.478351 28766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-5d98475dc4-pxrzb"] Mar 18 09:08:34.481282 master-0 kubenswrapper[28766]: W0318 09:08:34.481217 28766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2cc1dd11_2b02_4e44_87da_192703ee51c4.slice/crio-358cdafd2820538c97eda87b1d23d0f6633403e241ca630f4a6d9d80ab6a5ec3 WatchSource:0}: Error finding container 358cdafd2820538c97eda87b1d23d0f6633403e241ca630f4a6d9d80ab6a5ec3: Status 404 returned error can't find the container with id 358cdafd2820538c97eda87b1d23d0f6633403e241ca630f4a6d9d80ab6a5ec3 Mar 18 09:08:35.278070 master-0 kubenswrapper[28766]: I0318 09:08:35.277977 28766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5d98475dc4-pxrzb" 
event={"ID":"2cc1dd11-2b02-4e44-87da-192703ee51c4","Type":"ContainerStarted","Data":"d87cc1c6297175b4511a2b59b82264d6815e501833df6db147ef8663c6818889"} Mar 18 09:08:35.278070 master-0 kubenswrapper[28766]: I0318 09:08:35.278034 28766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5d98475dc4-pxrzb" event={"ID":"2cc1dd11-2b02-4e44-87da-192703ee51c4","Type":"ContainerStarted","Data":"358cdafd2820538c97eda87b1d23d0f6633403e241ca630f4a6d9d80ab6a5ec3"} Mar 18 09:08:35.317511 master-0 kubenswrapper[28766]: I0318 09:08:35.317299 28766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-5d98475dc4-pxrzb" podStartSLOduration=2.316061598 podStartE2EDuration="2.316061598s" podCreationTimestamp="2026-03-18 09:08:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 09:08:35.305048714 +0000 UTC m=+268.319307400" watchObservedRunningTime="2026-03-18 09:08:35.316061598 +0000 UTC m=+268.330320264" Mar 18 09:08:36.295600 master-0 kubenswrapper[28766]: I0318 09:08:36.295520 28766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"aaac568a-d210-428c-aef8-a9615d21e86e","Type":"ContainerStarted","Data":"c42fb7422ac7ad98219dbe19310031e8c979d82e35f951abd1c4b93a217df09e"} Mar 18 09:08:36.295600 master-0 kubenswrapper[28766]: I0318 09:08:36.295599 28766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"aaac568a-d210-428c-aef8-a9615d21e86e","Type":"ContainerStarted","Data":"da1a18f43e66d0ebf0f1ce4e0b77bacc67fe859fd2b9ad83bd0ac2ada77e908c"} Mar 18 09:08:36.295600 master-0 kubenswrapper[28766]: I0318 09:08:36.295611 28766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" 
event={"ID":"aaac568a-d210-428c-aef8-a9615d21e86e","Type":"ContainerStarted","Data":"5cec363657ed2d6f88b974c49344a465fd6c04f6f6c15d5cff948fe9ab0caf8a"} Mar 18 09:08:36.296267 master-0 kubenswrapper[28766]: I0318 09:08:36.295621 28766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"aaac568a-d210-428c-aef8-a9615d21e86e","Type":"ContainerStarted","Data":"32063954c592db54349b392d8d3b9affd01ba0d2f0b97fe5ae3c5b7f910dd23b"} Mar 18 09:08:37.310881 master-0 kubenswrapper[28766]: I0318 09:08:37.309158 28766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"aaac568a-d210-428c-aef8-a9615d21e86e","Type":"ContainerStarted","Data":"607148cf2f31b4d2b289ff14f777faa62034b1ad20a07c4e5dbd21a6b3314223"} Mar 18 09:08:39.328882 master-0 kubenswrapper[28766]: I0318 09:08:39.328734 28766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"aaac568a-d210-428c-aef8-a9615d21e86e","Type":"ContainerStarted","Data":"67df0dae9204f1c92f10707c865f5fa3e6594184dec9fdac591e4d2601d2d757"} Mar 18 09:08:39.338726 master-0 kubenswrapper[28766]: I0318 09:08:39.338673 28766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"b778f3f5-3686-49f7-aa43-93a9d9d2d963","Type":"ContainerStarted","Data":"e3f0c7c55f5cf234845fb8bef39e8480726b507526c081510ee762d462617580"} Mar 18 09:08:39.338726 master-0 kubenswrapper[28766]: I0318 09:08:39.338721 28766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"b778f3f5-3686-49f7-aa43-93a9d9d2d963","Type":"ContainerStarted","Data":"a98430e91043f82570ca618ed3e3ad2b3020f5254f4e72348a938b9fb8bd7ad9"} Mar 18 09:08:39.338918 master-0 kubenswrapper[28766]: I0318 09:08:39.338733 28766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" 
event={"ID":"b778f3f5-3686-49f7-aa43-93a9d9d2d963","Type":"ContainerStarted","Data":"7ff38804300953e3ef4f5d6f91356986cd39b60c3e3f568d285c624a78fd1091"} Mar 18 09:08:39.338918 master-0 kubenswrapper[28766]: I0318 09:08:39.338742 28766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"b778f3f5-3686-49f7-aa43-93a9d9d2d963","Type":"ContainerStarted","Data":"ee6f8d5df46611ef542ab38b996fde68970f5dfe619be828f7dad93256d16dca"} Mar 18 09:08:39.338918 master-0 kubenswrapper[28766]: I0318 09:08:39.338751 28766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"b778f3f5-3686-49f7-aa43-93a9d9d2d963","Type":"ContainerStarted","Data":"957a895ff62da893a4c1a03bf354a1cefafd54f4676e5b7de0e164a123d2bb13"} Mar 18 09:08:39.338918 master-0 kubenswrapper[28766]: I0318 09:08:39.338761 28766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"b778f3f5-3686-49f7-aa43-93a9d9d2d963","Type":"ContainerStarted","Data":"55b8e0d4aba817530245bc01f92b473eda99a78832130f22743eba9fd5d6de22"} Mar 18 09:08:39.368526 master-0 kubenswrapper[28766]: I0318 09:08:39.368282 28766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/alertmanager-main-0" podStartSLOduration=7.096652167 podStartE2EDuration="10.36825849s" podCreationTimestamp="2026-03-18 09:08:29 +0000 UTC" firstStartedPulling="2026-03-18 09:08:32.252072739 +0000 UTC m=+265.266331445" lastFinishedPulling="2026-03-18 09:08:35.523679102 +0000 UTC m=+268.537937768" observedRunningTime="2026-03-18 09:08:39.365338164 +0000 UTC m=+272.379596830" watchObservedRunningTime="2026-03-18 09:08:39.36825849 +0000 UTC m=+272.382517166" Mar 18 09:08:39.439483 master-0 kubenswrapper[28766]: I0318 09:08:39.439353 28766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-k8s-0" podStartSLOduration=11.415487412 
podStartE2EDuration="15.439312912s" podCreationTimestamp="2026-03-18 09:08:24 +0000 UTC" firstStartedPulling="2026-03-18 09:08:34.269932692 +0000 UTC m=+267.284191358" lastFinishedPulling="2026-03-18 09:08:38.293758192 +0000 UTC m=+271.308016858" observedRunningTime="2026-03-18 09:08:39.430357251 +0000 UTC m=+272.444615907" watchObservedRunningTime="2026-03-18 09:08:39.439312912 +0000 UTC m=+272.453571598" Mar 18 09:08:42.074732 master-0 kubenswrapper[28766]: I0318 09:08:42.074670 28766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-gfsvj"] Mar 18 09:08:42.075693 master-0 kubenswrapper[28766]: I0318 09:08:42.075664 28766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-gfsvj" Mar 18 09:08:42.084056 master-0 kubenswrapper[28766]: I0318 09:08:42.084007 28766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-f24rw" Mar 18 09:08:42.085704 master-0 kubenswrapper[28766]: I0318 09:08:42.085652 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Mar 18 09:08:42.150234 master-0 kubenswrapper[28766]: I0318 09:08:42.150159 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cwr5z\" (UniqueName: \"kubernetes.io/projected/5c921938-2ae3-4b48-838b-14822da65961-kube-api-access-cwr5z\") pod \"node-ca-gfsvj\" (UID: \"5c921938-2ae3-4b48-838b-14822da65961\") " pod="openshift-image-registry/node-ca-gfsvj" Mar 18 09:08:42.150463 master-0 kubenswrapper[28766]: I0318 09:08:42.150257 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5c921938-2ae3-4b48-838b-14822da65961-host\") pod \"node-ca-gfsvj\" (UID: \"5c921938-2ae3-4b48-838b-14822da65961\") " pod="openshift-image-registry/node-ca-gfsvj" Mar 18 
09:08:42.150641 master-0 kubenswrapper[28766]: I0318 09:08:42.150560 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/5c921938-2ae3-4b48-838b-14822da65961-serviceca\") pod \"node-ca-gfsvj\" (UID: \"5c921938-2ae3-4b48-838b-14822da65961\") " pod="openshift-image-registry/node-ca-gfsvj" Mar 18 09:08:42.252427 master-0 kubenswrapper[28766]: I0318 09:08:42.252310 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cwr5z\" (UniqueName: \"kubernetes.io/projected/5c921938-2ae3-4b48-838b-14822da65961-kube-api-access-cwr5z\") pod \"node-ca-gfsvj\" (UID: \"5c921938-2ae3-4b48-838b-14822da65961\") " pod="openshift-image-registry/node-ca-gfsvj" Mar 18 09:08:42.252721 master-0 kubenswrapper[28766]: I0318 09:08:42.252477 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5c921938-2ae3-4b48-838b-14822da65961-host\") pod \"node-ca-gfsvj\" (UID: \"5c921938-2ae3-4b48-838b-14822da65961\") " pod="openshift-image-registry/node-ca-gfsvj" Mar 18 09:08:42.252721 master-0 kubenswrapper[28766]: I0318 09:08:42.252544 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/5c921938-2ae3-4b48-838b-14822da65961-serviceca\") pod \"node-ca-gfsvj\" (UID: \"5c921938-2ae3-4b48-838b-14822da65961\") " pod="openshift-image-registry/node-ca-gfsvj" Mar 18 09:08:42.252917 master-0 kubenswrapper[28766]: I0318 09:08:42.252801 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5c921938-2ae3-4b48-838b-14822da65961-host\") pod \"node-ca-gfsvj\" (UID: \"5c921938-2ae3-4b48-838b-14822da65961\") " pod="openshift-image-registry/node-ca-gfsvj" Mar 18 09:08:42.253590 master-0 kubenswrapper[28766]: I0318 09:08:42.253514 28766 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/5c921938-2ae3-4b48-838b-14822da65961-serviceca\") pod \"node-ca-gfsvj\" (UID: \"5c921938-2ae3-4b48-838b-14822da65961\") " pod="openshift-image-registry/node-ca-gfsvj" Mar 18 09:08:42.276885 master-0 kubenswrapper[28766]: I0318 09:08:42.276797 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cwr5z\" (UniqueName: \"kubernetes.io/projected/5c921938-2ae3-4b48-838b-14822da65961-kube-api-access-cwr5z\") pod \"node-ca-gfsvj\" (UID: \"5c921938-2ae3-4b48-838b-14822da65961\") " pod="openshift-image-registry/node-ca-gfsvj" Mar 18 09:08:42.414901 master-0 kubenswrapper[28766]: I0318 09:08:42.414669 28766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-gfsvj" Mar 18 09:08:42.827259 master-0 kubenswrapper[28766]: I0318 09:08:42.827180 28766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-k8s-0" Mar 18 09:08:43.129503 master-0 kubenswrapper[28766]: I0318 09:08:43.129369 28766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-monitoring/metrics-server-547c985987-bff72" Mar 18 09:08:43.134662 master-0 kubenswrapper[28766]: I0318 09:08:43.134621 28766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/metrics-server-547c985987-bff72" Mar 18 09:08:43.370986 master-0 kubenswrapper[28766]: I0318 09:08:43.370921 28766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-gfsvj" event={"ID":"5c921938-2ae3-4b48-838b-14822da65961","Type":"ContainerStarted","Data":"0e1c0e7b3b16ff79cb3e06a4839c0e8d64bf94d092e6a6134f2fb32536d60efb"} Mar 18 09:08:44.055241 master-0 kubenswrapper[28766]: I0318 09:08:44.055127 28766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-console/console-5d98475dc4-pxrzb" Mar 18 09:08:44.055820 master-0 kubenswrapper[28766]: I0318 09:08:44.055725 28766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-5d98475dc4-pxrzb" Mar 18 09:08:44.063529 master-0 kubenswrapper[28766]: I0318 09:08:44.063425 28766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-5d98475dc4-pxrzb" Mar 18 09:08:44.415972 master-0 kubenswrapper[28766]: I0318 09:08:44.413710 28766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-5d98475dc4-pxrzb" Mar 18 09:08:44.645519 master-0 kubenswrapper[28766]: I0318 09:08:44.645391 28766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-5644577ff9-fncm4"] Mar 18 09:08:46.418053 master-0 kubenswrapper[28766]: I0318 09:08:46.417968 28766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-gfsvj" event={"ID":"5c921938-2ae3-4b48-838b-14822da65961","Type":"ContainerStarted","Data":"0c53f87b8a9a0fc9c605f100b595079f139a683eb1a1b657b81b446b9cafc465"} Mar 18 09:08:46.444619 master-0 kubenswrapper[28766]: I0318 09:08:46.444198 28766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-gfsvj" podStartSLOduration=1.730945788 podStartE2EDuration="4.444181062s" podCreationTimestamp="2026-03-18 09:08:42 +0000 UTC" firstStartedPulling="2026-03-18 09:08:42.457957232 +0000 UTC m=+275.472215928" lastFinishedPulling="2026-03-18 09:08:45.171192536 +0000 UTC m=+278.185451202" observedRunningTime="2026-03-18 09:08:46.442905249 +0000 UTC m=+279.457163905" watchObservedRunningTime="2026-03-18 09:08:46.444181062 +0000 UTC m=+279.458439728" Mar 18 09:09:07.239247 master-0 kubenswrapper[28766]: I0318 09:09:07.239198 28766 kubelet.go:1505] "Image garbage collection succeeded" Mar 18 09:09:07.584960 master-0 kubenswrapper[28766]: I0318 
09:09:07.584793 28766 scope.go:117] "RemoveContainer" containerID="83c47aaabc2b561d44e630d0889d72720d976ad68c17142beae85f320c2926a1" Mar 18 09:09:09.701526 master-0 kubenswrapper[28766]: I0318 09:09:09.701410 28766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-5644577ff9-fncm4" podUID="adbe8207-26d0-4d0e-aacc-5f321184b53c" containerName="console" containerID="cri-o://2bbf620e4665e793bc12f34d68a29d950e95e05fc4cd94607222bfe45d55886f" gracePeriod=15 Mar 18 09:09:10.119641 master-0 kubenswrapper[28766]: I0318 09:09:10.119581 28766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-5644577ff9-fncm4_adbe8207-26d0-4d0e-aacc-5f321184b53c/console/0.log" Mar 18 09:09:10.119973 master-0 kubenswrapper[28766]: I0318 09:09:10.119690 28766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-5644577ff9-fncm4" Mar 18 09:09:10.223701 master-0 kubenswrapper[28766]: I0318 09:09:10.220798 28766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/adbe8207-26d0-4d0e-aacc-5f321184b53c-console-serving-cert\") pod \"adbe8207-26d0-4d0e-aacc-5f321184b53c\" (UID: \"adbe8207-26d0-4d0e-aacc-5f321184b53c\") " Mar 18 09:09:10.223701 master-0 kubenswrapper[28766]: I0318 09:09:10.220919 28766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/adbe8207-26d0-4d0e-aacc-5f321184b53c-service-ca\") pod \"adbe8207-26d0-4d0e-aacc-5f321184b53c\" (UID: \"adbe8207-26d0-4d0e-aacc-5f321184b53c\") " Mar 18 09:09:10.223701 master-0 kubenswrapper[28766]: I0318 09:09:10.221007 28766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/adbe8207-26d0-4d0e-aacc-5f321184b53c-console-config\") pod 
\"adbe8207-26d0-4d0e-aacc-5f321184b53c\" (UID: \"adbe8207-26d0-4d0e-aacc-5f321184b53c\") " Mar 18 09:09:10.223701 master-0 kubenswrapper[28766]: I0318 09:09:10.221093 28766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/adbe8207-26d0-4d0e-aacc-5f321184b53c-oauth-serving-cert\") pod \"adbe8207-26d0-4d0e-aacc-5f321184b53c\" (UID: \"adbe8207-26d0-4d0e-aacc-5f321184b53c\") " Mar 18 09:09:10.223701 master-0 kubenswrapper[28766]: I0318 09:09:10.221137 28766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/adbe8207-26d0-4d0e-aacc-5f321184b53c-console-oauth-config\") pod \"adbe8207-26d0-4d0e-aacc-5f321184b53c\" (UID: \"adbe8207-26d0-4d0e-aacc-5f321184b53c\") " Mar 18 09:09:10.223701 master-0 kubenswrapper[28766]: I0318 09:09:10.221163 28766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5tnwj\" (UniqueName: \"kubernetes.io/projected/adbe8207-26d0-4d0e-aacc-5f321184b53c-kube-api-access-5tnwj\") pod \"adbe8207-26d0-4d0e-aacc-5f321184b53c\" (UID: \"adbe8207-26d0-4d0e-aacc-5f321184b53c\") " Mar 18 09:09:10.223701 master-0 kubenswrapper[28766]: I0318 09:09:10.221208 28766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/adbe8207-26d0-4d0e-aacc-5f321184b53c-trusted-ca-bundle\") pod \"adbe8207-26d0-4d0e-aacc-5f321184b53c\" (UID: \"adbe8207-26d0-4d0e-aacc-5f321184b53c\") " Mar 18 09:09:10.223701 master-0 kubenswrapper[28766]: I0318 09:09:10.221948 28766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/adbe8207-26d0-4d0e-aacc-5f321184b53c-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "adbe8207-26d0-4d0e-aacc-5f321184b53c" (UID: "adbe8207-26d0-4d0e-aacc-5f321184b53c"). 
InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 09:09:10.223701 master-0 kubenswrapper[28766]: I0318 09:09:10.222326 28766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/adbe8207-26d0-4d0e-aacc-5f321184b53c-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "adbe8207-26d0-4d0e-aacc-5f321184b53c" (UID: "adbe8207-26d0-4d0e-aacc-5f321184b53c"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 09:09:10.223701 master-0 kubenswrapper[28766]: I0318 09:09:10.222954 28766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/adbe8207-26d0-4d0e-aacc-5f321184b53c-service-ca" (OuterVolumeSpecName: "service-ca") pod "adbe8207-26d0-4d0e-aacc-5f321184b53c" (UID: "adbe8207-26d0-4d0e-aacc-5f321184b53c"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 09:09:10.228056 master-0 kubenswrapper[28766]: I0318 09:09:10.224657 28766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/adbe8207-26d0-4d0e-aacc-5f321184b53c-console-config" (OuterVolumeSpecName: "console-config") pod "adbe8207-26d0-4d0e-aacc-5f321184b53c" (UID: "adbe8207-26d0-4d0e-aacc-5f321184b53c"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 09:09:10.228056 master-0 kubenswrapper[28766]: I0318 09:09:10.226258 28766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/adbe8207-26d0-4d0e-aacc-5f321184b53c-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "adbe8207-26d0-4d0e-aacc-5f321184b53c" (UID: "adbe8207-26d0-4d0e-aacc-5f321184b53c"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 09:09:10.228056 master-0 kubenswrapper[28766]: I0318 09:09:10.226923 28766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/adbe8207-26d0-4d0e-aacc-5f321184b53c-kube-api-access-5tnwj" (OuterVolumeSpecName: "kube-api-access-5tnwj") pod "adbe8207-26d0-4d0e-aacc-5f321184b53c" (UID: "adbe8207-26d0-4d0e-aacc-5f321184b53c"). InnerVolumeSpecName "kube-api-access-5tnwj". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 09:09:10.229144 master-0 kubenswrapper[28766]: I0318 09:09:10.229103 28766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/adbe8207-26d0-4d0e-aacc-5f321184b53c-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "adbe8207-26d0-4d0e-aacc-5f321184b53c" (UID: "adbe8207-26d0-4d0e-aacc-5f321184b53c"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 09:09:10.324214 master-0 kubenswrapper[28766]: I0318 09:09:10.323365 28766 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/adbe8207-26d0-4d0e-aacc-5f321184b53c-oauth-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 18 09:09:10.324214 master-0 kubenswrapper[28766]: I0318 09:09:10.323422 28766 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/adbe8207-26d0-4d0e-aacc-5f321184b53c-console-oauth-config\") on node \"master-0\" DevicePath \"\"" Mar 18 09:09:10.324214 master-0 kubenswrapper[28766]: I0318 09:09:10.323435 28766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5tnwj\" (UniqueName: \"kubernetes.io/projected/adbe8207-26d0-4d0e-aacc-5f321184b53c-kube-api-access-5tnwj\") on node \"master-0\" DevicePath \"\"" Mar 18 09:09:10.324214 master-0 kubenswrapper[28766]: I0318 09:09:10.323450 28766 reconciler_common.go:293] 
"Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/adbe8207-26d0-4d0e-aacc-5f321184b53c-trusted-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 18 09:09:10.324214 master-0 kubenswrapper[28766]: I0318 09:09:10.323463 28766 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/adbe8207-26d0-4d0e-aacc-5f321184b53c-console-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 18 09:09:10.324214 master-0 kubenswrapper[28766]: I0318 09:09:10.323476 28766 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/adbe8207-26d0-4d0e-aacc-5f321184b53c-service-ca\") on node \"master-0\" DevicePath \"\"" Mar 18 09:09:10.324214 master-0 kubenswrapper[28766]: I0318 09:09:10.323489 28766 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/adbe8207-26d0-4d0e-aacc-5f321184b53c-console-config\") on node \"master-0\" DevicePath \"\"" Mar 18 09:09:10.643371 master-0 kubenswrapper[28766]: I0318 09:09:10.643213 28766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-5644577ff9-fncm4_adbe8207-26d0-4d0e-aacc-5f321184b53c/console/0.log" Mar 18 09:09:10.643371 master-0 kubenswrapper[28766]: I0318 09:09:10.643291 28766 generic.go:334] "Generic (PLEG): container finished" podID="adbe8207-26d0-4d0e-aacc-5f321184b53c" containerID="2bbf620e4665e793bc12f34d68a29d950e95e05fc4cd94607222bfe45d55886f" exitCode=2 Mar 18 09:09:10.643371 master-0 kubenswrapper[28766]: I0318 09:09:10.643336 28766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5644577ff9-fncm4" event={"ID":"adbe8207-26d0-4d0e-aacc-5f321184b53c","Type":"ContainerDied","Data":"2bbf620e4665e793bc12f34d68a29d950e95e05fc4cd94607222bfe45d55886f"} Mar 18 09:09:10.643371 master-0 kubenswrapper[28766]: I0318 09:09:10.643374 28766 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-console/console-5644577ff9-fncm4" event={"ID":"adbe8207-26d0-4d0e-aacc-5f321184b53c","Type":"ContainerDied","Data":"407299c71bfa0c2dff8fce0278ae24c5100c1a00f719164511f4e8e190eaf411"} Mar 18 09:09:10.643703 master-0 kubenswrapper[28766]: I0318 09:09:10.643406 28766 scope.go:117] "RemoveContainer" containerID="2bbf620e4665e793bc12f34d68a29d950e95e05fc4cd94607222bfe45d55886f" Mar 18 09:09:10.643703 master-0 kubenswrapper[28766]: I0318 09:09:10.643552 28766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-5644577ff9-fncm4" Mar 18 09:09:10.665714 master-0 kubenswrapper[28766]: I0318 09:09:10.665534 28766 scope.go:117] "RemoveContainer" containerID="2bbf620e4665e793bc12f34d68a29d950e95e05fc4cd94607222bfe45d55886f" Mar 18 09:09:10.666829 master-0 kubenswrapper[28766]: E0318 09:09:10.666537 28766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2bbf620e4665e793bc12f34d68a29d950e95e05fc4cd94607222bfe45d55886f\": container with ID starting with 2bbf620e4665e793bc12f34d68a29d950e95e05fc4cd94607222bfe45d55886f not found: ID does not exist" containerID="2bbf620e4665e793bc12f34d68a29d950e95e05fc4cd94607222bfe45d55886f" Mar 18 09:09:10.666942 master-0 kubenswrapper[28766]: I0318 09:09:10.666880 28766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2bbf620e4665e793bc12f34d68a29d950e95e05fc4cd94607222bfe45d55886f"} err="failed to get container status \"2bbf620e4665e793bc12f34d68a29d950e95e05fc4cd94607222bfe45d55886f\": rpc error: code = NotFound desc = could not find container \"2bbf620e4665e793bc12f34d68a29d950e95e05fc4cd94607222bfe45d55886f\": container with ID starting with 2bbf620e4665e793bc12f34d68a29d950e95e05fc4cd94607222bfe45d55886f not found: ID does not exist" Mar 18 09:09:10.685974 master-0 kubenswrapper[28766]: I0318 09:09:10.685886 28766 kubelet.go:2437] 
"SyncLoop DELETE" source="api" pods=["openshift-console/console-5644577ff9-fncm4"] Mar 18 09:09:10.691762 master-0 kubenswrapper[28766]: I0318 09:09:10.691707 28766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-5644577ff9-fncm4"] Mar 18 09:09:11.242432 master-0 kubenswrapper[28766]: I0318 09:09:11.242369 28766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="adbe8207-26d0-4d0e-aacc-5f321184b53c" path="/var/lib/kubelet/pods/adbe8207-26d0-4d0e-aacc-5f321184b53c/volumes" Mar 18 09:09:20.230186 master-0 kubenswrapper[28766]: E0318 09:09:20.230051 28766 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[networking-console-plugin-cert], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openshift-network-console/networking-console-plugin-7c6b76c555-476ck" podUID="a00bebfb-2c54-4888-9200-a5b96420fd37" Mar 18 09:09:20.752385 master-0 kubenswrapper[28766]: I0318 09:09:20.752253 28766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-7c6b76c555-476ck" Mar 18 09:09:25.321447 master-0 kubenswrapper[28766]: I0318 09:09:25.321309 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/a00bebfb-2c54-4888-9200-a5b96420fd37-networking-console-plugin-cert\") pod \"networking-console-plugin-7c6b76c555-476ck\" (UID: \"a00bebfb-2c54-4888-9200-a5b96420fd37\") " pod="openshift-network-console/networking-console-plugin-7c6b76c555-476ck" Mar 18 09:09:25.329992 master-0 kubenswrapper[28766]: I0318 09:09:25.326177 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/a00bebfb-2c54-4888-9200-a5b96420fd37-networking-console-plugin-cert\") pod \"networking-console-plugin-7c6b76c555-476ck\" (UID: \"a00bebfb-2c54-4888-9200-a5b96420fd37\") " pod="openshift-network-console/networking-console-plugin-7c6b76c555-476ck" Mar 18 09:09:25.554035 master-0 kubenswrapper[28766]: I0318 09:09:25.553938 28766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-7c6b76c555-476ck" Mar 18 09:09:25.983077 master-0 kubenswrapper[28766]: I0318 09:09:25.983019 28766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-network-console/networking-console-plugin-7c6b76c555-476ck"] Mar 18 09:09:25.987118 master-0 kubenswrapper[28766]: W0318 09:09:25.987034 28766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda00bebfb_2c54_4888_9200_a5b96420fd37.slice/crio-121024a9d4c3e5580894e78d898d0460a4d0d8d7596c966beaff1433ad1d87d9 WatchSource:0}: Error finding container 121024a9d4c3e5580894e78d898d0460a4d0d8d7596c966beaff1433ad1d87d9: Status 404 returned error can't find the container with id 121024a9d4c3e5580894e78d898d0460a4d0d8d7596c966beaff1433ad1d87d9 Mar 18 09:09:26.804018 master-0 kubenswrapper[28766]: I0318 09:09:26.803912 28766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-7c6b76c555-476ck" event={"ID":"a00bebfb-2c54-4888-9200-a5b96420fd37","Type":"ContainerStarted","Data":"121024a9d4c3e5580894e78d898d0460a4d0d8d7596c966beaff1433ad1d87d9"} Mar 18 09:09:28.823578 master-0 kubenswrapper[28766]: I0318 09:09:28.823419 28766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-7c6b76c555-476ck" event={"ID":"a00bebfb-2c54-4888-9200-a5b96420fd37","Type":"ContainerStarted","Data":"b0b5cd163b6781a87d9e1762f30d2f1dc73ef200d2739e02837c1d02e5e55fa6"} Mar 18 09:09:28.850912 master-0 kubenswrapper[28766]: I0318 09:09:28.850691 28766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-network-console/networking-console-plugin-7c6b76c555-476ck" podStartSLOduration=129.305342605 podStartE2EDuration="2m11.850582415s" podCreationTimestamp="2026-03-18 09:07:17 +0000 UTC" firstStartedPulling="2026-03-18 09:09:25.99522001 +0000 UTC 
m=+319.009478706" lastFinishedPulling="2026-03-18 09:09:28.54045985 +0000 UTC m=+321.554718516" observedRunningTime="2026-03-18 09:09:28.850333339 +0000 UTC m=+321.864592015" watchObservedRunningTime="2026-03-18 09:09:28.850582415 +0000 UTC m=+321.864841081" Mar 18 09:09:32.826569 master-0 kubenswrapper[28766]: I0318 09:09:32.826493 28766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-monitoring/prometheus-k8s-0" Mar 18 09:09:32.865123 master-0 kubenswrapper[28766]: I0318 09:09:32.865070 28766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-monitoring/prometheus-k8s-0" Mar 18 09:09:32.906811 master-0 kubenswrapper[28766]: I0318 09:09:32.906730 28766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/prometheus-k8s-0" Mar 18 09:09:55.469173 master-0 kubenswrapper[28766]: I0318 09:09:55.469018 28766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-6c699958d9-6qrdl"] Mar 18 09:09:55.470182 master-0 kubenswrapper[28766]: E0318 09:09:55.469509 28766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="adbe8207-26d0-4d0e-aacc-5f321184b53c" containerName="console" Mar 18 09:09:55.470182 master-0 kubenswrapper[28766]: I0318 09:09:55.469533 28766 state_mem.go:107] "Deleted CPUSet assignment" podUID="adbe8207-26d0-4d0e-aacc-5f321184b53c" containerName="console" Mar 18 09:09:55.470182 master-0 kubenswrapper[28766]: I0318 09:09:55.469759 28766 memory_manager.go:354] "RemoveStaleState removing state" podUID="adbe8207-26d0-4d0e-aacc-5f321184b53c" containerName="console" Mar 18 09:09:55.470727 master-0 kubenswrapper[28766]: I0318 09:09:55.470682 28766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-6c699958d9-6qrdl" Mar 18 09:09:55.485550 master-0 kubenswrapper[28766]: I0318 09:09:55.485484 28766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-6c699958d9-6qrdl"] Mar 18 09:09:55.498041 master-0 kubenswrapper[28766]: I0318 09:09:55.497884 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c0d14eb4-043b-4c56-a271-261d96a2e4f7-trusted-ca-bundle\") pod \"console-6c699958d9-6qrdl\" (UID: \"c0d14eb4-043b-4c56-a271-261d96a2e4f7\") " pod="openshift-console/console-6c699958d9-6qrdl" Mar 18 09:09:55.498041 master-0 kubenswrapper[28766]: I0318 09:09:55.497920 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/c0d14eb4-043b-4c56-a271-261d96a2e4f7-console-config\") pod \"console-6c699958d9-6qrdl\" (UID: \"c0d14eb4-043b-4c56-a271-261d96a2e4f7\") " pod="openshift-console/console-6c699958d9-6qrdl" Mar 18 09:09:55.498041 master-0 kubenswrapper[28766]: I0318 09:09:55.497963 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/c0d14eb4-043b-4c56-a271-261d96a2e4f7-oauth-serving-cert\") pod \"console-6c699958d9-6qrdl\" (UID: \"c0d14eb4-043b-4c56-a271-261d96a2e4f7\") " pod="openshift-console/console-6c699958d9-6qrdl" Mar 18 09:09:55.498041 master-0 kubenswrapper[28766]: I0318 09:09:55.497989 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/c0d14eb4-043b-4c56-a271-261d96a2e4f7-service-ca\") pod \"console-6c699958d9-6qrdl\" (UID: \"c0d14eb4-043b-4c56-a271-261d96a2e4f7\") " pod="openshift-console/console-6c699958d9-6qrdl" Mar 18 09:09:55.498041 master-0 
kubenswrapper[28766]: I0318 09:09:55.498007 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/c0d14eb4-043b-4c56-a271-261d96a2e4f7-console-oauth-config\") pod \"console-6c699958d9-6qrdl\" (UID: \"c0d14eb4-043b-4c56-a271-261d96a2e4f7\") " pod="openshift-console/console-6c699958d9-6qrdl" Mar 18 09:09:55.498041 master-0 kubenswrapper[28766]: I0318 09:09:55.498038 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-szx55\" (UniqueName: \"kubernetes.io/projected/c0d14eb4-043b-4c56-a271-261d96a2e4f7-kube-api-access-szx55\") pod \"console-6c699958d9-6qrdl\" (UID: \"c0d14eb4-043b-4c56-a271-261d96a2e4f7\") " pod="openshift-console/console-6c699958d9-6qrdl" Mar 18 09:09:55.498041 master-0 kubenswrapper[28766]: I0318 09:09:55.498065 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/c0d14eb4-043b-4c56-a271-261d96a2e4f7-console-serving-cert\") pod \"console-6c699958d9-6qrdl\" (UID: \"c0d14eb4-043b-4c56-a271-261d96a2e4f7\") " pod="openshift-console/console-6c699958d9-6qrdl" Mar 18 09:09:55.598963 master-0 kubenswrapper[28766]: I0318 09:09:55.598842 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c0d14eb4-043b-4c56-a271-261d96a2e4f7-trusted-ca-bundle\") pod \"console-6c699958d9-6qrdl\" (UID: \"c0d14eb4-043b-4c56-a271-261d96a2e4f7\") " pod="openshift-console/console-6c699958d9-6qrdl" Mar 18 09:09:55.598963 master-0 kubenswrapper[28766]: I0318 09:09:55.598919 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/c0d14eb4-043b-4c56-a271-261d96a2e4f7-console-config\") pod \"console-6c699958d9-6qrdl\" (UID: 
\"c0d14eb4-043b-4c56-a271-261d96a2e4f7\") " pod="openshift-console/console-6c699958d9-6qrdl" Mar 18 09:09:55.598963 master-0 kubenswrapper[28766]: I0318 09:09:55.598962 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/c0d14eb4-043b-4c56-a271-261d96a2e4f7-oauth-serving-cert\") pod \"console-6c699958d9-6qrdl\" (UID: \"c0d14eb4-043b-4c56-a271-261d96a2e4f7\") " pod="openshift-console/console-6c699958d9-6qrdl" Mar 18 09:09:55.599982 master-0 kubenswrapper[28766]: I0318 09:09:55.599950 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/c0d14eb4-043b-4c56-a271-261d96a2e4f7-oauth-serving-cert\") pod \"console-6c699958d9-6qrdl\" (UID: \"c0d14eb4-043b-4c56-a271-261d96a2e4f7\") " pod="openshift-console/console-6c699958d9-6qrdl" Mar 18 09:09:55.600087 master-0 kubenswrapper[28766]: I0318 09:09:55.600003 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/c0d14eb4-043b-4c56-a271-261d96a2e4f7-service-ca\") pod \"console-6c699958d9-6qrdl\" (UID: \"c0d14eb4-043b-4c56-a271-261d96a2e4f7\") " pod="openshift-console/console-6c699958d9-6qrdl" Mar 18 09:09:55.600087 master-0 kubenswrapper[28766]: I0318 09:09:55.600043 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/c0d14eb4-043b-4c56-a271-261d96a2e4f7-console-oauth-config\") pod \"console-6c699958d9-6qrdl\" (UID: \"c0d14eb4-043b-4c56-a271-261d96a2e4f7\") " pod="openshift-console/console-6c699958d9-6qrdl" Mar 18 09:09:55.600198 master-0 kubenswrapper[28766]: I0318 09:09:55.600155 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/c0d14eb4-043b-4c56-a271-261d96a2e4f7-service-ca\") pod 
\"console-6c699958d9-6qrdl\" (UID: \"c0d14eb4-043b-4c56-a271-261d96a2e4f7\") " pod="openshift-console/console-6c699958d9-6qrdl" Mar 18 09:09:55.600405 master-0 kubenswrapper[28766]: I0318 09:09:55.600359 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c0d14eb4-043b-4c56-a271-261d96a2e4f7-trusted-ca-bundle\") pod \"console-6c699958d9-6qrdl\" (UID: \"c0d14eb4-043b-4c56-a271-261d96a2e4f7\") " pod="openshift-console/console-6c699958d9-6qrdl" Mar 18 09:09:55.600582 master-0 kubenswrapper[28766]: I0318 09:09:55.600547 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-szx55\" (UniqueName: \"kubernetes.io/projected/c0d14eb4-043b-4c56-a271-261d96a2e4f7-kube-api-access-szx55\") pod \"console-6c699958d9-6qrdl\" (UID: \"c0d14eb4-043b-4c56-a271-261d96a2e4f7\") " pod="openshift-console/console-6c699958d9-6qrdl" Mar 18 09:09:55.600663 master-0 kubenswrapper[28766]: I0318 09:09:55.600607 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/c0d14eb4-043b-4c56-a271-261d96a2e4f7-console-serving-cert\") pod \"console-6c699958d9-6qrdl\" (UID: \"c0d14eb4-043b-4c56-a271-261d96a2e4f7\") " pod="openshift-console/console-6c699958d9-6qrdl" Mar 18 09:09:55.600725 master-0 kubenswrapper[28766]: I0318 09:09:55.600648 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/c0d14eb4-043b-4c56-a271-261d96a2e4f7-console-config\") pod \"console-6c699958d9-6qrdl\" (UID: \"c0d14eb4-043b-4c56-a271-261d96a2e4f7\") " pod="openshift-console/console-6c699958d9-6qrdl" Mar 18 09:09:55.604705 master-0 kubenswrapper[28766]: I0318 09:09:55.603781 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: 
\"kubernetes.io/secret/c0d14eb4-043b-4c56-a271-261d96a2e4f7-console-oauth-config\") pod \"console-6c699958d9-6qrdl\" (UID: \"c0d14eb4-043b-4c56-a271-261d96a2e4f7\") " pod="openshift-console/console-6c699958d9-6qrdl" Mar 18 09:09:55.604705 master-0 kubenswrapper[28766]: I0318 09:09:55.604530 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/c0d14eb4-043b-4c56-a271-261d96a2e4f7-console-serving-cert\") pod \"console-6c699958d9-6qrdl\" (UID: \"c0d14eb4-043b-4c56-a271-261d96a2e4f7\") " pod="openshift-console/console-6c699958d9-6qrdl" Mar 18 09:09:55.615296 master-0 kubenswrapper[28766]: I0318 09:09:55.615187 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-szx55\" (UniqueName: \"kubernetes.io/projected/c0d14eb4-043b-4c56-a271-261d96a2e4f7-kube-api-access-szx55\") pod \"console-6c699958d9-6qrdl\" (UID: \"c0d14eb4-043b-4c56-a271-261d96a2e4f7\") " pod="openshift-console/console-6c699958d9-6qrdl" Mar 18 09:09:55.796314 master-0 kubenswrapper[28766]: I0318 09:09:55.796237 28766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-6c699958d9-6qrdl" Mar 18 09:09:56.284628 master-0 kubenswrapper[28766]: I0318 09:09:56.284548 28766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-6c699958d9-6qrdl"] Mar 18 09:09:56.285256 master-0 kubenswrapper[28766]: W0318 09:09:56.285160 28766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc0d14eb4_043b_4c56_a271_261d96a2e4f7.slice/crio-4b6db67b573f5388d6cc3d6aa815dd21ed28bf7fff6be7818875dc57618855d5 WatchSource:0}: Error finding container 4b6db67b573f5388d6cc3d6aa815dd21ed28bf7fff6be7818875dc57618855d5: Status 404 returned error can't find the container with id 4b6db67b573f5388d6cc3d6aa815dd21ed28bf7fff6be7818875dc57618855d5 Mar 18 09:09:57.098412 master-0 kubenswrapper[28766]: I0318 09:09:57.098324 28766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6c699958d9-6qrdl" event={"ID":"c0d14eb4-043b-4c56-a271-261d96a2e4f7","Type":"ContainerStarted","Data":"6fb4e78dc34b66ddd402e5cbfa9e341c7c34ed73015c9ed632573c6a1068b4f9"} Mar 18 09:09:57.098412 master-0 kubenswrapper[28766]: I0318 09:09:57.098411 28766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6c699958d9-6qrdl" event={"ID":"c0d14eb4-043b-4c56-a271-261d96a2e4f7","Type":"ContainerStarted","Data":"4b6db67b573f5388d6cc3d6aa815dd21ed28bf7fff6be7818875dc57618855d5"} Mar 18 09:09:57.123295 master-0 kubenswrapper[28766]: I0318 09:09:57.122466 28766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-6c699958d9-6qrdl" podStartSLOduration=2.1224455 podStartE2EDuration="2.1224455s" podCreationTimestamp="2026-03-18 09:09:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 09:09:57.117656915 +0000 UTC m=+350.131915581" 
watchObservedRunningTime="2026-03-18 09:09:57.1224455 +0000 UTC m=+350.136704166" Mar 18 09:10:05.796749 master-0 kubenswrapper[28766]: I0318 09:10:05.796621 28766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-6c699958d9-6qrdl" Mar 18 09:10:05.796749 master-0 kubenswrapper[28766]: I0318 09:10:05.796749 28766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-6c699958d9-6qrdl" Mar 18 09:10:05.803262 master-0 kubenswrapper[28766]: I0318 09:10:05.803215 28766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-6c699958d9-6qrdl" Mar 18 09:10:06.192106 master-0 kubenswrapper[28766]: I0318 09:10:06.191932 28766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-6c699958d9-6qrdl" Mar 18 09:10:06.309895 master-0 kubenswrapper[28766]: I0318 09:10:06.307613 28766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-5d98475dc4-pxrzb"] Mar 18 09:10:07.085959 master-0 kubenswrapper[28766]: I0318 09:10:07.085841 28766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/installer-5-master-0"] Mar 18 09:10:07.087311 master-0 kubenswrapper[28766]: I0318 09:10:07.087054 28766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/installer-5-master-0" Mar 18 09:10:07.089683 master-0 kubenswrapper[28766]: I0318 09:10:07.089620 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Mar 18 09:10:07.092325 master-0 kubenswrapper[28766]: I0318 09:10:07.092270 28766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-v6k2v" Mar 18 09:10:07.110531 master-0 kubenswrapper[28766]: I0318 09:10:07.110454 28766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-5-master-0"] Mar 18 09:10:07.131069 master-0 kubenswrapper[28766]: I0318 09:10:07.131003 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0ac062ca-3c0f-4695-88f9-429c01f79169-kubelet-dir\") pod \"installer-5-master-0\" (UID: \"0ac062ca-3c0f-4695-88f9-429c01f79169\") " pod="openshift-kube-controller-manager/installer-5-master-0" Mar 18 09:10:07.131569 master-0 kubenswrapper[28766]: I0318 09:10:07.131528 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0ac062ca-3c0f-4695-88f9-429c01f79169-kube-api-access\") pod \"installer-5-master-0\" (UID: \"0ac062ca-3c0f-4695-88f9-429c01f79169\") " pod="openshift-kube-controller-manager/installer-5-master-0" Mar 18 09:10:07.131922 master-0 kubenswrapper[28766]: I0318 09:10:07.131892 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/0ac062ca-3c0f-4695-88f9-429c01f79169-var-lock\") pod \"installer-5-master-0\" (UID: \"0ac062ca-3c0f-4695-88f9-429c01f79169\") " pod="openshift-kube-controller-manager/installer-5-master-0" Mar 18 09:10:07.234592 master-0 
kubenswrapper[28766]: I0318 09:10:07.234481 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/0ac062ca-3c0f-4695-88f9-429c01f79169-var-lock\") pod \"installer-5-master-0\" (UID: \"0ac062ca-3c0f-4695-88f9-429c01f79169\") " pod="openshift-kube-controller-manager/installer-5-master-0" Mar 18 09:10:07.235154 master-0 kubenswrapper[28766]: I0318 09:10:07.234747 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0ac062ca-3c0f-4695-88f9-429c01f79169-kubelet-dir\") pod \"installer-5-master-0\" (UID: \"0ac062ca-3c0f-4695-88f9-429c01f79169\") " pod="openshift-kube-controller-manager/installer-5-master-0" Mar 18 09:10:07.235154 master-0 kubenswrapper[28766]: I0318 09:10:07.235136 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0ac062ca-3c0f-4695-88f9-429c01f79169-kube-api-access\") pod \"installer-5-master-0\" (UID: \"0ac062ca-3c0f-4695-88f9-429c01f79169\") " pod="openshift-kube-controller-manager/installer-5-master-0" Mar 18 09:10:07.237096 master-0 kubenswrapper[28766]: I0318 09:10:07.237014 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0ac062ca-3c0f-4695-88f9-429c01f79169-kubelet-dir\") pod \"installer-5-master-0\" (UID: \"0ac062ca-3c0f-4695-88f9-429c01f79169\") " pod="openshift-kube-controller-manager/installer-5-master-0" Mar 18 09:10:07.241659 master-0 kubenswrapper[28766]: I0318 09:10:07.241599 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/0ac062ca-3c0f-4695-88f9-429c01f79169-var-lock\") pod \"installer-5-master-0\" (UID: \"0ac062ca-3c0f-4695-88f9-429c01f79169\") " pod="openshift-kube-controller-manager/installer-5-master-0" Mar 18 09:10:07.262212 
master-0 kubenswrapper[28766]: I0318 09:10:07.262130 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Mar 18 09:10:07.279138 master-0 kubenswrapper[28766]: I0318 09:10:07.279076 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0ac062ca-3c0f-4695-88f9-429c01f79169-kube-api-access\") pod \"installer-5-master-0\" (UID: \"0ac062ca-3c0f-4695-88f9-429c01f79169\") " pod="openshift-kube-controller-manager/installer-5-master-0" Mar 18 09:10:07.420542 master-0 kubenswrapper[28766]: I0318 09:10:07.420369 28766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-v6k2v" Mar 18 09:10:07.428616 master-0 kubenswrapper[28766]: I0318 09:10:07.428549 28766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-5-master-0" Mar 18 09:10:07.904781 master-0 kubenswrapper[28766]: I0318 09:10:07.904691 28766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-5-master-0"] Mar 18 09:10:07.911116 master-0 kubenswrapper[28766]: W0318 09:10:07.911041 28766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod0ac062ca_3c0f_4695_88f9_429c01f79169.slice/crio-4bc43eedea6f0bba97047c445396cfbece342bd9bd46256796201772a206170d WatchSource:0}: Error finding container 4bc43eedea6f0bba97047c445396cfbece342bd9bd46256796201772a206170d: Status 404 returned error can't find the container with id 4bc43eedea6f0bba97047c445396cfbece342bd9bd46256796201772a206170d Mar 18 09:10:08.209611 master-0 kubenswrapper[28766]: I0318 09:10:08.209300 28766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-5-master-0" 
event={"ID":"0ac062ca-3c0f-4695-88f9-429c01f79169","Type":"ContainerStarted","Data":"4bc43eedea6f0bba97047c445396cfbece342bd9bd46256796201772a206170d"} Mar 18 09:10:09.218994 master-0 kubenswrapper[28766]: I0318 09:10:09.218872 28766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-5-master-0" event={"ID":"0ac062ca-3c0f-4695-88f9-429c01f79169","Type":"ContainerStarted","Data":"676ed9acf292ebaa8ef6954335549c0cb6a32a8bb08d403196d310b8fc9c6007"} Mar 18 09:10:09.251988 master-0 kubenswrapper[28766]: I0318 09:10:09.251788 28766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/installer-5-master-0" podStartSLOduration=2.251750492 podStartE2EDuration="2.251750492s" podCreationTimestamp="2026-03-18 09:10:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 09:10:09.246262929 +0000 UTC m=+362.260521595" watchObservedRunningTime="2026-03-18 09:10:09.251750492 +0000 UTC m=+362.266009228" Mar 18 09:10:31.368641 master-0 kubenswrapper[28766]: I0318 09:10:31.368509 28766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-5d98475dc4-pxrzb" podUID="2cc1dd11-2b02-4e44-87da-192703ee51c4" containerName="console" containerID="cri-o://d87cc1c6297175b4511a2b59b82264d6815e501833df6db147ef8663c6818889" gracePeriod=15 Mar 18 09:10:31.852838 master-0 kubenswrapper[28766]: I0318 09:10:31.852767 28766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-5d98475dc4-pxrzb_2cc1dd11-2b02-4e44-87da-192703ee51c4/console/0.log" Mar 18 09:10:31.853155 master-0 kubenswrapper[28766]: I0318 09:10:31.852880 28766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-5d98475dc4-pxrzb" Mar 18 09:10:31.903308 master-0 kubenswrapper[28766]: I0318 09:10:31.903205 28766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/2cc1dd11-2b02-4e44-87da-192703ee51c4-console-serving-cert\") pod \"2cc1dd11-2b02-4e44-87da-192703ee51c4\" (UID: \"2cc1dd11-2b02-4e44-87da-192703ee51c4\") " Mar 18 09:10:31.903586 master-0 kubenswrapper[28766]: I0318 09:10:31.903421 28766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/2cc1dd11-2b02-4e44-87da-192703ee51c4-oauth-serving-cert\") pod \"2cc1dd11-2b02-4e44-87da-192703ee51c4\" (UID: \"2cc1dd11-2b02-4e44-87da-192703ee51c4\") " Mar 18 09:10:31.903586 master-0 kubenswrapper[28766]: I0318 09:10:31.903545 28766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/2cc1dd11-2b02-4e44-87da-192703ee51c4-console-config\") pod \"2cc1dd11-2b02-4e44-87da-192703ee51c4\" (UID: \"2cc1dd11-2b02-4e44-87da-192703ee51c4\") " Mar 18 09:10:31.903734 master-0 kubenswrapper[28766]: I0318 09:10:31.903654 28766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/2cc1dd11-2b02-4e44-87da-192703ee51c4-service-ca\") pod \"2cc1dd11-2b02-4e44-87da-192703ee51c4\" (UID: \"2cc1dd11-2b02-4e44-87da-192703ee51c4\") " Mar 18 09:10:31.904506 master-0 kubenswrapper[28766]: I0318 09:10:31.904403 28766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2cc1dd11-2b02-4e44-87da-192703ee51c4-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "2cc1dd11-2b02-4e44-87da-192703ee51c4" (UID: "2cc1dd11-2b02-4e44-87da-192703ee51c4"). InnerVolumeSpecName "oauth-serving-cert". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 09:10:31.904506 master-0 kubenswrapper[28766]: I0318 09:10:31.904420 28766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2cc1dd11-2b02-4e44-87da-192703ee51c4-console-config" (OuterVolumeSpecName: "console-config") pod "2cc1dd11-2b02-4e44-87da-192703ee51c4" (UID: "2cc1dd11-2b02-4e44-87da-192703ee51c4"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 09:10:31.904622 master-0 kubenswrapper[28766]: I0318 09:10:31.904585 28766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2cc1dd11-2b02-4e44-87da-192703ee51c4-service-ca" (OuterVolumeSpecName: "service-ca") pod "2cc1dd11-2b02-4e44-87da-192703ee51c4" (UID: "2cc1dd11-2b02-4e44-87da-192703ee51c4"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 09:10:31.907714 master-0 kubenswrapper[28766]: I0318 09:10:31.907666 28766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2cc1dd11-2b02-4e44-87da-192703ee51c4-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "2cc1dd11-2b02-4e44-87da-192703ee51c4" (UID: "2cc1dd11-2b02-4e44-87da-192703ee51c4"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 09:10:32.005385 master-0 kubenswrapper[28766]: I0318 09:10:32.005283 28766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/2cc1dd11-2b02-4e44-87da-192703ee51c4-console-oauth-config\") pod \"2cc1dd11-2b02-4e44-87da-192703ee51c4\" (UID: \"2cc1dd11-2b02-4e44-87da-192703ee51c4\") " Mar 18 09:10:32.005385 master-0 kubenswrapper[28766]: I0318 09:10:32.005400 28766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2cc1dd11-2b02-4e44-87da-192703ee51c4-trusted-ca-bundle\") pod \"2cc1dd11-2b02-4e44-87da-192703ee51c4\" (UID: \"2cc1dd11-2b02-4e44-87da-192703ee51c4\") " Mar 18 09:10:32.005689 master-0 kubenswrapper[28766]: I0318 09:10:32.005508 28766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-927j7\" (UniqueName: \"kubernetes.io/projected/2cc1dd11-2b02-4e44-87da-192703ee51c4-kube-api-access-927j7\") pod \"2cc1dd11-2b02-4e44-87da-192703ee51c4\" (UID: \"2cc1dd11-2b02-4e44-87da-192703ee51c4\") " Mar 18 09:10:32.006171 master-0 kubenswrapper[28766]: I0318 09:10:32.006100 28766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2cc1dd11-2b02-4e44-87da-192703ee51c4-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "2cc1dd11-2b02-4e44-87da-192703ee51c4" (UID: "2cc1dd11-2b02-4e44-87da-192703ee51c4"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 09:10:32.006262 master-0 kubenswrapper[28766]: I0318 09:10:32.006228 28766 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/2cc1dd11-2b02-4e44-87da-192703ee51c4-oauth-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 18 09:10:32.006302 master-0 kubenswrapper[28766]: I0318 09:10:32.006263 28766 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/2cc1dd11-2b02-4e44-87da-192703ee51c4-console-config\") on node \"master-0\" DevicePath \"\"" Mar 18 09:10:32.006302 master-0 kubenswrapper[28766]: I0318 09:10:32.006286 28766 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/2cc1dd11-2b02-4e44-87da-192703ee51c4-service-ca\") on node \"master-0\" DevicePath \"\"" Mar 18 09:10:32.006360 master-0 kubenswrapper[28766]: I0318 09:10:32.006305 28766 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/2cc1dd11-2b02-4e44-87da-192703ee51c4-console-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 18 09:10:32.008519 master-0 kubenswrapper[28766]: I0318 09:10:32.008432 28766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2cc1dd11-2b02-4e44-87da-192703ee51c4-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "2cc1dd11-2b02-4e44-87da-192703ee51c4" (UID: "2cc1dd11-2b02-4e44-87da-192703ee51c4"). InnerVolumeSpecName "console-oauth-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 09:10:32.009707 master-0 kubenswrapper[28766]: I0318 09:10:32.009461 28766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2cc1dd11-2b02-4e44-87da-192703ee51c4-kube-api-access-927j7" (OuterVolumeSpecName: "kube-api-access-927j7") pod "2cc1dd11-2b02-4e44-87da-192703ee51c4" (UID: "2cc1dd11-2b02-4e44-87da-192703ee51c4"). InnerVolumeSpecName "kube-api-access-927j7". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 09:10:32.107677 master-0 kubenswrapper[28766]: I0318 09:10:32.107551 28766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-927j7\" (UniqueName: \"kubernetes.io/projected/2cc1dd11-2b02-4e44-87da-192703ee51c4-kube-api-access-927j7\") on node \"master-0\" DevicePath \"\"" Mar 18 09:10:32.107677 master-0 kubenswrapper[28766]: I0318 09:10:32.107590 28766 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/2cc1dd11-2b02-4e44-87da-192703ee51c4-console-oauth-config\") on node \"master-0\" DevicePath \"\"" Mar 18 09:10:32.107677 master-0 kubenswrapper[28766]: I0318 09:10:32.107600 28766 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2cc1dd11-2b02-4e44-87da-192703ee51c4-trusted-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 18 09:10:32.492604 master-0 kubenswrapper[28766]: I0318 09:10:32.492559 28766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-5d98475dc4-pxrzb_2cc1dd11-2b02-4e44-87da-192703ee51c4/console/0.log" Mar 18 09:10:32.493817 master-0 kubenswrapper[28766]: I0318 09:10:32.492615 28766 generic.go:334] "Generic (PLEG): container finished" podID="2cc1dd11-2b02-4e44-87da-192703ee51c4" containerID="d87cc1c6297175b4511a2b59b82264d6815e501833df6db147ef8663c6818889" exitCode=2 Mar 18 09:10:32.493817 master-0 kubenswrapper[28766]: I0318 
09:10:32.492643 28766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5d98475dc4-pxrzb" event={"ID":"2cc1dd11-2b02-4e44-87da-192703ee51c4","Type":"ContainerDied","Data":"d87cc1c6297175b4511a2b59b82264d6815e501833df6db147ef8663c6818889"} Mar 18 09:10:32.493817 master-0 kubenswrapper[28766]: I0318 09:10:32.492676 28766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5d98475dc4-pxrzb" event={"ID":"2cc1dd11-2b02-4e44-87da-192703ee51c4","Type":"ContainerDied","Data":"358cdafd2820538c97eda87b1d23d0f6633403e241ca630f4a6d9d80ab6a5ec3"} Mar 18 09:10:32.493817 master-0 kubenswrapper[28766]: I0318 09:10:32.492694 28766 scope.go:117] "RemoveContainer" containerID="d87cc1c6297175b4511a2b59b82264d6815e501833df6db147ef8663c6818889" Mar 18 09:10:32.493817 master-0 kubenswrapper[28766]: I0318 09:10:32.492729 28766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-5d98475dc4-pxrzb" Mar 18 09:10:32.524962 master-0 kubenswrapper[28766]: I0318 09:10:32.524798 28766 scope.go:117] "RemoveContainer" containerID="d87cc1c6297175b4511a2b59b82264d6815e501833df6db147ef8663c6818889" Mar 18 09:10:32.525835 master-0 kubenswrapper[28766]: E0318 09:10:32.525647 28766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d87cc1c6297175b4511a2b59b82264d6815e501833df6db147ef8663c6818889\": container with ID starting with d87cc1c6297175b4511a2b59b82264d6815e501833df6db147ef8663c6818889 not found: ID does not exist" containerID="d87cc1c6297175b4511a2b59b82264d6815e501833df6db147ef8663c6818889" Mar 18 09:10:32.525835 master-0 kubenswrapper[28766]: I0318 09:10:32.525683 28766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d87cc1c6297175b4511a2b59b82264d6815e501833df6db147ef8663c6818889"} err="failed to get container status 
\"d87cc1c6297175b4511a2b59b82264d6815e501833df6db147ef8663c6818889\": rpc error: code = NotFound desc = could not find container \"d87cc1c6297175b4511a2b59b82264d6815e501833df6db147ef8663c6818889\": container with ID starting with d87cc1c6297175b4511a2b59b82264d6815e501833df6db147ef8663c6818889 not found: ID does not exist" Mar 18 09:10:32.543944 master-0 kubenswrapper[28766]: I0318 09:10:32.543824 28766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-5d98475dc4-pxrzb"] Mar 18 09:10:32.551185 master-0 kubenswrapper[28766]: I0318 09:10:32.551115 28766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-5d98475dc4-pxrzb"] Mar 18 09:10:33.259169 master-0 kubenswrapper[28766]: I0318 09:10:33.259099 28766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2cc1dd11-2b02-4e44-87da-192703ee51c4" path="/var/lib/kubelet/pods/2cc1dd11-2b02-4e44-87da-192703ee51c4/volumes" Mar 18 09:10:33.297335 master-0 kubenswrapper[28766]: I0318 09:10:33.297286 28766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/metrics-server-59f88c66c8-z4c2f" Mar 18 09:10:33.439224 master-0 kubenswrapper[28766]: I0318 09:10:33.439161 28766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/5320a1da-262a-4b1b-93b4-1df9d4c26eec-audit-log\") pod \"5320a1da-262a-4b1b-93b4-1df9d4c26eec\" (UID: \"5320a1da-262a-4b1b-93b4-1df9d4c26eec\") " Mar 18 09:10:33.439704 master-0 kubenswrapper[28766]: I0318 09:10:33.439666 28766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5320a1da-262a-4b1b-93b4-1df9d4c26eec-configmap-kubelet-serving-ca-bundle\") pod \"5320a1da-262a-4b1b-93b4-1df9d4c26eec\" (UID: \"5320a1da-262a-4b1b-93b4-1df9d4c26eec\") " Mar 18 09:10:33.439916 master-0 kubenswrapper[28766]: I0318 09:10:33.439829 28766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5320a1da-262a-4b1b-93b4-1df9d4c26eec-audit-log" (OuterVolumeSpecName: "audit-log") pod "5320a1da-262a-4b1b-93b4-1df9d4c26eec" (UID: "5320a1da-262a-4b1b-93b4-1df9d4c26eec"). InnerVolumeSpecName "audit-log". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 18 09:10:33.440131 master-0 kubenswrapper[28766]: I0318 09:10:33.440100 28766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5320a1da-262a-4b1b-93b4-1df9d4c26eec-client-ca-bundle\") pod \"5320a1da-262a-4b1b-93b4-1df9d4c26eec\" (UID: \"5320a1da-262a-4b1b-93b4-1df9d4c26eec\") " Mar 18 09:10:33.440342 master-0 kubenswrapper[28766]: I0318 09:10:33.440313 28766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/5320a1da-262a-4b1b-93b4-1df9d4c26eec-secret-metrics-server-tls\") pod \"5320a1da-262a-4b1b-93b4-1df9d4c26eec\" (UID: \"5320a1da-262a-4b1b-93b4-1df9d4c26eec\") " Mar 18 09:10:33.440588 master-0 kubenswrapper[28766]: I0318 09:10:33.440559 28766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/5320a1da-262a-4b1b-93b4-1df9d4c26eec-metrics-server-audit-profiles\") pod \"5320a1da-262a-4b1b-93b4-1df9d4c26eec\" (UID: \"5320a1da-262a-4b1b-93b4-1df9d4c26eec\") " Mar 18 09:10:33.440816 master-0 kubenswrapper[28766]: I0318 09:10:33.440728 28766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5320a1da-262a-4b1b-93b4-1df9d4c26eec-configmap-kubelet-serving-ca-bundle" (OuterVolumeSpecName: "configmap-kubelet-serving-ca-bundle") pod "5320a1da-262a-4b1b-93b4-1df9d4c26eec" (UID: "5320a1da-262a-4b1b-93b4-1df9d4c26eec"). InnerVolumeSpecName "configmap-kubelet-serving-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 09:10:33.440975 master-0 kubenswrapper[28766]: I0318 09:10:33.440756 28766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/5320a1da-262a-4b1b-93b4-1df9d4c26eec-secret-metrics-client-certs\") pod \"5320a1da-262a-4b1b-93b4-1df9d4c26eec\" (UID: \"5320a1da-262a-4b1b-93b4-1df9d4c26eec\") " Mar 18 09:10:33.441062 master-0 kubenswrapper[28766]: I0318 09:10:33.440961 28766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9q8l2\" (UniqueName: \"kubernetes.io/projected/5320a1da-262a-4b1b-93b4-1df9d4c26eec-kube-api-access-9q8l2\") pod \"5320a1da-262a-4b1b-93b4-1df9d4c26eec\" (UID: \"5320a1da-262a-4b1b-93b4-1df9d4c26eec\") " Mar 18 09:10:33.441696 master-0 kubenswrapper[28766]: I0318 09:10:33.441617 28766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5320a1da-262a-4b1b-93b4-1df9d4c26eec-metrics-server-audit-profiles" (OuterVolumeSpecName: "metrics-server-audit-profiles") pod "5320a1da-262a-4b1b-93b4-1df9d4c26eec" (UID: "5320a1da-262a-4b1b-93b4-1df9d4c26eec"). InnerVolumeSpecName "metrics-server-audit-profiles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 09:10:33.442372 master-0 kubenswrapper[28766]: I0318 09:10:33.442302 28766 reconciler_common.go:293] "Volume detached for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/5320a1da-262a-4b1b-93b4-1df9d4c26eec-metrics-server-audit-profiles\") on node \"master-0\" DevicePath \"\"" Mar 18 09:10:33.442483 master-0 kubenswrapper[28766]: I0318 09:10:33.442365 28766 reconciler_common.go:293] "Volume detached for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/5320a1da-262a-4b1b-93b4-1df9d4c26eec-audit-log\") on node \"master-0\" DevicePath \"\"" Mar 18 09:10:33.442483 master-0 kubenswrapper[28766]: I0318 09:10:33.442404 28766 reconciler_common.go:293] "Volume detached for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5320a1da-262a-4b1b-93b4-1df9d4c26eec-configmap-kubelet-serving-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 18 09:10:33.444621 master-0 kubenswrapper[28766]: I0318 09:10:33.444564 28766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5320a1da-262a-4b1b-93b4-1df9d4c26eec-secret-metrics-server-tls" (OuterVolumeSpecName: "secret-metrics-server-tls") pod "5320a1da-262a-4b1b-93b4-1df9d4c26eec" (UID: "5320a1da-262a-4b1b-93b4-1df9d4c26eec"). InnerVolumeSpecName "secret-metrics-server-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 09:10:33.444966 master-0 kubenswrapper[28766]: I0318 09:10:33.444925 28766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5320a1da-262a-4b1b-93b4-1df9d4c26eec-secret-metrics-client-certs" (OuterVolumeSpecName: "secret-metrics-client-certs") pod "5320a1da-262a-4b1b-93b4-1df9d4c26eec" (UID: "5320a1da-262a-4b1b-93b4-1df9d4c26eec"). InnerVolumeSpecName "secret-metrics-client-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 09:10:33.445922 master-0 kubenswrapper[28766]: I0318 09:10:33.445656 28766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5320a1da-262a-4b1b-93b4-1df9d4c26eec-client-ca-bundle" (OuterVolumeSpecName: "client-ca-bundle") pod "5320a1da-262a-4b1b-93b4-1df9d4c26eec" (UID: "5320a1da-262a-4b1b-93b4-1df9d4c26eec"). InnerVolumeSpecName "client-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 09:10:33.446060 master-0 kubenswrapper[28766]: I0318 09:10:33.446004 28766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5320a1da-262a-4b1b-93b4-1df9d4c26eec-kube-api-access-9q8l2" (OuterVolumeSpecName: "kube-api-access-9q8l2") pod "5320a1da-262a-4b1b-93b4-1df9d4c26eec" (UID: "5320a1da-262a-4b1b-93b4-1df9d4c26eec"). InnerVolumeSpecName "kube-api-access-9q8l2". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 09:10:33.501971 master-0 kubenswrapper[28766]: I0318 09:10:33.501765 28766 generic.go:334] "Generic (PLEG): container finished" podID="5320a1da-262a-4b1b-93b4-1df9d4c26eec" containerID="8da1b208d66e950e641af5f888552a342bf881708d91891a3c2cad7c27648319" exitCode=0 Mar 18 09:10:33.501971 master-0 kubenswrapper[28766]: I0318 09:10:33.501830 28766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/metrics-server-59f88c66c8-z4c2f" Mar 18 09:10:33.501971 master-0 kubenswrapper[28766]: I0318 09:10:33.501883 28766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-59f88c66c8-z4c2f" event={"ID":"5320a1da-262a-4b1b-93b4-1df9d4c26eec","Type":"ContainerDied","Data":"8da1b208d66e950e641af5f888552a342bf881708d91891a3c2cad7c27648319"} Mar 18 09:10:33.501971 master-0 kubenswrapper[28766]: I0318 09:10:33.501943 28766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-59f88c66c8-z4c2f" event={"ID":"5320a1da-262a-4b1b-93b4-1df9d4c26eec","Type":"ContainerDied","Data":"08c69ca72893cd876b16b5740d0ac91db39852d0fe47a473761270d55d7436d0"} Mar 18 09:10:33.501971 master-0 kubenswrapper[28766]: I0318 09:10:33.501967 28766 scope.go:117] "RemoveContainer" containerID="8da1b208d66e950e641af5f888552a342bf881708d91891a3c2cad7c27648319" Mar 18 09:10:33.532730 master-0 kubenswrapper[28766]: I0318 09:10:33.532679 28766 scope.go:117] "RemoveContainer" containerID="8da1b208d66e950e641af5f888552a342bf881708d91891a3c2cad7c27648319" Mar 18 09:10:33.533181 master-0 kubenswrapper[28766]: E0318 09:10:33.533153 28766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8da1b208d66e950e641af5f888552a342bf881708d91891a3c2cad7c27648319\": container with ID starting with 8da1b208d66e950e641af5f888552a342bf881708d91891a3c2cad7c27648319 not found: ID does not exist" containerID="8da1b208d66e950e641af5f888552a342bf881708d91891a3c2cad7c27648319" Mar 18 09:10:33.533277 master-0 kubenswrapper[28766]: I0318 09:10:33.533183 28766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8da1b208d66e950e641af5f888552a342bf881708d91891a3c2cad7c27648319"} err="failed to get container status \"8da1b208d66e950e641af5f888552a342bf881708d91891a3c2cad7c27648319\": rpc error: code = 
NotFound desc = could not find container \"8da1b208d66e950e641af5f888552a342bf881708d91891a3c2cad7c27648319\": container with ID starting with 8da1b208d66e950e641af5f888552a342bf881708d91891a3c2cad7c27648319 not found: ID does not exist" Mar 18 09:10:33.545973 master-0 kubenswrapper[28766]: I0318 09:10:33.545820 28766 reconciler_common.go:293] "Volume detached for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5320a1da-262a-4b1b-93b4-1df9d4c26eec-client-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 18 09:10:33.545973 master-0 kubenswrapper[28766]: I0318 09:10:33.545928 28766 reconciler_common.go:293] "Volume detached for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/5320a1da-262a-4b1b-93b4-1df9d4c26eec-secret-metrics-server-tls\") on node \"master-0\" DevicePath \"\"" Mar 18 09:10:33.545973 master-0 kubenswrapper[28766]: I0318 09:10:33.545944 28766 reconciler_common.go:293] "Volume detached for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/5320a1da-262a-4b1b-93b4-1df9d4c26eec-secret-metrics-client-certs\") on node \"master-0\" DevicePath \"\"" Mar 18 09:10:33.545973 master-0 kubenswrapper[28766]: I0318 09:10:33.545957 28766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9q8l2\" (UniqueName: \"kubernetes.io/projected/5320a1da-262a-4b1b-93b4-1df9d4c26eec-kube-api-access-9q8l2\") on node \"master-0\" DevicePath \"\"" Mar 18 09:10:33.554783 master-0 kubenswrapper[28766]: I0318 09:10:33.554735 28766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-monitoring/metrics-server-59f88c66c8-z4c2f"] Mar 18 09:10:33.559530 master-0 kubenswrapper[28766]: I0318 09:10:33.559477 28766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-monitoring/metrics-server-59f88c66c8-z4c2f"] Mar 18 09:10:35.247337 master-0 kubenswrapper[28766]: I0318 09:10:35.247252 28766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="5320a1da-262a-4b1b-93b4-1df9d4c26eec" path="/var/lib/kubelet/pods/5320a1da-262a-4b1b-93b4-1df9d4c26eec/volumes" Mar 18 09:10:41.001120 master-0 kubenswrapper[28766]: I0318 09:10:41.001034 28766 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"] Mar 18 09:10:41.002220 master-0 kubenswrapper[28766]: I0318 09:10:41.001348 28766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="221b44bcdfcd6cb77b8e2c3e2f0f2d4d" containerName="cluster-policy-controller" containerID="cri-o://746628ca0f199826dfd3bc06cf87dbdd87e54d430ca10bd219daa6085cbf8a4d" gracePeriod=30 Mar 18 09:10:41.002220 master-0 kubenswrapper[28766]: I0318 09:10:41.001459 28766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="221b44bcdfcd6cb77b8e2c3e2f0f2d4d" containerName="kube-controller-manager-recovery-controller" containerID="cri-o://4af328de55b37c7b433e572f442808a77f2decb59e8d9510022e17913ed81d1a" gracePeriod=30 Mar 18 09:10:41.002220 master-0 kubenswrapper[28766]: I0318 09:10:41.001498 28766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="221b44bcdfcd6cb77b8e2c3e2f0f2d4d" containerName="kube-controller-manager" containerID="cri-o://dbff8077fc42ade1e74dbf92d755831f93316118c2e6792cfe86eff898126971" gracePeriod=30 Mar 18 09:10:41.002220 master-0 kubenswrapper[28766]: I0318 09:10:41.001470 28766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="221b44bcdfcd6cb77b8e2c3e2f0f2d4d" containerName="kube-controller-manager-cert-syncer" containerID="cri-o://05af700b075e3edb59ffbfea5410d94b5accce0f9865867a900b8da32596bc13" gracePeriod=30 
Mar 18 09:10:41.005699 master-0 kubenswrapper[28766]: I0318 09:10:41.005604 28766 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"] Mar 18 09:10:41.006379 master-0 kubenswrapper[28766]: E0318 09:10:41.006297 28766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2cc1dd11-2b02-4e44-87da-192703ee51c4" containerName="console" Mar 18 09:10:41.006379 master-0 kubenswrapper[28766]: I0318 09:10:41.006338 28766 state_mem.go:107] "Deleted CPUSet assignment" podUID="2cc1dd11-2b02-4e44-87da-192703ee51c4" containerName="console" Mar 18 09:10:41.006818 master-0 kubenswrapper[28766]: E0318 09:10:41.006404 28766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="221b44bcdfcd6cb77b8e2c3e2f0f2d4d" containerName="kube-controller-manager-recovery-controller" Mar 18 09:10:41.006818 master-0 kubenswrapper[28766]: I0318 09:10:41.006421 28766 state_mem.go:107] "Deleted CPUSet assignment" podUID="221b44bcdfcd6cb77b8e2c3e2f0f2d4d" containerName="kube-controller-manager-recovery-controller" Mar 18 09:10:41.006818 master-0 kubenswrapper[28766]: E0318 09:10:41.006443 28766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="221b44bcdfcd6cb77b8e2c3e2f0f2d4d" containerName="kube-controller-manager-cert-syncer" Mar 18 09:10:41.006818 master-0 kubenswrapper[28766]: I0318 09:10:41.006458 28766 state_mem.go:107] "Deleted CPUSet assignment" podUID="221b44bcdfcd6cb77b8e2c3e2f0f2d4d" containerName="kube-controller-manager-cert-syncer" Mar 18 09:10:41.006818 master-0 kubenswrapper[28766]: E0318 09:10:41.006487 28766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="221b44bcdfcd6cb77b8e2c3e2f0f2d4d" containerName="kube-controller-manager" Mar 18 09:10:41.006818 master-0 kubenswrapper[28766]: I0318 09:10:41.006500 28766 state_mem.go:107] "Deleted CPUSet assignment" podUID="221b44bcdfcd6cb77b8e2c3e2f0f2d4d" containerName="kube-controller-manager" Mar 18 09:10:41.006818 
master-0 kubenswrapper[28766]: E0318 09:10:41.006526 28766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5320a1da-262a-4b1b-93b4-1df9d4c26eec" containerName="metrics-server" Mar 18 09:10:41.006818 master-0 kubenswrapper[28766]: I0318 09:10:41.006539 28766 state_mem.go:107] "Deleted CPUSet assignment" podUID="5320a1da-262a-4b1b-93b4-1df9d4c26eec" containerName="metrics-server" Mar 18 09:10:41.006818 master-0 kubenswrapper[28766]: E0318 09:10:41.006559 28766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="221b44bcdfcd6cb77b8e2c3e2f0f2d4d" containerName="cluster-policy-controller" Mar 18 09:10:41.006818 master-0 kubenswrapper[28766]: I0318 09:10:41.006573 28766 state_mem.go:107] "Deleted CPUSet assignment" podUID="221b44bcdfcd6cb77b8e2c3e2f0f2d4d" containerName="cluster-policy-controller" Mar 18 09:10:41.006818 master-0 kubenswrapper[28766]: E0318 09:10:41.006603 28766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="221b44bcdfcd6cb77b8e2c3e2f0f2d4d" containerName="kube-controller-manager" Mar 18 09:10:41.006818 master-0 kubenswrapper[28766]: I0318 09:10:41.006616 28766 state_mem.go:107] "Deleted CPUSet assignment" podUID="221b44bcdfcd6cb77b8e2c3e2f0f2d4d" containerName="kube-controller-manager" Mar 18 09:10:41.019934 master-0 kubenswrapper[28766]: I0318 09:10:41.009050 28766 memory_manager.go:354] "RemoveStaleState removing state" podUID="221b44bcdfcd6cb77b8e2c3e2f0f2d4d" containerName="kube-controller-manager" Mar 18 09:10:41.019934 master-0 kubenswrapper[28766]: I0318 09:10:41.009190 28766 memory_manager.go:354] "RemoveStaleState removing state" podUID="221b44bcdfcd6cb77b8e2c3e2f0f2d4d" containerName="kube-controller-manager-recovery-controller" Mar 18 09:10:41.019934 master-0 kubenswrapper[28766]: I0318 09:10:41.009272 28766 memory_manager.go:354] "RemoveStaleState removing state" podUID="2cc1dd11-2b02-4e44-87da-192703ee51c4" containerName="console" Mar 18 09:10:41.019934 master-0 kubenswrapper[28766]: 
I0318 09:10:41.009328 28766 memory_manager.go:354] "RemoveStaleState removing state" podUID="221b44bcdfcd6cb77b8e2c3e2f0f2d4d" containerName="cluster-policy-controller" Mar 18 09:10:41.019934 master-0 kubenswrapper[28766]: I0318 09:10:41.009387 28766 memory_manager.go:354] "RemoveStaleState removing state" podUID="5320a1da-262a-4b1b-93b4-1df9d4c26eec" containerName="metrics-server" Mar 18 09:10:41.019934 master-0 kubenswrapper[28766]: I0318 09:10:41.009418 28766 memory_manager.go:354] "RemoveStaleState removing state" podUID="221b44bcdfcd6cb77b8e2c3e2f0f2d4d" containerName="kube-controller-manager-cert-syncer" Mar 18 09:10:41.023008 master-0 kubenswrapper[28766]: I0318 09:10:41.021761 28766 memory_manager.go:354] "RemoveStaleState removing state" podUID="221b44bcdfcd6cb77b8e2c3e2f0f2d4d" containerName="kube-controller-manager" Mar 18 09:10:41.197159 master-0 kubenswrapper[28766]: I0318 09:10:41.196713 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/1b0af84e08c0ebb6ef970331bd9379be-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"1b0af84e08c0ebb6ef970331bd9379be\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 09:10:41.197159 master-0 kubenswrapper[28766]: I0318 09:10:41.196805 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/1b0af84e08c0ebb6ef970331bd9379be-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"1b0af84e08c0ebb6ef970331bd9379be\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 09:10:41.298801 master-0 kubenswrapper[28766]: I0318 09:10:41.298627 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/1b0af84e08c0ebb6ef970331bd9379be-cert-dir\") pod 
\"kube-controller-manager-master-0\" (UID: \"1b0af84e08c0ebb6ef970331bd9379be\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 09:10:41.298801 master-0 kubenswrapper[28766]: I0318 09:10:41.298713 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/1b0af84e08c0ebb6ef970331bd9379be-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"1b0af84e08c0ebb6ef970331bd9379be\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 09:10:41.299451 master-0 kubenswrapper[28766]: I0318 09:10:41.298805 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/1b0af84e08c0ebb6ef970331bd9379be-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"1b0af84e08c0ebb6ef970331bd9379be\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 09:10:41.299451 master-0 kubenswrapper[28766]: I0318 09:10:41.299038 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/1b0af84e08c0ebb6ef970331bd9379be-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"1b0af84e08c0ebb6ef970331bd9379be\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 09:10:41.300264 master-0 kubenswrapper[28766]: I0318 09:10:41.300199 28766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_221b44bcdfcd6cb77b8e2c3e2f0f2d4d/kube-controller-manager-cert-syncer/0.log" Mar 18 09:10:41.301821 master-0 kubenswrapper[28766]: I0318 09:10:41.301693 28766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_221b44bcdfcd6cb77b8e2c3e2f0f2d4d/kube-controller-manager/0.log" Mar 18 09:10:41.301821 master-0 
kubenswrapper[28766]: I0318 09:10:41.301816 28766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 09:10:41.305649 master-0 kubenswrapper[28766]: I0318 09:10:41.305579 28766 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" oldPodUID="221b44bcdfcd6cb77b8e2c3e2f0f2d4d" podUID="1b0af84e08c0ebb6ef970331bd9379be" Mar 18 09:10:41.501839 master-0 kubenswrapper[28766]: I0318 09:10:41.501751 28766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/221b44bcdfcd6cb77b8e2c3e2f0f2d4d-resource-dir\") pod \"221b44bcdfcd6cb77b8e2c3e2f0f2d4d\" (UID: \"221b44bcdfcd6cb77b8e2c3e2f0f2d4d\") " Mar 18 09:10:41.502313 master-0 kubenswrapper[28766]: I0318 09:10:41.502014 28766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/221b44bcdfcd6cb77b8e2c3e2f0f2d4d-cert-dir\") pod \"221b44bcdfcd6cb77b8e2c3e2f0f2d4d\" (UID: \"221b44bcdfcd6cb77b8e2c3e2f0f2d4d\") " Mar 18 09:10:41.502313 master-0 kubenswrapper[28766]: I0318 09:10:41.502214 28766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/221b44bcdfcd6cb77b8e2c3e2f0f2d4d-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "221b44bcdfcd6cb77b8e2c3e2f0f2d4d" (UID: "221b44bcdfcd6cb77b8e2c3e2f0f2d4d"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 09:10:41.502313 master-0 kubenswrapper[28766]: I0318 09:10:41.502296 28766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/221b44bcdfcd6cb77b8e2c3e2f0f2d4d-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "221b44bcdfcd6cb77b8e2c3e2f0f2d4d" (UID: "221b44bcdfcd6cb77b8e2c3e2f0f2d4d"). 
InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 09:10:41.503359 master-0 kubenswrapper[28766]: I0318 09:10:41.503315 28766 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/221b44bcdfcd6cb77b8e2c3e2f0f2d4d-cert-dir\") on node \"master-0\" DevicePath \"\"" Mar 18 09:10:41.503359 master-0 kubenswrapper[28766]: I0318 09:10:41.503347 28766 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/221b44bcdfcd6cb77b8e2c3e2f0f2d4d-resource-dir\") on node \"master-0\" DevicePath \"\"" Mar 18 09:10:41.606263 master-0 kubenswrapper[28766]: I0318 09:10:41.605991 28766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_221b44bcdfcd6cb77b8e2c3e2f0f2d4d/kube-controller-manager-cert-syncer/0.log" Mar 18 09:10:41.607616 master-0 kubenswrapper[28766]: I0318 09:10:41.607546 28766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_221b44bcdfcd6cb77b8e2c3e2f0f2d4d/kube-controller-manager/0.log" Mar 18 09:10:41.607783 master-0 kubenswrapper[28766]: I0318 09:10:41.607633 28766 generic.go:334] "Generic (PLEG): container finished" podID="221b44bcdfcd6cb77b8e2c3e2f0f2d4d" containerID="dbff8077fc42ade1e74dbf92d755831f93316118c2e6792cfe86eff898126971" exitCode=0 Mar 18 09:10:41.607783 master-0 kubenswrapper[28766]: I0318 09:10:41.607664 28766 generic.go:334] "Generic (PLEG): container finished" podID="221b44bcdfcd6cb77b8e2c3e2f0f2d4d" containerID="4af328de55b37c7b433e572f442808a77f2decb59e8d9510022e17913ed81d1a" exitCode=0 Mar 18 09:10:41.607783 master-0 kubenswrapper[28766]: I0318 09:10:41.607681 28766 generic.go:334] "Generic (PLEG): container finished" podID="221b44bcdfcd6cb77b8e2c3e2f0f2d4d" containerID="05af700b075e3edb59ffbfea5410d94b5accce0f9865867a900b8da32596bc13" exitCode=2 Mar 
18 09:10:41.607783 master-0 kubenswrapper[28766]: I0318 09:10:41.607694 28766 generic.go:334] "Generic (PLEG): container finished" podID="221b44bcdfcd6cb77b8e2c3e2f0f2d4d" containerID="746628ca0f199826dfd3bc06cf87dbdd87e54d430ca10bd219daa6085cbf8a4d" exitCode=0 Mar 18 09:10:41.608547 master-0 kubenswrapper[28766]: I0318 09:10:41.607810 28766 scope.go:117] "RemoveContainer" containerID="dbff8077fc42ade1e74dbf92d755831f93316118c2e6792cfe86eff898126971" Mar 18 09:10:41.608547 master-0 kubenswrapper[28766]: I0318 09:10:41.607904 28766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 09:10:41.618143 master-0 kubenswrapper[28766]: I0318 09:10:41.611808 28766 generic.go:334] "Generic (PLEG): container finished" podID="0ac062ca-3c0f-4695-88f9-429c01f79169" containerID="676ed9acf292ebaa8ef6954335549c0cb6a32a8bb08d403196d310b8fc9c6007" exitCode=0 Mar 18 09:10:41.618143 master-0 kubenswrapper[28766]: I0318 09:10:41.611913 28766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-5-master-0" event={"ID":"0ac062ca-3c0f-4695-88f9-429c01f79169","Type":"ContainerDied","Data":"676ed9acf292ebaa8ef6954335549c0cb6a32a8bb08d403196d310b8fc9c6007"} Mar 18 09:10:41.618143 master-0 kubenswrapper[28766]: I0318 09:10:41.613247 28766 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" oldPodUID="221b44bcdfcd6cb77b8e2c3e2f0f2d4d" podUID="1b0af84e08c0ebb6ef970331bd9379be" Mar 18 09:10:41.638686 master-0 kubenswrapper[28766]: I0318 09:10:41.638609 28766 scope.go:117] "RemoveContainer" containerID="4af328de55b37c7b433e572f442808a77f2decb59e8d9510022e17913ed81d1a" Mar 18 09:10:41.659720 master-0 kubenswrapper[28766]: I0318 09:10:41.659643 28766 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" 
pod="openshift-kube-controller-manager/kube-controller-manager-master-0" oldPodUID="221b44bcdfcd6cb77b8e2c3e2f0f2d4d" podUID="1b0af84e08c0ebb6ef970331bd9379be" Mar 18 09:10:41.663722 master-0 kubenswrapper[28766]: I0318 09:10:41.663671 28766 scope.go:117] "RemoveContainer" containerID="05af700b075e3edb59ffbfea5410d94b5accce0f9865867a900b8da32596bc13" Mar 18 09:10:41.697325 master-0 kubenswrapper[28766]: I0318 09:10:41.697257 28766 scope.go:117] "RemoveContainer" containerID="746628ca0f199826dfd3bc06cf87dbdd87e54d430ca10bd219daa6085cbf8a4d" Mar 18 09:10:41.723270 master-0 kubenswrapper[28766]: I0318 09:10:41.723204 28766 scope.go:117] "RemoveContainer" containerID="d97db5d026fe92a56cfbca9758234c41757acb8dc1abb2bb2c234db07d8dfdc2" Mar 18 09:10:41.749280 master-0 kubenswrapper[28766]: I0318 09:10:41.749221 28766 scope.go:117] "RemoveContainer" containerID="dbff8077fc42ade1e74dbf92d755831f93316118c2e6792cfe86eff898126971" Mar 18 09:10:41.750293 master-0 kubenswrapper[28766]: E0318 09:10:41.750204 28766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dbff8077fc42ade1e74dbf92d755831f93316118c2e6792cfe86eff898126971\": container with ID starting with dbff8077fc42ade1e74dbf92d755831f93316118c2e6792cfe86eff898126971 not found: ID does not exist" containerID="dbff8077fc42ade1e74dbf92d755831f93316118c2e6792cfe86eff898126971" Mar 18 09:10:41.750389 master-0 kubenswrapper[28766]: I0318 09:10:41.750321 28766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dbff8077fc42ade1e74dbf92d755831f93316118c2e6792cfe86eff898126971"} err="failed to get container status \"dbff8077fc42ade1e74dbf92d755831f93316118c2e6792cfe86eff898126971\": rpc error: code = NotFound desc = could not find container \"dbff8077fc42ade1e74dbf92d755831f93316118c2e6792cfe86eff898126971\": container with ID starting with dbff8077fc42ade1e74dbf92d755831f93316118c2e6792cfe86eff898126971 not found: 
ID does not exist" Mar 18 09:10:41.750479 master-0 kubenswrapper[28766]: I0318 09:10:41.750388 28766 scope.go:117] "RemoveContainer" containerID="4af328de55b37c7b433e572f442808a77f2decb59e8d9510022e17913ed81d1a" Mar 18 09:10:41.751055 master-0 kubenswrapper[28766]: E0318 09:10:41.751001 28766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4af328de55b37c7b433e572f442808a77f2decb59e8d9510022e17913ed81d1a\": container with ID starting with 4af328de55b37c7b433e572f442808a77f2decb59e8d9510022e17913ed81d1a not found: ID does not exist" containerID="4af328de55b37c7b433e572f442808a77f2decb59e8d9510022e17913ed81d1a" Mar 18 09:10:41.751159 master-0 kubenswrapper[28766]: I0318 09:10:41.751057 28766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4af328de55b37c7b433e572f442808a77f2decb59e8d9510022e17913ed81d1a"} err="failed to get container status \"4af328de55b37c7b433e572f442808a77f2decb59e8d9510022e17913ed81d1a\": rpc error: code = NotFound desc = could not find container \"4af328de55b37c7b433e572f442808a77f2decb59e8d9510022e17913ed81d1a\": container with ID starting with 4af328de55b37c7b433e572f442808a77f2decb59e8d9510022e17913ed81d1a not found: ID does not exist" Mar 18 09:10:41.751159 master-0 kubenswrapper[28766]: I0318 09:10:41.751097 28766 scope.go:117] "RemoveContainer" containerID="05af700b075e3edb59ffbfea5410d94b5accce0f9865867a900b8da32596bc13" Mar 18 09:10:41.751606 master-0 kubenswrapper[28766]: E0318 09:10:41.751551 28766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"05af700b075e3edb59ffbfea5410d94b5accce0f9865867a900b8da32596bc13\": container with ID starting with 05af700b075e3edb59ffbfea5410d94b5accce0f9865867a900b8da32596bc13 not found: ID does not exist" containerID="05af700b075e3edb59ffbfea5410d94b5accce0f9865867a900b8da32596bc13" Mar 18 09:10:41.751690 
master-0 kubenswrapper[28766]: I0318 09:10:41.751601 28766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"05af700b075e3edb59ffbfea5410d94b5accce0f9865867a900b8da32596bc13"} err="failed to get container status \"05af700b075e3edb59ffbfea5410d94b5accce0f9865867a900b8da32596bc13\": rpc error: code = NotFound desc = could not find container \"05af700b075e3edb59ffbfea5410d94b5accce0f9865867a900b8da32596bc13\": container with ID starting with 05af700b075e3edb59ffbfea5410d94b5accce0f9865867a900b8da32596bc13 not found: ID does not exist" Mar 18 09:10:41.751690 master-0 kubenswrapper[28766]: I0318 09:10:41.751645 28766 scope.go:117] "RemoveContainer" containerID="746628ca0f199826dfd3bc06cf87dbdd87e54d430ca10bd219daa6085cbf8a4d" Mar 18 09:10:41.752129 master-0 kubenswrapper[28766]: E0318 09:10:41.752082 28766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"746628ca0f199826dfd3bc06cf87dbdd87e54d430ca10bd219daa6085cbf8a4d\": container with ID starting with 746628ca0f199826dfd3bc06cf87dbdd87e54d430ca10bd219daa6085cbf8a4d not found: ID does not exist" containerID="746628ca0f199826dfd3bc06cf87dbdd87e54d430ca10bd219daa6085cbf8a4d" Mar 18 09:10:41.752218 master-0 kubenswrapper[28766]: I0318 09:10:41.752125 28766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"746628ca0f199826dfd3bc06cf87dbdd87e54d430ca10bd219daa6085cbf8a4d"} err="failed to get container status \"746628ca0f199826dfd3bc06cf87dbdd87e54d430ca10bd219daa6085cbf8a4d\": rpc error: code = NotFound desc = could not find container \"746628ca0f199826dfd3bc06cf87dbdd87e54d430ca10bd219daa6085cbf8a4d\": container with ID starting with 746628ca0f199826dfd3bc06cf87dbdd87e54d430ca10bd219daa6085cbf8a4d not found: ID does not exist" Mar 18 09:10:41.752218 master-0 kubenswrapper[28766]: I0318 09:10:41.752152 28766 scope.go:117] "RemoveContainer" 
containerID="d97db5d026fe92a56cfbca9758234c41757acb8dc1abb2bb2c234db07d8dfdc2" Mar 18 09:10:41.753135 master-0 kubenswrapper[28766]: E0318 09:10:41.753079 28766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d97db5d026fe92a56cfbca9758234c41757acb8dc1abb2bb2c234db07d8dfdc2\": container with ID starting with d97db5d026fe92a56cfbca9758234c41757acb8dc1abb2bb2c234db07d8dfdc2 not found: ID does not exist" containerID="d97db5d026fe92a56cfbca9758234c41757acb8dc1abb2bb2c234db07d8dfdc2" Mar 18 09:10:41.753229 master-0 kubenswrapper[28766]: I0318 09:10:41.753131 28766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d97db5d026fe92a56cfbca9758234c41757acb8dc1abb2bb2c234db07d8dfdc2"} err="failed to get container status \"d97db5d026fe92a56cfbca9758234c41757acb8dc1abb2bb2c234db07d8dfdc2\": rpc error: code = NotFound desc = could not find container \"d97db5d026fe92a56cfbca9758234c41757acb8dc1abb2bb2c234db07d8dfdc2\": container with ID starting with d97db5d026fe92a56cfbca9758234c41757acb8dc1abb2bb2c234db07d8dfdc2 not found: ID does not exist" Mar 18 09:10:41.753229 master-0 kubenswrapper[28766]: I0318 09:10:41.753160 28766 scope.go:117] "RemoveContainer" containerID="dbff8077fc42ade1e74dbf92d755831f93316118c2e6792cfe86eff898126971" Mar 18 09:10:41.753742 master-0 kubenswrapper[28766]: I0318 09:10:41.753636 28766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dbff8077fc42ade1e74dbf92d755831f93316118c2e6792cfe86eff898126971"} err="failed to get container status \"dbff8077fc42ade1e74dbf92d755831f93316118c2e6792cfe86eff898126971\": rpc error: code = NotFound desc = could not find container \"dbff8077fc42ade1e74dbf92d755831f93316118c2e6792cfe86eff898126971\": container with ID starting with dbff8077fc42ade1e74dbf92d755831f93316118c2e6792cfe86eff898126971 not found: ID does not exist" Mar 18 09:10:41.753742 master-0 
kubenswrapper[28766]: I0318 09:10:41.753715 28766 scope.go:117] "RemoveContainer" containerID="4af328de55b37c7b433e572f442808a77f2decb59e8d9510022e17913ed81d1a" Mar 18 09:10:41.754637 master-0 kubenswrapper[28766]: I0318 09:10:41.754477 28766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4af328de55b37c7b433e572f442808a77f2decb59e8d9510022e17913ed81d1a"} err="failed to get container status \"4af328de55b37c7b433e572f442808a77f2decb59e8d9510022e17913ed81d1a\": rpc error: code = NotFound desc = could not find container \"4af328de55b37c7b433e572f442808a77f2decb59e8d9510022e17913ed81d1a\": container with ID starting with 4af328de55b37c7b433e572f442808a77f2decb59e8d9510022e17913ed81d1a not found: ID does not exist" Mar 18 09:10:41.754637 master-0 kubenswrapper[28766]: I0318 09:10:41.754515 28766 scope.go:117] "RemoveContainer" containerID="05af700b075e3edb59ffbfea5410d94b5accce0f9865867a900b8da32596bc13" Mar 18 09:10:41.755050 master-0 kubenswrapper[28766]: I0318 09:10:41.754987 28766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"05af700b075e3edb59ffbfea5410d94b5accce0f9865867a900b8da32596bc13"} err="failed to get container status \"05af700b075e3edb59ffbfea5410d94b5accce0f9865867a900b8da32596bc13\": rpc error: code = NotFound desc = could not find container \"05af700b075e3edb59ffbfea5410d94b5accce0f9865867a900b8da32596bc13\": container with ID starting with 05af700b075e3edb59ffbfea5410d94b5accce0f9865867a900b8da32596bc13 not found: ID does not exist" Mar 18 09:10:41.755050 master-0 kubenswrapper[28766]: I0318 09:10:41.755034 28766 scope.go:117] "RemoveContainer" containerID="746628ca0f199826dfd3bc06cf87dbdd87e54d430ca10bd219daa6085cbf8a4d" Mar 18 09:10:41.755577 master-0 kubenswrapper[28766]: I0318 09:10:41.755523 28766 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"746628ca0f199826dfd3bc06cf87dbdd87e54d430ca10bd219daa6085cbf8a4d"} err="failed to get container status \"746628ca0f199826dfd3bc06cf87dbdd87e54d430ca10bd219daa6085cbf8a4d\": rpc error: code = NotFound desc = could not find container \"746628ca0f199826dfd3bc06cf87dbdd87e54d430ca10bd219daa6085cbf8a4d\": container with ID starting with 746628ca0f199826dfd3bc06cf87dbdd87e54d430ca10bd219daa6085cbf8a4d not found: ID does not exist" Mar 18 09:10:41.755577 master-0 kubenswrapper[28766]: I0318 09:10:41.755565 28766 scope.go:117] "RemoveContainer" containerID="d97db5d026fe92a56cfbca9758234c41757acb8dc1abb2bb2c234db07d8dfdc2" Mar 18 09:10:41.756112 master-0 kubenswrapper[28766]: I0318 09:10:41.756051 28766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d97db5d026fe92a56cfbca9758234c41757acb8dc1abb2bb2c234db07d8dfdc2"} err="failed to get container status \"d97db5d026fe92a56cfbca9758234c41757acb8dc1abb2bb2c234db07d8dfdc2\": rpc error: code = NotFound desc = could not find container \"d97db5d026fe92a56cfbca9758234c41757acb8dc1abb2bb2c234db07d8dfdc2\": container with ID starting with d97db5d026fe92a56cfbca9758234c41757acb8dc1abb2bb2c234db07d8dfdc2 not found: ID does not exist" Mar 18 09:10:41.756112 master-0 kubenswrapper[28766]: I0318 09:10:41.756104 28766 scope.go:117] "RemoveContainer" containerID="dbff8077fc42ade1e74dbf92d755831f93316118c2e6792cfe86eff898126971" Mar 18 09:10:41.756683 master-0 kubenswrapper[28766]: I0318 09:10:41.756624 28766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dbff8077fc42ade1e74dbf92d755831f93316118c2e6792cfe86eff898126971"} err="failed to get container status \"dbff8077fc42ade1e74dbf92d755831f93316118c2e6792cfe86eff898126971\": rpc error: code = NotFound desc = could not find container \"dbff8077fc42ade1e74dbf92d755831f93316118c2e6792cfe86eff898126971\": container with ID starting with 
dbff8077fc42ade1e74dbf92d755831f93316118c2e6792cfe86eff898126971 not found: ID does not exist" Mar 18 09:10:41.756683 master-0 kubenswrapper[28766]: I0318 09:10:41.756670 28766 scope.go:117] "RemoveContainer" containerID="4af328de55b37c7b433e572f442808a77f2decb59e8d9510022e17913ed81d1a" Mar 18 09:10:41.757237 master-0 kubenswrapper[28766]: I0318 09:10:41.757182 28766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4af328de55b37c7b433e572f442808a77f2decb59e8d9510022e17913ed81d1a"} err="failed to get container status \"4af328de55b37c7b433e572f442808a77f2decb59e8d9510022e17913ed81d1a\": rpc error: code = NotFound desc = could not find container \"4af328de55b37c7b433e572f442808a77f2decb59e8d9510022e17913ed81d1a\": container with ID starting with 4af328de55b37c7b433e572f442808a77f2decb59e8d9510022e17913ed81d1a not found: ID does not exist" Mar 18 09:10:41.757237 master-0 kubenswrapper[28766]: I0318 09:10:41.757228 28766 scope.go:117] "RemoveContainer" containerID="05af700b075e3edb59ffbfea5410d94b5accce0f9865867a900b8da32596bc13" Mar 18 09:10:41.757895 master-0 kubenswrapper[28766]: I0318 09:10:41.757824 28766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"05af700b075e3edb59ffbfea5410d94b5accce0f9865867a900b8da32596bc13"} err="failed to get container status \"05af700b075e3edb59ffbfea5410d94b5accce0f9865867a900b8da32596bc13\": rpc error: code = NotFound desc = could not find container \"05af700b075e3edb59ffbfea5410d94b5accce0f9865867a900b8da32596bc13\": container with ID starting with 05af700b075e3edb59ffbfea5410d94b5accce0f9865867a900b8da32596bc13 not found: ID does not exist" Mar 18 09:10:41.757895 master-0 kubenswrapper[28766]: I0318 09:10:41.757891 28766 scope.go:117] "RemoveContainer" containerID="746628ca0f199826dfd3bc06cf87dbdd87e54d430ca10bd219daa6085cbf8a4d" Mar 18 09:10:41.758348 master-0 kubenswrapper[28766]: I0318 09:10:41.758293 28766 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"746628ca0f199826dfd3bc06cf87dbdd87e54d430ca10bd219daa6085cbf8a4d"} err="failed to get container status \"746628ca0f199826dfd3bc06cf87dbdd87e54d430ca10bd219daa6085cbf8a4d\": rpc error: code = NotFound desc = could not find container \"746628ca0f199826dfd3bc06cf87dbdd87e54d430ca10bd219daa6085cbf8a4d\": container with ID starting with 746628ca0f199826dfd3bc06cf87dbdd87e54d430ca10bd219daa6085cbf8a4d not found: ID does not exist" Mar 18 09:10:41.758348 master-0 kubenswrapper[28766]: I0318 09:10:41.758333 28766 scope.go:117] "RemoveContainer" containerID="d97db5d026fe92a56cfbca9758234c41757acb8dc1abb2bb2c234db07d8dfdc2" Mar 18 09:10:41.759216 master-0 kubenswrapper[28766]: I0318 09:10:41.759153 28766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d97db5d026fe92a56cfbca9758234c41757acb8dc1abb2bb2c234db07d8dfdc2"} err="failed to get container status \"d97db5d026fe92a56cfbca9758234c41757acb8dc1abb2bb2c234db07d8dfdc2\": rpc error: code = NotFound desc = could not find container \"d97db5d026fe92a56cfbca9758234c41757acb8dc1abb2bb2c234db07d8dfdc2\": container with ID starting with d97db5d026fe92a56cfbca9758234c41757acb8dc1abb2bb2c234db07d8dfdc2 not found: ID does not exist" Mar 18 09:10:41.759366 master-0 kubenswrapper[28766]: I0318 09:10:41.759209 28766 scope.go:117] "RemoveContainer" containerID="dbff8077fc42ade1e74dbf92d755831f93316118c2e6792cfe86eff898126971" Mar 18 09:10:41.759900 master-0 kubenswrapper[28766]: I0318 09:10:41.759824 28766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dbff8077fc42ade1e74dbf92d755831f93316118c2e6792cfe86eff898126971"} err="failed to get container status \"dbff8077fc42ade1e74dbf92d755831f93316118c2e6792cfe86eff898126971\": rpc error: code = NotFound desc = could not find container \"dbff8077fc42ade1e74dbf92d755831f93316118c2e6792cfe86eff898126971\": container with ID starting 
with dbff8077fc42ade1e74dbf92d755831f93316118c2e6792cfe86eff898126971 not found: ID does not exist" Mar 18 09:10:41.759900 master-0 kubenswrapper[28766]: I0318 09:10:41.759892 28766 scope.go:117] "RemoveContainer" containerID="4af328de55b37c7b433e572f442808a77f2decb59e8d9510022e17913ed81d1a" Mar 18 09:10:41.761159 master-0 kubenswrapper[28766]: I0318 09:10:41.761096 28766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4af328de55b37c7b433e572f442808a77f2decb59e8d9510022e17913ed81d1a"} err="failed to get container status \"4af328de55b37c7b433e572f442808a77f2decb59e8d9510022e17913ed81d1a\": rpc error: code = NotFound desc = could not find container \"4af328de55b37c7b433e572f442808a77f2decb59e8d9510022e17913ed81d1a\": container with ID starting with 4af328de55b37c7b433e572f442808a77f2decb59e8d9510022e17913ed81d1a not found: ID does not exist" Mar 18 09:10:41.761159 master-0 kubenswrapper[28766]: I0318 09:10:41.761151 28766 scope.go:117] "RemoveContainer" containerID="05af700b075e3edb59ffbfea5410d94b5accce0f9865867a900b8da32596bc13" Mar 18 09:10:41.761553 master-0 kubenswrapper[28766]: I0318 09:10:41.761497 28766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"05af700b075e3edb59ffbfea5410d94b5accce0f9865867a900b8da32596bc13"} err="failed to get container status \"05af700b075e3edb59ffbfea5410d94b5accce0f9865867a900b8da32596bc13\": rpc error: code = NotFound desc = could not find container \"05af700b075e3edb59ffbfea5410d94b5accce0f9865867a900b8da32596bc13\": container with ID starting with 05af700b075e3edb59ffbfea5410d94b5accce0f9865867a900b8da32596bc13 not found: ID does not exist" Mar 18 09:10:41.761553 master-0 kubenswrapper[28766]: I0318 09:10:41.761542 28766 scope.go:117] "RemoveContainer" containerID="746628ca0f199826dfd3bc06cf87dbdd87e54d430ca10bd219daa6085cbf8a4d" Mar 18 09:10:41.762059 master-0 kubenswrapper[28766]: I0318 09:10:41.762000 28766 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"746628ca0f199826dfd3bc06cf87dbdd87e54d430ca10bd219daa6085cbf8a4d"} err="failed to get container status \"746628ca0f199826dfd3bc06cf87dbdd87e54d430ca10bd219daa6085cbf8a4d\": rpc error: code = NotFound desc = could not find container \"746628ca0f199826dfd3bc06cf87dbdd87e54d430ca10bd219daa6085cbf8a4d\": container with ID starting with 746628ca0f199826dfd3bc06cf87dbdd87e54d430ca10bd219daa6085cbf8a4d not found: ID does not exist" Mar 18 09:10:41.762059 master-0 kubenswrapper[28766]: I0318 09:10:41.762050 28766 scope.go:117] "RemoveContainer" containerID="d97db5d026fe92a56cfbca9758234c41757acb8dc1abb2bb2c234db07d8dfdc2" Mar 18 09:10:41.762465 master-0 kubenswrapper[28766]: I0318 09:10:41.762403 28766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d97db5d026fe92a56cfbca9758234c41757acb8dc1abb2bb2c234db07d8dfdc2"} err="failed to get container status \"d97db5d026fe92a56cfbca9758234c41757acb8dc1abb2bb2c234db07d8dfdc2\": rpc error: code = NotFound desc = could not find container \"d97db5d026fe92a56cfbca9758234c41757acb8dc1abb2bb2c234db07d8dfdc2\": container with ID starting with d97db5d026fe92a56cfbca9758234c41757acb8dc1abb2bb2c234db07d8dfdc2 not found: ID does not exist" Mar 18 09:10:42.971124 master-0 kubenswrapper[28766]: I0318 09:10:42.971027 28766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/installer-5-master-0" Mar 18 09:10:43.033061 master-0 kubenswrapper[28766]: I0318 09:10:43.032974 28766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0ac062ca-3c0f-4695-88f9-429c01f79169-kube-api-access\") pod \"0ac062ca-3c0f-4695-88f9-429c01f79169\" (UID: \"0ac062ca-3c0f-4695-88f9-429c01f79169\") " Mar 18 09:10:43.033061 master-0 kubenswrapper[28766]: I0318 09:10:43.033059 28766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/0ac062ca-3c0f-4695-88f9-429c01f79169-var-lock\") pod \"0ac062ca-3c0f-4695-88f9-429c01f79169\" (UID: \"0ac062ca-3c0f-4695-88f9-429c01f79169\") " Mar 18 09:10:43.033320 master-0 kubenswrapper[28766]: I0318 09:10:43.033119 28766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0ac062ca-3c0f-4695-88f9-429c01f79169-kubelet-dir\") pod \"0ac062ca-3c0f-4695-88f9-429c01f79169\" (UID: \"0ac062ca-3c0f-4695-88f9-429c01f79169\") " Mar 18 09:10:43.033873 master-0 kubenswrapper[28766]: I0318 09:10:43.033822 28766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0ac062ca-3c0f-4695-88f9-429c01f79169-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "0ac062ca-3c0f-4695-88f9-429c01f79169" (UID: "0ac062ca-3c0f-4695-88f9-429c01f79169"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 09:10:43.034220 master-0 kubenswrapper[28766]: I0318 09:10:43.034173 28766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0ac062ca-3c0f-4695-88f9-429c01f79169-var-lock" (OuterVolumeSpecName: "var-lock") pod "0ac062ca-3c0f-4695-88f9-429c01f79169" (UID: "0ac062ca-3c0f-4695-88f9-429c01f79169"). 
InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 09:10:43.037517 master-0 kubenswrapper[28766]: I0318 09:10:43.037476 28766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0ac062ca-3c0f-4695-88f9-429c01f79169-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0ac062ca-3c0f-4695-88f9-429c01f79169" (UID: "0ac062ca-3c0f-4695-88f9-429c01f79169"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 09:10:43.137137 master-0 kubenswrapper[28766]: I0318 09:10:43.137076 28766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0ac062ca-3c0f-4695-88f9-429c01f79169-kube-api-access\") on node \"master-0\" DevicePath \"\"" Mar 18 09:10:43.137137 master-0 kubenswrapper[28766]: I0318 09:10:43.137118 28766 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/0ac062ca-3c0f-4695-88f9-429c01f79169-var-lock\") on node \"master-0\" DevicePath \"\"" Mar 18 09:10:43.137137 master-0 kubenswrapper[28766]: I0318 09:10:43.137129 28766 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0ac062ca-3c0f-4695-88f9-429c01f79169-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Mar 18 09:10:43.250236 master-0 kubenswrapper[28766]: I0318 09:10:43.250136 28766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="221b44bcdfcd6cb77b8e2c3e2f0f2d4d" path="/var/lib/kubelet/pods/221b44bcdfcd6cb77b8e2c3e2f0f2d4d/volumes" Mar 18 09:10:43.635926 master-0 kubenswrapper[28766]: I0318 09:10:43.635695 28766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-5-master-0" event={"ID":"0ac062ca-3c0f-4695-88f9-429c01f79169","Type":"ContainerDied","Data":"4bc43eedea6f0bba97047c445396cfbece342bd9bd46256796201772a206170d"} Mar 18 
09:10:43.635926 master-0 kubenswrapper[28766]: I0318 09:10:43.635772 28766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4bc43eedea6f0bba97047c445396cfbece342bd9bd46256796201772a206170d" Mar 18 09:10:43.635926 master-0 kubenswrapper[28766]: I0318 09:10:43.635924 28766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-5-master-0" Mar 18 09:10:54.233183 master-0 kubenswrapper[28766]: I0318 09:10:54.233108 28766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 09:10:54.251288 master-0 kubenswrapper[28766]: I0318 09:10:54.251236 28766 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="073ceb20-6092-4533-a49b-d55d245ff5c6" Mar 18 09:10:54.251288 master-0 kubenswrapper[28766]: I0318 09:10:54.251281 28766 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="073ceb20-6092-4533-a49b-d55d245ff5c6" Mar 18 09:10:54.272101 master-0 kubenswrapper[28766]: I0318 09:10:54.272038 28766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"] Mar 18 09:10:54.309185 master-0 kubenswrapper[28766]: I0318 09:10:54.309119 28766 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 09:10:54.316425 master-0 kubenswrapper[28766]: I0318 09:10:54.316368 28766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"] Mar 18 09:10:54.361737 master-0 kubenswrapper[28766]: I0318 09:10:54.361631 28766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 09:10:54.381555 master-0 kubenswrapper[28766]: I0318 09:10:54.381467 28766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"] Mar 18 09:10:54.740019 master-0 kubenswrapper[28766]: I0318 09:10:54.739938 28766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"1b0af84e08c0ebb6ef970331bd9379be","Type":"ContainerStarted","Data":"fc5f5ea8593d67a4848dfbc173c7affc352e5e69f8825ae52b9b6d0961a54211"} Mar 18 09:10:54.740019 master-0 kubenswrapper[28766]: I0318 09:10:54.740000 28766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"1b0af84e08c0ebb6ef970331bd9379be","Type":"ContainerStarted","Data":"bea35b7fa83f7694c626a03ff9e5c6969365da846f0fd00d6ed540badd4fa373"} Mar 18 09:10:55.750219 master-0 kubenswrapper[28766]: I0318 09:10:55.750155 28766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"1b0af84e08c0ebb6ef970331bd9379be","Type":"ContainerStarted","Data":"5acbf0487a7ba80c2ab2f4ed5e20bcff3405dc26e313f261b52b64d4ba0bac49"} Mar 18 09:10:55.750219 master-0 kubenswrapper[28766]: I0318 09:10:55.750218 28766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"1b0af84e08c0ebb6ef970331bd9379be","Type":"ContainerStarted","Data":"6f3d273526b129960cdc0e58f4fff2a113cf68ed1003d6dbf2ac4d424642f805"} Mar 18 09:10:56.760118 master-0 kubenswrapper[28766]: I0318 09:10:56.760026 28766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" 
event={"ID":"1b0af84e08c0ebb6ef970331bd9379be","Type":"ContainerStarted","Data":"9fb6a8690924f7101572022e0861956fee7d9589e5d14b0bc50a07771326ef5c"} Mar 18 09:10:56.798484 master-0 kubenswrapper[28766]: I0318 09:10:56.798335 28766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podStartSLOduration=2.798313233 podStartE2EDuration="2.798313233s" podCreationTimestamp="2026-03-18 09:10:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 09:10:56.794846333 +0000 UTC m=+409.809104999" watchObservedRunningTime="2026-03-18 09:10:56.798313233 +0000 UTC m=+409.812571899" Mar 18 09:11:04.362639 master-0 kubenswrapper[28766]: I0318 09:11:04.362517 28766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 09:11:04.362639 master-0 kubenswrapper[28766]: I0318 09:11:04.362621 28766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 09:11:04.362639 master-0 kubenswrapper[28766]: I0318 09:11:04.362636 28766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 09:11:04.362639 master-0 kubenswrapper[28766]: I0318 09:11:04.362648 28766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 09:11:04.370337 master-0 kubenswrapper[28766]: I0318 09:11:04.370274 28766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 09:11:04.374308 master-0 kubenswrapper[28766]: I0318 09:11:04.374255 28766 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 09:11:04.848250 master-0 kubenswrapper[28766]: I0318 09:11:04.848167 28766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 09:11:05.854497 master-0 kubenswrapper[28766]: I0318 09:11:05.854425 28766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 09:11:11.299665 master-0 kubenswrapper[28766]: I0318 09:11:11.299601 28766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4tl8s8"] Mar 18 09:11:11.302264 master-0 kubenswrapper[28766]: E0318 09:11:11.299891 28766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ac062ca-3c0f-4695-88f9-429c01f79169" containerName="installer" Mar 18 09:11:11.302264 master-0 kubenswrapper[28766]: I0318 09:11:11.299904 28766 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ac062ca-3c0f-4695-88f9-429c01f79169" containerName="installer" Mar 18 09:11:11.302264 master-0 kubenswrapper[28766]: I0318 09:11:11.300055 28766 memory_manager.go:354] "RemoveStaleState removing state" podUID="0ac062ca-3c0f-4695-88f9-429c01f79169" containerName="installer" Mar 18 09:11:11.302264 master-0 kubenswrapper[28766]: I0318 09:11:11.300985 28766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4tl8s8" Mar 18 09:11:11.321215 master-0 kubenswrapper[28766]: I0318 09:11:11.321136 28766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4tl8s8"] Mar 18 09:11:11.349646 master-0 kubenswrapper[28766]: I0318 09:11:11.349593 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d4bc4192-d18c-4dea-b0be-ce01eee9ad54-bundle\") pod \"7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4tl8s8\" (UID: \"d4bc4192-d18c-4dea-b0be-ce01eee9ad54\") " pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4tl8s8" Mar 18 09:11:11.349920 master-0 kubenswrapper[28766]: I0318 09:11:11.349662 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hrbgh\" (UniqueName: \"kubernetes.io/projected/d4bc4192-d18c-4dea-b0be-ce01eee9ad54-kube-api-access-hrbgh\") pod \"7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4tl8s8\" (UID: \"d4bc4192-d18c-4dea-b0be-ce01eee9ad54\") " pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4tl8s8" Mar 18 09:11:11.349920 master-0 kubenswrapper[28766]: I0318 09:11:11.349693 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d4bc4192-d18c-4dea-b0be-ce01eee9ad54-util\") pod \"7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4tl8s8\" (UID: \"d4bc4192-d18c-4dea-b0be-ce01eee9ad54\") " pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4tl8s8" Mar 18 09:11:11.402574 master-0 kubenswrapper[28766]: I0318 09:11:11.402484 28766 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-console/console-657dc898cd-mhjh7"] Mar 18 09:11:11.403786 master-0 kubenswrapper[28766]: I0318 09:11:11.403752 28766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-657dc898cd-mhjh7" Mar 18 09:11:11.427343 master-0 kubenswrapper[28766]: I0318 09:11:11.427276 28766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-657dc898cd-mhjh7"] Mar 18 09:11:11.451147 master-0 kubenswrapper[28766]: I0318 09:11:11.451072 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/8b6dbc8f-2a16-4c68-a049-1f5b271623ff-console-config\") pod \"console-657dc898cd-mhjh7\" (UID: \"8b6dbc8f-2a16-4c68-a049-1f5b271623ff\") " pod="openshift-console/console-657dc898cd-mhjh7" Mar 18 09:11:11.451147 master-0 kubenswrapper[28766]: I0318 09:11:11.451129 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/8b6dbc8f-2a16-4c68-a049-1f5b271623ff-console-oauth-config\") pod \"console-657dc898cd-mhjh7\" (UID: \"8b6dbc8f-2a16-4c68-a049-1f5b271623ff\") " pod="openshift-console/console-657dc898cd-mhjh7" Mar 18 09:11:11.451147 master-0 kubenswrapper[28766]: I0318 09:11:11.451158 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/8b6dbc8f-2a16-4c68-a049-1f5b271623ff-service-ca\") pod \"console-657dc898cd-mhjh7\" (UID: \"8b6dbc8f-2a16-4c68-a049-1f5b271623ff\") " pod="openshift-console/console-657dc898cd-mhjh7" Mar 18 09:11:11.451433 master-0 kubenswrapper[28766]: I0318 09:11:11.451226 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hrbgh\" (UniqueName: \"kubernetes.io/projected/d4bc4192-d18c-4dea-b0be-ce01eee9ad54-kube-api-access-hrbgh\") pod 
\"7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4tl8s8\" (UID: \"d4bc4192-d18c-4dea-b0be-ce01eee9ad54\") " pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4tl8s8" Mar 18 09:11:11.451433 master-0 kubenswrapper[28766]: I0318 09:11:11.451306 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d4bc4192-d18c-4dea-b0be-ce01eee9ad54-util\") pod \"7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4tl8s8\" (UID: \"d4bc4192-d18c-4dea-b0be-ce01eee9ad54\") " pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4tl8s8" Mar 18 09:11:11.451879 master-0 kubenswrapper[28766]: I0318 09:11:11.451832 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d4bc4192-d18c-4dea-b0be-ce01eee9ad54-util\") pod \"7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4tl8s8\" (UID: \"d4bc4192-d18c-4dea-b0be-ce01eee9ad54\") " pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4tl8s8" Mar 18 09:11:11.452027 master-0 kubenswrapper[28766]: I0318 09:11:11.451996 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mcwp6\" (UniqueName: \"kubernetes.io/projected/8b6dbc8f-2a16-4c68-a049-1f5b271623ff-kube-api-access-mcwp6\") pod \"console-657dc898cd-mhjh7\" (UID: \"8b6dbc8f-2a16-4c68-a049-1f5b271623ff\") " pod="openshift-console/console-657dc898cd-mhjh7" Mar 18 09:11:11.452067 master-0 kubenswrapper[28766]: I0318 09:11:11.452030 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/8b6dbc8f-2a16-4c68-a049-1f5b271623ff-console-serving-cert\") pod \"console-657dc898cd-mhjh7\" (UID: \"8b6dbc8f-2a16-4c68-a049-1f5b271623ff\") " pod="openshift-console/console-657dc898cd-mhjh7" 
Mar 18 09:11:11.452067 master-0 kubenswrapper[28766]: I0318 09:11:11.452052 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8b6dbc8f-2a16-4c68-a049-1f5b271623ff-trusted-ca-bundle\") pod \"console-657dc898cd-mhjh7\" (UID: \"8b6dbc8f-2a16-4c68-a049-1f5b271623ff\") " pod="openshift-console/console-657dc898cd-mhjh7" Mar 18 09:11:11.452132 master-0 kubenswrapper[28766]: I0318 09:11:11.452075 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/8b6dbc8f-2a16-4c68-a049-1f5b271623ff-oauth-serving-cert\") pod \"console-657dc898cd-mhjh7\" (UID: \"8b6dbc8f-2a16-4c68-a049-1f5b271623ff\") " pod="openshift-console/console-657dc898cd-mhjh7" Mar 18 09:11:11.452132 master-0 kubenswrapper[28766]: I0318 09:11:11.452106 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d4bc4192-d18c-4dea-b0be-ce01eee9ad54-bundle\") pod \"7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4tl8s8\" (UID: \"d4bc4192-d18c-4dea-b0be-ce01eee9ad54\") " pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4tl8s8" Mar 18 09:11:11.452410 master-0 kubenswrapper[28766]: I0318 09:11:11.452383 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d4bc4192-d18c-4dea-b0be-ce01eee9ad54-bundle\") pod \"7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4tl8s8\" (UID: \"d4bc4192-d18c-4dea-b0be-ce01eee9ad54\") " pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4tl8s8" Mar 18 09:11:11.470818 master-0 kubenswrapper[28766]: I0318 09:11:11.470767 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hrbgh\" (UniqueName: 
\"kubernetes.io/projected/d4bc4192-d18c-4dea-b0be-ce01eee9ad54-kube-api-access-hrbgh\") pod \"7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4tl8s8\" (UID: \"d4bc4192-d18c-4dea-b0be-ce01eee9ad54\") " pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4tl8s8" Mar 18 09:11:11.553642 master-0 kubenswrapper[28766]: I0318 09:11:11.553475 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mcwp6\" (UniqueName: \"kubernetes.io/projected/8b6dbc8f-2a16-4c68-a049-1f5b271623ff-kube-api-access-mcwp6\") pod \"console-657dc898cd-mhjh7\" (UID: \"8b6dbc8f-2a16-4c68-a049-1f5b271623ff\") " pod="openshift-console/console-657dc898cd-mhjh7" Mar 18 09:11:11.553642 master-0 kubenswrapper[28766]: I0318 09:11:11.553541 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/8b6dbc8f-2a16-4c68-a049-1f5b271623ff-console-serving-cert\") pod \"console-657dc898cd-mhjh7\" (UID: \"8b6dbc8f-2a16-4c68-a049-1f5b271623ff\") " pod="openshift-console/console-657dc898cd-mhjh7" Mar 18 09:11:11.555135 master-0 kubenswrapper[28766]: I0318 09:11:11.553754 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8b6dbc8f-2a16-4c68-a049-1f5b271623ff-trusted-ca-bundle\") pod \"console-657dc898cd-mhjh7\" (UID: \"8b6dbc8f-2a16-4c68-a049-1f5b271623ff\") " pod="openshift-console/console-657dc898cd-mhjh7" Mar 18 09:11:11.555257 master-0 kubenswrapper[28766]: I0318 09:11:11.555171 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/8b6dbc8f-2a16-4c68-a049-1f5b271623ff-oauth-serving-cert\") pod \"console-657dc898cd-mhjh7\" (UID: \"8b6dbc8f-2a16-4c68-a049-1f5b271623ff\") " pod="openshift-console/console-657dc898cd-mhjh7" Mar 18 09:11:11.555389 master-0 
kubenswrapper[28766]: I0318 09:11:11.555349 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/8b6dbc8f-2a16-4c68-a049-1f5b271623ff-console-config\") pod \"console-657dc898cd-mhjh7\" (UID: \"8b6dbc8f-2a16-4c68-a049-1f5b271623ff\") " pod="openshift-console/console-657dc898cd-mhjh7" Mar 18 09:11:11.555475 master-0 kubenswrapper[28766]: I0318 09:11:11.555409 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8b6dbc8f-2a16-4c68-a049-1f5b271623ff-trusted-ca-bundle\") pod \"console-657dc898cd-mhjh7\" (UID: \"8b6dbc8f-2a16-4c68-a049-1f5b271623ff\") " pod="openshift-console/console-657dc898cd-mhjh7" Mar 18 09:11:11.555475 master-0 kubenswrapper[28766]: I0318 09:11:11.555426 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/8b6dbc8f-2a16-4c68-a049-1f5b271623ff-console-oauth-config\") pod \"console-657dc898cd-mhjh7\" (UID: \"8b6dbc8f-2a16-4c68-a049-1f5b271623ff\") " pod="openshift-console/console-657dc898cd-mhjh7" Mar 18 09:11:11.556142 master-0 kubenswrapper[28766]: I0318 09:11:11.556092 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/8b6dbc8f-2a16-4c68-a049-1f5b271623ff-oauth-serving-cert\") pod \"console-657dc898cd-mhjh7\" (UID: \"8b6dbc8f-2a16-4c68-a049-1f5b271623ff\") " pod="openshift-console/console-657dc898cd-mhjh7" Mar 18 09:11:11.556246 master-0 kubenswrapper[28766]: I0318 09:11:11.556123 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/8b6dbc8f-2a16-4c68-a049-1f5b271623ff-service-ca\") pod \"console-657dc898cd-mhjh7\" (UID: \"8b6dbc8f-2a16-4c68-a049-1f5b271623ff\") " pod="openshift-console/console-657dc898cd-mhjh7" Mar 18 
09:11:11.557043 master-0 kubenswrapper[28766]: I0318 09:11:11.557003 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/8b6dbc8f-2a16-4c68-a049-1f5b271623ff-service-ca\") pod \"console-657dc898cd-mhjh7\" (UID: \"8b6dbc8f-2a16-4c68-a049-1f5b271623ff\") " pod="openshift-console/console-657dc898cd-mhjh7" Mar 18 09:11:11.557253 master-0 kubenswrapper[28766]: I0318 09:11:11.557153 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/8b6dbc8f-2a16-4c68-a049-1f5b271623ff-console-config\") pod \"console-657dc898cd-mhjh7\" (UID: \"8b6dbc8f-2a16-4c68-a049-1f5b271623ff\") " pod="openshift-console/console-657dc898cd-mhjh7" Mar 18 09:11:11.558987 master-0 kubenswrapper[28766]: I0318 09:11:11.558918 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/8b6dbc8f-2a16-4c68-a049-1f5b271623ff-console-serving-cert\") pod \"console-657dc898cd-mhjh7\" (UID: \"8b6dbc8f-2a16-4c68-a049-1f5b271623ff\") " pod="openshift-console/console-657dc898cd-mhjh7" Mar 18 09:11:11.561512 master-0 kubenswrapper[28766]: I0318 09:11:11.561455 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/8b6dbc8f-2a16-4c68-a049-1f5b271623ff-console-oauth-config\") pod \"console-657dc898cd-mhjh7\" (UID: \"8b6dbc8f-2a16-4c68-a049-1f5b271623ff\") " pod="openshift-console/console-657dc898cd-mhjh7" Mar 18 09:11:11.571691 master-0 kubenswrapper[28766]: I0318 09:11:11.571625 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mcwp6\" (UniqueName: \"kubernetes.io/projected/8b6dbc8f-2a16-4c68-a049-1f5b271623ff-kube-api-access-mcwp6\") pod \"console-657dc898cd-mhjh7\" (UID: \"8b6dbc8f-2a16-4c68-a049-1f5b271623ff\") " pod="openshift-console/console-657dc898cd-mhjh7" Mar 18 
09:11:11.615109 master-0 kubenswrapper[28766]: I0318 09:11:11.614494 28766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4tl8s8" Mar 18 09:11:11.727351 master-0 kubenswrapper[28766]: I0318 09:11:11.725434 28766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-657dc898cd-mhjh7" Mar 18 09:11:12.060779 master-0 kubenswrapper[28766]: I0318 09:11:12.060709 28766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4tl8s8"] Mar 18 09:11:12.064722 master-0 kubenswrapper[28766]: W0318 09:11:12.064675 28766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd4bc4192_d18c_4dea_b0be_ce01eee9ad54.slice/crio-99913cc35ecca775da4031b65e73263349e22696297f5532333d1d965702ecca WatchSource:0}: Error finding container 99913cc35ecca775da4031b65e73263349e22696297f5532333d1d965702ecca: Status 404 returned error can't find the container with id 99913cc35ecca775da4031b65e73263349e22696297f5532333d1d965702ecca Mar 18 09:11:12.165061 master-0 kubenswrapper[28766]: I0318 09:11:12.165011 28766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-657dc898cd-mhjh7"] Mar 18 09:11:12.165666 master-0 kubenswrapper[28766]: W0318 09:11:12.165578 28766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8b6dbc8f_2a16_4c68_a049_1f5b271623ff.slice/crio-a37b58a26a583b730242a5866957d99a663ec889b8f7223d9b8898968f3b61bb WatchSource:0}: Error finding container a37b58a26a583b730242a5866957d99a663ec889b8f7223d9b8898968f3b61bb: Status 404 returned error can't find the container with id a37b58a26a583b730242a5866957d99a663ec889b8f7223d9b8898968f3b61bb Mar 18 09:11:12.905263 master-0 
kubenswrapper[28766]: I0318 09:11:12.905201 28766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-657dc898cd-mhjh7" event={"ID":"8b6dbc8f-2a16-4c68-a049-1f5b271623ff","Type":"ContainerStarted","Data":"f0735329b2e80c40c6d094fd9c879ce76f9fa645657bcbd9d8ba68a1fc0e82e3"} Mar 18 09:11:12.905900 master-0 kubenswrapper[28766]: I0318 09:11:12.905264 28766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-657dc898cd-mhjh7" event={"ID":"8b6dbc8f-2a16-4c68-a049-1f5b271623ff","Type":"ContainerStarted","Data":"a37b58a26a583b730242a5866957d99a663ec889b8f7223d9b8898968f3b61bb"} Mar 18 09:11:12.906710 master-0 kubenswrapper[28766]: I0318 09:11:12.906665 28766 generic.go:334] "Generic (PLEG): container finished" podID="d4bc4192-d18c-4dea-b0be-ce01eee9ad54" containerID="8796d4a9d63d27a70962d2a6eb04d89bd1750a3847a9ba7d9c390b05955224fa" exitCode=0 Mar 18 09:11:12.906787 master-0 kubenswrapper[28766]: I0318 09:11:12.906714 28766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4tl8s8" event={"ID":"d4bc4192-d18c-4dea-b0be-ce01eee9ad54","Type":"ContainerDied","Data":"8796d4a9d63d27a70962d2a6eb04d89bd1750a3847a9ba7d9c390b05955224fa"} Mar 18 09:11:12.906787 master-0 kubenswrapper[28766]: I0318 09:11:12.906748 28766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4tl8s8" event={"ID":"d4bc4192-d18c-4dea-b0be-ce01eee9ad54","Type":"ContainerStarted","Data":"99913cc35ecca775da4031b65e73263349e22696297f5532333d1d965702ecca"} Mar 18 09:11:12.908163 master-0 kubenswrapper[28766]: I0318 09:11:12.908105 28766 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 18 09:11:12.927679 master-0 kubenswrapper[28766]: I0318 09:11:12.927595 28766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-console/console-657dc898cd-mhjh7" podStartSLOduration=1.9275774829999999 podStartE2EDuration="1.927577483s" podCreationTimestamp="2026-03-18 09:11:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 09:11:12.922877521 +0000 UTC m=+425.937136227" watchObservedRunningTime="2026-03-18 09:11:12.927577483 +0000 UTC m=+425.941836149" Mar 18 09:11:14.922819 master-0 kubenswrapper[28766]: I0318 09:11:14.922704 28766 generic.go:334] "Generic (PLEG): container finished" podID="d4bc4192-d18c-4dea-b0be-ce01eee9ad54" containerID="c05e436378342eeeeaf0402c6b0ec79aeb116307027209b5356d91c7d008e157" exitCode=0 Mar 18 09:11:14.922819 master-0 kubenswrapper[28766]: I0318 09:11:14.922761 28766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4tl8s8" event={"ID":"d4bc4192-d18c-4dea-b0be-ce01eee9ad54","Type":"ContainerDied","Data":"c05e436378342eeeeaf0402c6b0ec79aeb116307027209b5356d91c7d008e157"} Mar 18 09:11:15.936669 master-0 kubenswrapper[28766]: I0318 09:11:15.936561 28766 generic.go:334] "Generic (PLEG): container finished" podID="d4bc4192-d18c-4dea-b0be-ce01eee9ad54" containerID="3b43250ae009aa1d85763ef4cef107807456dfb2d7612d8d08c36f5648afa82b" exitCode=0 Mar 18 09:11:15.936669 master-0 kubenswrapper[28766]: I0318 09:11:15.936657 28766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4tl8s8" event={"ID":"d4bc4192-d18c-4dea-b0be-ce01eee9ad54","Type":"ContainerDied","Data":"3b43250ae009aa1d85763ef4cef107807456dfb2d7612d8d08c36f5648afa82b"} Mar 18 09:11:17.358918 master-0 kubenswrapper[28766]: I0318 09:11:17.358797 28766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4tl8s8" Mar 18 09:11:17.460281 master-0 kubenswrapper[28766]: I0318 09:11:17.460172 28766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d4bc4192-d18c-4dea-b0be-ce01eee9ad54-util\") pod \"d4bc4192-d18c-4dea-b0be-ce01eee9ad54\" (UID: \"d4bc4192-d18c-4dea-b0be-ce01eee9ad54\") " Mar 18 09:11:17.460281 master-0 kubenswrapper[28766]: I0318 09:11:17.460269 28766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d4bc4192-d18c-4dea-b0be-ce01eee9ad54-bundle\") pod \"d4bc4192-d18c-4dea-b0be-ce01eee9ad54\" (UID: \"d4bc4192-d18c-4dea-b0be-ce01eee9ad54\") " Mar 18 09:11:17.460983 master-0 kubenswrapper[28766]: I0318 09:11:17.460362 28766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hrbgh\" (UniqueName: \"kubernetes.io/projected/d4bc4192-d18c-4dea-b0be-ce01eee9ad54-kube-api-access-hrbgh\") pod \"d4bc4192-d18c-4dea-b0be-ce01eee9ad54\" (UID: \"d4bc4192-d18c-4dea-b0be-ce01eee9ad54\") " Mar 18 09:11:17.463256 master-0 kubenswrapper[28766]: I0318 09:11:17.463003 28766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d4bc4192-d18c-4dea-b0be-ce01eee9ad54-bundle" (OuterVolumeSpecName: "bundle") pod "d4bc4192-d18c-4dea-b0be-ce01eee9ad54" (UID: "d4bc4192-d18c-4dea-b0be-ce01eee9ad54"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 18 09:11:17.466447 master-0 kubenswrapper[28766]: I0318 09:11:17.466371 28766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d4bc4192-d18c-4dea-b0be-ce01eee9ad54-kube-api-access-hrbgh" (OuterVolumeSpecName: "kube-api-access-hrbgh") pod "d4bc4192-d18c-4dea-b0be-ce01eee9ad54" (UID: "d4bc4192-d18c-4dea-b0be-ce01eee9ad54"). InnerVolumeSpecName "kube-api-access-hrbgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 09:11:17.494925 master-0 kubenswrapper[28766]: I0318 09:11:17.494779 28766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d4bc4192-d18c-4dea-b0be-ce01eee9ad54-util" (OuterVolumeSpecName: "util") pod "d4bc4192-d18c-4dea-b0be-ce01eee9ad54" (UID: "d4bc4192-d18c-4dea-b0be-ce01eee9ad54"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 18 09:11:17.564689 master-0 kubenswrapper[28766]: I0318 09:11:17.564418 28766 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d4bc4192-d18c-4dea-b0be-ce01eee9ad54-bundle\") on node \"master-0\" DevicePath \"\"" Mar 18 09:11:17.564689 master-0 kubenswrapper[28766]: I0318 09:11:17.564548 28766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hrbgh\" (UniqueName: \"kubernetes.io/projected/d4bc4192-d18c-4dea-b0be-ce01eee9ad54-kube-api-access-hrbgh\") on node \"master-0\" DevicePath \"\"" Mar 18 09:11:17.564689 master-0 kubenswrapper[28766]: I0318 09:11:17.564578 28766 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d4bc4192-d18c-4dea-b0be-ce01eee9ad54-util\") on node \"master-0\" DevicePath \"\"" Mar 18 09:11:17.976352 master-0 kubenswrapper[28766]: I0318 09:11:17.976110 28766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4tl8s8" event={"ID":"d4bc4192-d18c-4dea-b0be-ce01eee9ad54","Type":"ContainerDied","Data":"99913cc35ecca775da4031b65e73263349e22696297f5532333d1d965702ecca"} Mar 18 09:11:17.976352 master-0 kubenswrapper[28766]: I0318 09:11:17.976184 28766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="99913cc35ecca775da4031b65e73263349e22696297f5532333d1d965702ecca" Mar 18 09:11:17.977285 master-0 kubenswrapper[28766]: I0318 09:11:17.977194 28766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4tl8s8" Mar 18 09:11:21.726258 master-0 kubenswrapper[28766]: I0318 09:11:21.726152 28766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-657dc898cd-mhjh7" Mar 18 09:11:21.726258 master-0 kubenswrapper[28766]: I0318 09:11:21.726228 28766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-657dc898cd-mhjh7" Mar 18 09:11:21.738381 master-0 kubenswrapper[28766]: I0318 09:11:21.738287 28766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-657dc898cd-mhjh7" Mar 18 09:11:22.019218 master-0 kubenswrapper[28766]: I0318 09:11:22.019128 28766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-657dc898cd-mhjh7" Mar 18 09:11:22.139087 master-0 kubenswrapper[28766]: I0318 09:11:22.138997 28766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-6c699958d9-6qrdl"] Mar 18 09:11:28.154237 master-0 kubenswrapper[28766]: I0318 09:11:28.154167 28766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-storage/lvms-operator-5bdfbd4c57-vn2r8"] Mar 18 09:11:28.155109 master-0 kubenswrapper[28766]: E0318 09:11:28.154458 28766 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="d4bc4192-d18c-4dea-b0be-ce01eee9ad54" containerName="util" Mar 18 09:11:28.155109 master-0 kubenswrapper[28766]: I0318 09:11:28.154473 28766 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4bc4192-d18c-4dea-b0be-ce01eee9ad54" containerName="util" Mar 18 09:11:28.155109 master-0 kubenswrapper[28766]: E0318 09:11:28.154506 28766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4bc4192-d18c-4dea-b0be-ce01eee9ad54" containerName="pull" Mar 18 09:11:28.155109 master-0 kubenswrapper[28766]: I0318 09:11:28.154514 28766 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4bc4192-d18c-4dea-b0be-ce01eee9ad54" containerName="pull" Mar 18 09:11:28.155109 master-0 kubenswrapper[28766]: E0318 09:11:28.154531 28766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4bc4192-d18c-4dea-b0be-ce01eee9ad54" containerName="extract" Mar 18 09:11:28.155109 master-0 kubenswrapper[28766]: I0318 09:11:28.154540 28766 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4bc4192-d18c-4dea-b0be-ce01eee9ad54" containerName="extract" Mar 18 09:11:28.155109 master-0 kubenswrapper[28766]: I0318 09:11:28.154709 28766 memory_manager.go:354] "RemoveStaleState removing state" podUID="d4bc4192-d18c-4dea-b0be-ce01eee9ad54" containerName="extract" Mar 18 09:11:28.155379 master-0 kubenswrapper[28766]: I0318 09:11:28.155347 28766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-storage/lvms-operator-5bdfbd4c57-vn2r8"
Mar 18 09:11:28.161722 master-0 kubenswrapper[28766]: I0318 09:11:28.161113 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-storage"/"openshift-service-ca.crt"
Mar 18 09:11:28.161722 master-0 kubenswrapper[28766]: I0318 09:11:28.161134 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-storage"/"kube-root-ca.crt"
Mar 18 09:11:28.161722 master-0 kubenswrapper[28766]: I0318 09:11:28.161464 28766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-storage"/"lvms-operator-webhook-server-cert"
Mar 18 09:11:28.161722 master-0 kubenswrapper[28766]: I0318 09:11:28.161608 28766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-storage"/"lvms-operator-metrics-cert"
Mar 18 09:11:28.163621 master-0 kubenswrapper[28766]: I0318 09:11:28.163582 28766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-storage"/"lvms-operator-service-cert"
Mar 18 09:11:28.177347 master-0 kubenswrapper[28766]: I0318 09:11:28.177239 28766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-storage/lvms-operator-5bdfbd4c57-vn2r8"]
Mar 18 09:11:28.277269 master-0 kubenswrapper[28766]: I0318 09:11:28.277188 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sn24r\" (UniqueName: \"kubernetes.io/projected/22e6caaa-74bd-42d6-b2b6-21900a13bbb8-kube-api-access-sn24r\") pod \"lvms-operator-5bdfbd4c57-vn2r8\" (UID: \"22e6caaa-74bd-42d6-b2b6-21900a13bbb8\") " pod="openshift-storage/lvms-operator-5bdfbd4c57-vn2r8"
Mar 18 09:11:28.277269 master-0 kubenswrapper[28766]: I0318 09:11:28.277271 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/22e6caaa-74bd-42d6-b2b6-21900a13bbb8-socket-dir\") pod \"lvms-operator-5bdfbd4c57-vn2r8\" (UID: \"22e6caaa-74bd-42d6-b2b6-21900a13bbb8\") " pod="openshift-storage/lvms-operator-5bdfbd4c57-vn2r8"
Mar 18 09:11:28.277710 master-0 kubenswrapper[28766]: I0318 09:11:28.277331 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/22e6caaa-74bd-42d6-b2b6-21900a13bbb8-apiservice-cert\") pod \"lvms-operator-5bdfbd4c57-vn2r8\" (UID: \"22e6caaa-74bd-42d6-b2b6-21900a13bbb8\") " pod="openshift-storage/lvms-operator-5bdfbd4c57-vn2r8"
Mar 18 09:11:28.277710 master-0 kubenswrapper[28766]: I0318 09:11:28.277427 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-cert\" (UniqueName: \"kubernetes.io/secret/22e6caaa-74bd-42d6-b2b6-21900a13bbb8-metrics-cert\") pod \"lvms-operator-5bdfbd4c57-vn2r8\" (UID: \"22e6caaa-74bd-42d6-b2b6-21900a13bbb8\") " pod="openshift-storage/lvms-operator-5bdfbd4c57-vn2r8"
Mar 18 09:11:28.277710 master-0 kubenswrapper[28766]: I0318 09:11:28.277478 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/22e6caaa-74bd-42d6-b2b6-21900a13bbb8-webhook-cert\") pod \"lvms-operator-5bdfbd4c57-vn2r8\" (UID: \"22e6caaa-74bd-42d6-b2b6-21900a13bbb8\") " pod="openshift-storage/lvms-operator-5bdfbd4c57-vn2r8"
Mar 18 09:11:28.379825 master-0 kubenswrapper[28766]: I0318 09:11:28.379738 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-cert\" (UniqueName: \"kubernetes.io/secret/22e6caaa-74bd-42d6-b2b6-21900a13bbb8-metrics-cert\") pod \"lvms-operator-5bdfbd4c57-vn2r8\" (UID: \"22e6caaa-74bd-42d6-b2b6-21900a13bbb8\") " pod="openshift-storage/lvms-operator-5bdfbd4c57-vn2r8"
Mar 18 09:11:28.380253 master-0 kubenswrapper[28766]: I0318 09:11:28.380232 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/22e6caaa-74bd-42d6-b2b6-21900a13bbb8-webhook-cert\") pod \"lvms-operator-5bdfbd4c57-vn2r8\" (UID: \"22e6caaa-74bd-42d6-b2b6-21900a13bbb8\") " pod="openshift-storage/lvms-operator-5bdfbd4c57-vn2r8"
Mar 18 09:11:28.380429 master-0 kubenswrapper[28766]: I0318 09:11:28.380409 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sn24r\" (UniqueName: \"kubernetes.io/projected/22e6caaa-74bd-42d6-b2b6-21900a13bbb8-kube-api-access-sn24r\") pod \"lvms-operator-5bdfbd4c57-vn2r8\" (UID: \"22e6caaa-74bd-42d6-b2b6-21900a13bbb8\") " pod="openshift-storage/lvms-operator-5bdfbd4c57-vn2r8"
Mar 18 09:11:28.380574 master-0 kubenswrapper[28766]: I0318 09:11:28.380555 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/22e6caaa-74bd-42d6-b2b6-21900a13bbb8-socket-dir\") pod \"lvms-operator-5bdfbd4c57-vn2r8\" (UID: \"22e6caaa-74bd-42d6-b2b6-21900a13bbb8\") " pod="openshift-storage/lvms-operator-5bdfbd4c57-vn2r8"
Mar 18 09:11:28.380757 master-0 kubenswrapper[28766]: I0318 09:11:28.380734 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/22e6caaa-74bd-42d6-b2b6-21900a13bbb8-apiservice-cert\") pod \"lvms-operator-5bdfbd4c57-vn2r8\" (UID: \"22e6caaa-74bd-42d6-b2b6-21900a13bbb8\") " pod="openshift-storage/lvms-operator-5bdfbd4c57-vn2r8"
Mar 18 09:11:28.381599 master-0 kubenswrapper[28766]: I0318 09:11:28.381506 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/22e6caaa-74bd-42d6-b2b6-21900a13bbb8-socket-dir\") pod \"lvms-operator-5bdfbd4c57-vn2r8\" (UID: \"22e6caaa-74bd-42d6-b2b6-21900a13bbb8\") " pod="openshift-storage/lvms-operator-5bdfbd4c57-vn2r8"
Mar 18 09:11:28.383836 master-0 kubenswrapper[28766]: I0318 09:11:28.383753 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/22e6caaa-74bd-42d6-b2b6-21900a13bbb8-webhook-cert\") pod \"lvms-operator-5bdfbd4c57-vn2r8\" (UID: \"22e6caaa-74bd-42d6-b2b6-21900a13bbb8\") " pod="openshift-storage/lvms-operator-5bdfbd4c57-vn2r8"
Mar 18 09:11:28.384229 master-0 kubenswrapper[28766]: I0318 09:11:28.384177 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/22e6caaa-74bd-42d6-b2b6-21900a13bbb8-apiservice-cert\") pod \"lvms-operator-5bdfbd4c57-vn2r8\" (UID: \"22e6caaa-74bd-42d6-b2b6-21900a13bbb8\") " pod="openshift-storage/lvms-operator-5bdfbd4c57-vn2r8"
Mar 18 09:11:28.384229 master-0 kubenswrapper[28766]: I0318 09:11:28.384218 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-cert\" (UniqueName: \"kubernetes.io/secret/22e6caaa-74bd-42d6-b2b6-21900a13bbb8-metrics-cert\") pod \"lvms-operator-5bdfbd4c57-vn2r8\" (UID: \"22e6caaa-74bd-42d6-b2b6-21900a13bbb8\") " pod="openshift-storage/lvms-operator-5bdfbd4c57-vn2r8"
Mar 18 09:11:28.402650 master-0 kubenswrapper[28766]: I0318 09:11:28.402586 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sn24r\" (UniqueName: \"kubernetes.io/projected/22e6caaa-74bd-42d6-b2b6-21900a13bbb8-kube-api-access-sn24r\") pod \"lvms-operator-5bdfbd4c57-vn2r8\" (UID: \"22e6caaa-74bd-42d6-b2b6-21900a13bbb8\") " pod="openshift-storage/lvms-operator-5bdfbd4c57-vn2r8"
Mar 18 09:11:28.470943 master-0 kubenswrapper[28766]: I0318 09:11:28.470721 28766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-storage/lvms-operator-5bdfbd4c57-vn2r8"
Mar 18 09:11:28.949927 master-0 kubenswrapper[28766]: I0318 09:11:28.949833 28766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-storage/lvms-operator-5bdfbd4c57-vn2r8"]
Mar 18 09:11:28.954999 master-0 kubenswrapper[28766]: W0318 09:11:28.954954 28766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod22e6caaa_74bd_42d6_b2b6_21900a13bbb8.slice/crio-aa4b6af8c4cbc96e97cac377e952b2a849441c4fd21fbb2ccfed9beb49de113a WatchSource:0}: Error finding container aa4b6af8c4cbc96e97cac377e952b2a849441c4fd21fbb2ccfed9beb49de113a: Status 404 returned error can't find the container with id aa4b6af8c4cbc96e97cac377e952b2a849441c4fd21fbb2ccfed9beb49de113a
Mar 18 09:11:29.070427 master-0 kubenswrapper[28766]: I0318 09:11:29.070349 28766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-storage/lvms-operator-5bdfbd4c57-vn2r8" event={"ID":"22e6caaa-74bd-42d6-b2b6-21900a13bbb8","Type":"ContainerStarted","Data":"aa4b6af8c4cbc96e97cac377e952b2a849441c4fd21fbb2ccfed9beb49de113a"}
Mar 18 09:11:35.140985 master-0 kubenswrapper[28766]: I0318 09:11:35.140905 28766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-storage/lvms-operator-5bdfbd4c57-vn2r8" event={"ID":"22e6caaa-74bd-42d6-b2b6-21900a13bbb8","Type":"ContainerStarted","Data":"fe2bac5b2e12714ec234a98ca800164f9f6c2a6aa0372fb87056f7a5d787e4c8"}
Mar 18 09:11:35.141875 master-0 kubenswrapper[28766]: I0318 09:11:35.141447 28766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-storage/lvms-operator-5bdfbd4c57-vn2r8"
Mar 18 09:11:35.148472 master-0 kubenswrapper[28766]: I0318 09:11:35.148401 28766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-storage/lvms-operator-5bdfbd4c57-vn2r8"
Mar 18 09:11:35.168809 master-0 kubenswrapper[28766]: I0318 09:11:35.168689 28766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-storage/lvms-operator-5bdfbd4c57-vn2r8" podStartSLOduration=1.6434027599999999 podStartE2EDuration="7.168670378s" podCreationTimestamp="2026-03-18 09:11:28 +0000 UTC" firstStartedPulling="2026-03-18 09:11:28.95766166 +0000 UTC m=+441.971920326" lastFinishedPulling="2026-03-18 09:11:34.482929268 +0000 UTC m=+447.497187944" observedRunningTime="2026-03-18 09:11:35.166519972 +0000 UTC m=+448.180778648" watchObservedRunningTime="2026-03-18 09:11:35.168670378 +0000 UTC m=+448.182929044"
Mar 18 09:11:38.635235 master-0 kubenswrapper[28766]: I0318 09:11:38.635107 28766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e59r9cm"]
Mar 18 09:11:38.638085 master-0 kubenswrapper[28766]: I0318 09:11:38.638024 28766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e59r9cm"
Mar 18 09:11:38.649350 master-0 kubenswrapper[28766]: I0318 09:11:38.649294 28766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e59r9cm"]
Mar 18 09:11:38.666241 master-0 kubenswrapper[28766]: I0318 09:11:38.666040 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cm7p7\" (UniqueName: \"kubernetes.io/projected/6497aa9d-7ede-44de-8eb0-3896e4fb291a-kube-api-access-cm7p7\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e59r9cm\" (UID: \"6497aa9d-7ede-44de-8eb0-3896e4fb291a\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e59r9cm"
Mar 18 09:11:38.666646 master-0 kubenswrapper[28766]: I0318 09:11:38.666618 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6497aa9d-7ede-44de-8eb0-3896e4fb291a-util\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e59r9cm\" (UID: \"6497aa9d-7ede-44de-8eb0-3896e4fb291a\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e59r9cm"
Mar 18 09:11:38.666903 master-0 kubenswrapper[28766]: I0318 09:11:38.666835 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6497aa9d-7ede-44de-8eb0-3896e4fb291a-bundle\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e59r9cm\" (UID: \"6497aa9d-7ede-44de-8eb0-3896e4fb291a\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e59r9cm"
Mar 18 09:11:38.768338 master-0 kubenswrapper[28766]: I0318 09:11:38.768257 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6497aa9d-7ede-44de-8eb0-3896e4fb291a-bundle\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e59r9cm\" (UID: \"6497aa9d-7ede-44de-8eb0-3896e4fb291a\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e59r9cm"
Mar 18 09:11:38.768338 master-0 kubenswrapper[28766]: I0318 09:11:38.768325 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cm7p7\" (UniqueName: \"kubernetes.io/projected/6497aa9d-7ede-44de-8eb0-3896e4fb291a-kube-api-access-cm7p7\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e59r9cm\" (UID: \"6497aa9d-7ede-44de-8eb0-3896e4fb291a\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e59r9cm"
Mar 18 09:11:38.768602 master-0 kubenswrapper[28766]: I0318 09:11:38.768398 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6497aa9d-7ede-44de-8eb0-3896e4fb291a-util\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e59r9cm\" (UID: \"6497aa9d-7ede-44de-8eb0-3896e4fb291a\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e59r9cm"
Mar 18 09:11:38.769086 master-0 kubenswrapper[28766]: I0318 09:11:38.769054 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6497aa9d-7ede-44de-8eb0-3896e4fb291a-util\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e59r9cm\" (UID: \"6497aa9d-7ede-44de-8eb0-3896e4fb291a\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e59r9cm"
Mar 18 09:11:38.769562 master-0 kubenswrapper[28766]: I0318 09:11:38.769533 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6497aa9d-7ede-44de-8eb0-3896e4fb291a-bundle\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e59r9cm\" (UID: \"6497aa9d-7ede-44de-8eb0-3896e4fb291a\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e59r9cm"
Mar 18 09:11:38.787540 master-0 kubenswrapper[28766]: I0318 09:11:38.787501 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cm7p7\" (UniqueName: \"kubernetes.io/projected/6497aa9d-7ede-44de-8eb0-3896e4fb291a-kube-api-access-cm7p7\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e59r9cm\" (UID: \"6497aa9d-7ede-44de-8eb0-3896e4fb291a\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e59r9cm"
Mar 18 09:11:38.960390 master-0 kubenswrapper[28766]: I0318 09:11:38.960285 28766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e59r9cm"
Mar 18 09:11:39.403424 master-0 kubenswrapper[28766]: I0318 09:11:39.403388 28766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e59r9cm"]
Mar 18 09:11:40.184131 master-0 kubenswrapper[28766]: I0318 09:11:40.183961 28766 generic.go:334] "Generic (PLEG): container finished" podID="6497aa9d-7ede-44de-8eb0-3896e4fb291a" containerID="ca4aa23b28169d5a4c8107df11d0b312167ad08d5105b09175e442fa0e77beeb" exitCode=0
Mar 18 09:11:40.184131 master-0 kubenswrapper[28766]: I0318 09:11:40.184043 28766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e59r9cm" event={"ID":"6497aa9d-7ede-44de-8eb0-3896e4fb291a","Type":"ContainerDied","Data":"ca4aa23b28169d5a4c8107df11d0b312167ad08d5105b09175e442fa0e77beeb"}
Mar 18 09:11:40.184131 master-0 kubenswrapper[28766]: I0318 09:11:40.184099 28766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e59r9cm" event={"ID":"6497aa9d-7ede-44de-8eb0-3896e4fb291a","Type":"ContainerStarted","Data":"a8d92165ce5ead038f9721cfbe3dd99b6e4d764a2b199553c9550776dd28752c"}
Mar 18 09:11:40.264681 master-0 kubenswrapper[28766]: I0318 09:11:40.264596 28766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c16c4sf"]
Mar 18 09:11:40.268192 master-0 kubenswrapper[28766]: I0318 09:11:40.268108 28766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c16c4sf"
Mar 18 09:11:40.274606 master-0 kubenswrapper[28766]: I0318 09:11:40.274545 28766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c16c4sf"]
Mar 18 09:11:40.402952 master-0 kubenswrapper[28766]: I0318 09:11:40.402841 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zhdvj\" (UniqueName: \"kubernetes.io/projected/8608d755-8c25-49f6-bbd6-5d56a69b3ee5-kube-api-access-zhdvj\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c16c4sf\" (UID: \"8608d755-8c25-49f6-bbd6-5d56a69b3ee5\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c16c4sf"
Mar 18 09:11:40.403230 master-0 kubenswrapper[28766]: I0318 09:11:40.403144 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8608d755-8c25-49f6-bbd6-5d56a69b3ee5-util\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c16c4sf\" (UID: \"8608d755-8c25-49f6-bbd6-5d56a69b3ee5\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c16c4sf"
Mar 18 09:11:40.403426 master-0 kubenswrapper[28766]: I0318 09:11:40.403403 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8608d755-8c25-49f6-bbd6-5d56a69b3ee5-bundle\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c16c4sf\" (UID: \"8608d755-8c25-49f6-bbd6-5d56a69b3ee5\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c16c4sf"
Mar 18 09:11:40.504629 master-0 kubenswrapper[28766]: I0318 09:11:40.504549 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8608d755-8c25-49f6-bbd6-5d56a69b3ee5-util\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c16c4sf\" (UID: \"8608d755-8c25-49f6-bbd6-5d56a69b3ee5\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c16c4sf"
Mar 18 09:11:40.506225 master-0 kubenswrapper[28766]: I0318 09:11:40.504699 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8608d755-8c25-49f6-bbd6-5d56a69b3ee5-bundle\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c16c4sf\" (UID: \"8608d755-8c25-49f6-bbd6-5d56a69b3ee5\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c16c4sf"
Mar 18 09:11:40.506293 master-0 kubenswrapper[28766]: I0318 09:11:40.506275 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zhdvj\" (UniqueName: \"kubernetes.io/projected/8608d755-8c25-49f6-bbd6-5d56a69b3ee5-kube-api-access-zhdvj\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c16c4sf\" (UID: \"8608d755-8c25-49f6-bbd6-5d56a69b3ee5\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c16c4sf"
Mar 18 09:11:40.506663 master-0 kubenswrapper[28766]: I0318 09:11:40.506620 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8608d755-8c25-49f6-bbd6-5d56a69b3ee5-util\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c16c4sf\" (UID: \"8608d755-8c25-49f6-bbd6-5d56a69b3ee5\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c16c4sf"
Mar 18 09:11:40.512116 master-0 kubenswrapper[28766]: I0318 09:11:40.511882 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8608d755-8c25-49f6-bbd6-5d56a69b3ee5-bundle\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c16c4sf\" (UID: \"8608d755-8c25-49f6-bbd6-5d56a69b3ee5\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c16c4sf"
Mar 18 09:11:40.522662 master-0 kubenswrapper[28766]: I0318 09:11:40.522601 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zhdvj\" (UniqueName: \"kubernetes.io/projected/8608d755-8c25-49f6-bbd6-5d56a69b3ee5-kube-api-access-zhdvj\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c16c4sf\" (UID: \"8608d755-8c25-49f6-bbd6-5d56a69b3ee5\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c16c4sf"
Mar 18 09:11:40.590108 master-0 kubenswrapper[28766]: I0318 09:11:40.590036 28766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c16c4sf"
Mar 18 09:11:40.846190 master-0 kubenswrapper[28766]: I0318 09:11:40.846035 28766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874v89b5"]
Mar 18 09:11:40.847549 master-0 kubenswrapper[28766]: I0318 09:11:40.847517 28766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874v89b5"
Mar 18 09:11:40.854243 master-0 kubenswrapper[28766]: I0318 09:11:40.854186 28766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874v89b5"]
Mar 18 09:11:41.026063 master-0 kubenswrapper[28766]: I0318 09:11:41.025967 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1debf5d5-671e-448c-afd3-d3c2733215c3-bundle\") pod \"1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874v89b5\" (UID: \"1debf5d5-671e-448c-afd3-d3c2733215c3\") " pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874v89b5"
Mar 18 09:11:41.026314 master-0 kubenswrapper[28766]: I0318 09:11:41.026183 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1debf5d5-671e-448c-afd3-d3c2733215c3-util\") pod \"1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874v89b5\" (UID: \"1debf5d5-671e-448c-afd3-d3c2733215c3\") " pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874v89b5"
Mar 18 09:11:41.026314 master-0 kubenswrapper[28766]: I0318 09:11:41.026252 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xhq79\" (UniqueName: \"kubernetes.io/projected/1debf5d5-671e-448c-afd3-d3c2733215c3-kube-api-access-xhq79\") pod \"1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874v89b5\" (UID: \"1debf5d5-671e-448c-afd3-d3c2733215c3\") " pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874v89b5"
Mar 18 09:11:41.050286 master-0 kubenswrapper[28766]: I0318 09:11:41.050228 28766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c16c4sf"]
Mar 18 09:11:41.128490 master-0 kubenswrapper[28766]: I0318 09:11:41.128410 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1debf5d5-671e-448c-afd3-d3c2733215c3-util\") pod \"1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874v89b5\" (UID: \"1debf5d5-671e-448c-afd3-d3c2733215c3\") " pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874v89b5"
Mar 18 09:11:41.128679 master-0 kubenswrapper[28766]: I0318 09:11:41.128550 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xhq79\" (UniqueName: \"kubernetes.io/projected/1debf5d5-671e-448c-afd3-d3c2733215c3-kube-api-access-xhq79\") pod \"1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874v89b5\" (UID: \"1debf5d5-671e-448c-afd3-d3c2733215c3\") " pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874v89b5"
Mar 18 09:11:41.128792 master-0 kubenswrapper[28766]: I0318 09:11:41.128753 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1debf5d5-671e-448c-afd3-d3c2733215c3-bundle\") pod \"1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874v89b5\" (UID: \"1debf5d5-671e-448c-afd3-d3c2733215c3\") " pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874v89b5"
Mar 18 09:11:41.129324 master-0 kubenswrapper[28766]: I0318 09:11:41.129256 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1debf5d5-671e-448c-afd3-d3c2733215c3-util\") pod \"1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874v89b5\" (UID: \"1debf5d5-671e-448c-afd3-d3c2733215c3\") " pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874v89b5"
Mar 18 09:11:41.129626 master-0 kubenswrapper[28766]: I0318 09:11:41.129272 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1debf5d5-671e-448c-afd3-d3c2733215c3-bundle\") pod \"1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874v89b5\" (UID: \"1debf5d5-671e-448c-afd3-d3c2733215c3\") " pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874v89b5"
Mar 18 09:11:41.157739 master-0 kubenswrapper[28766]: I0318 09:11:41.157698 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xhq79\" (UniqueName: \"kubernetes.io/projected/1debf5d5-671e-448c-afd3-d3c2733215c3-kube-api-access-xhq79\") pod \"1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874v89b5\" (UID: \"1debf5d5-671e-448c-afd3-d3c2733215c3\") " pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874v89b5"
Mar 18 09:11:41.183002 master-0 kubenswrapper[28766]: I0318 09:11:41.174362 28766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874v89b5"
Mar 18 09:11:41.212049 master-0 kubenswrapper[28766]: I0318 09:11:41.211993 28766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c16c4sf" event={"ID":"8608d755-8c25-49f6-bbd6-5d56a69b3ee5","Type":"ContainerStarted","Data":"90bcc9fd43529e98ede103deb141626cfe43a4d411aebfecf37ab8c955efd998"}
Mar 18 09:11:41.653987 master-0 kubenswrapper[28766]: I0318 09:11:41.653938 28766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874v89b5"]
Mar 18 09:11:41.655609 master-0 kubenswrapper[28766]: W0318 09:11:41.655573 28766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1debf5d5_671e_448c_afd3_d3c2733215c3.slice/crio-d56a3e501094f24f1cbeb59e0e6c04a5a02ff86023504b47817e4def614c9c1a WatchSource:0}: Error finding container d56a3e501094f24f1cbeb59e0e6c04a5a02ff86023504b47817e4def614c9c1a: Status 404 returned error can't find the container with id d56a3e501094f24f1cbeb59e0e6c04a5a02ff86023504b47817e4def614c9c1a
Mar 18 09:11:42.223733 master-0 kubenswrapper[28766]: I0318 09:11:42.223678 28766 generic.go:334] "Generic (PLEG): container finished" podID="8608d755-8c25-49f6-bbd6-5d56a69b3ee5" containerID="6d4a21d1917e2590723e5caa0fd77d70cdfe354b1c49b938ce899509b4eda86a" exitCode=0
Mar 18 09:11:42.223733 master-0 kubenswrapper[28766]: I0318 09:11:42.223766 28766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c16c4sf" event={"ID":"8608d755-8c25-49f6-bbd6-5d56a69b3ee5","Type":"ContainerDied","Data":"6d4a21d1917e2590723e5caa0fd77d70cdfe354b1c49b938ce899509b4eda86a"}
Mar 18 09:11:42.227970 master-0 kubenswrapper[28766]: I0318 09:11:42.227888 28766 generic.go:334] "Generic (PLEG): container finished" podID="1debf5d5-671e-448c-afd3-d3c2733215c3" containerID="efc68c51e25393bd810d443205ff89a52aff61d9356f1a3fe8b11a2c9819c9e0" exitCode=0
Mar 18 09:11:42.227970 master-0 kubenswrapper[28766]: I0318 09:11:42.227948 28766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874v89b5" event={"ID":"1debf5d5-671e-448c-afd3-d3c2733215c3","Type":"ContainerDied","Data":"efc68c51e25393bd810d443205ff89a52aff61d9356f1a3fe8b11a2c9819c9e0"}
Mar 18 09:11:42.228395 master-0 kubenswrapper[28766]: I0318 09:11:42.227982 28766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874v89b5" event={"ID":"1debf5d5-671e-448c-afd3-d3c2733215c3","Type":"ContainerStarted","Data":"d56a3e501094f24f1cbeb59e0e6c04a5a02ff86023504b47817e4def614c9c1a"}
Mar 18 09:11:45.260287 master-0 kubenswrapper[28766]: I0318 09:11:45.260212 28766 generic.go:334] "Generic (PLEG): container finished" podID="8608d755-8c25-49f6-bbd6-5d56a69b3ee5" containerID="3f8adb81a2cf667f34cfd0ff22aaa107b45ec711ef848ef1e45cba71703c0d0f" exitCode=0
Mar 18 09:11:45.261083 master-0 kubenswrapper[28766]: I0318 09:11:45.260296 28766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c16c4sf" event={"ID":"8608d755-8c25-49f6-bbd6-5d56a69b3ee5","Type":"ContainerDied","Data":"3f8adb81a2cf667f34cfd0ff22aaa107b45ec711ef848ef1e45cba71703c0d0f"}
Mar 18 09:11:45.263065 master-0 kubenswrapper[28766]: I0318 09:11:45.263029 28766 generic.go:334] "Generic (PLEG): container finished" podID="1debf5d5-671e-448c-afd3-d3c2733215c3" containerID="7bff1c1d83aefa624e489283bb2f47b4def1a48dd39bf566d898cbeda24192e9" exitCode=0
Mar 18 09:11:45.263335 master-0 kubenswrapper[28766]: I0318 09:11:45.263095 28766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874v89b5" event={"ID":"1debf5d5-671e-448c-afd3-d3c2733215c3","Type":"ContainerDied","Data":"7bff1c1d83aefa624e489283bb2f47b4def1a48dd39bf566d898cbeda24192e9"}
Mar 18 09:11:45.266881 master-0 kubenswrapper[28766]: I0318 09:11:45.266820 28766 generic.go:334] "Generic (PLEG): container finished" podID="6497aa9d-7ede-44de-8eb0-3896e4fb291a" containerID="b3d4961e37e3ee7cf98a7c6bbe8a92013f8bf7ecb95a182622e8f86ea5cccb8c" exitCode=0
Mar 18 09:11:45.266958 master-0 kubenswrapper[28766]: I0318 09:11:45.266915 28766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e59r9cm" event={"ID":"6497aa9d-7ede-44de-8eb0-3896e4fb291a","Type":"ContainerDied","Data":"b3d4961e37e3ee7cf98a7c6bbe8a92013f8bf7ecb95a182622e8f86ea5cccb8c"}
Mar 18 09:11:46.281269 master-0 kubenswrapper[28766]: I0318 09:11:46.281177 28766 generic.go:334] "Generic (PLEG): container finished" podID="8608d755-8c25-49f6-bbd6-5d56a69b3ee5" containerID="f1aff5f23ee7bf45bc84112f571ebddef38ef0aac0c3faee316a2db4bfe0bacc" exitCode=0
Mar 18 09:11:46.282127 master-0 kubenswrapper[28766]: I0318 09:11:46.281314 28766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c16c4sf" event={"ID":"8608d755-8c25-49f6-bbd6-5d56a69b3ee5","Type":"ContainerDied","Data":"f1aff5f23ee7bf45bc84112f571ebddef38ef0aac0c3faee316a2db4bfe0bacc"}
Mar 18 09:11:46.285072 master-0 kubenswrapper[28766]: I0318 09:11:46.284993 28766 generic.go:334] "Generic (PLEG): container finished" podID="1debf5d5-671e-448c-afd3-d3c2733215c3" containerID="44ea91ac212717f203690db5d1944c141f83e97588d86c9cfad6adf5a349ec56" exitCode=0
Mar 18 09:11:46.285232 master-0 kubenswrapper[28766]: I0318 09:11:46.285119 28766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874v89b5" event={"ID":"1debf5d5-671e-448c-afd3-d3c2733215c3","Type":"ContainerDied","Data":"44ea91ac212717f203690db5d1944c141f83e97588d86c9cfad6adf5a349ec56"}
Mar 18 09:11:46.288338 master-0 kubenswrapper[28766]: I0318 09:11:46.288273 28766 generic.go:334] "Generic (PLEG): container finished" podID="6497aa9d-7ede-44de-8eb0-3896e4fb291a" containerID="3fd81b173c39a6a7c81a4758ed0c4e00e3f2ee9fe8c42d0e01dd3c0860cec6ef" exitCode=0
Mar 18 09:11:46.288338 master-0 kubenswrapper[28766]: I0318 09:11:46.288321 28766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e59r9cm" event={"ID":"6497aa9d-7ede-44de-8eb0-3896e4fb291a","Type":"ContainerDied","Data":"3fd81b173c39a6a7c81a4758ed0c4e00e3f2ee9fe8c42d0e01dd3c0860cec6ef"}
Mar 18 09:11:47.179502 master-0 kubenswrapper[28766]: I0318 09:11:47.179407 28766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-6c699958d9-6qrdl" podUID="c0d14eb4-043b-4c56-a271-261d96a2e4f7" containerName="console" containerID="cri-o://6fb4e78dc34b66ddd402e5cbfa9e341c7c34ed73015c9ed632573c6a1068b4f9" gracePeriod=15
Mar 18 09:11:47.846061 master-0 kubenswrapper[28766]: I0318 09:11:47.842120 28766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c16c4sf"
Mar 18 09:11:47.961143 master-0 kubenswrapper[28766]: I0318 09:11:47.961107 28766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e59r9cm"
Mar 18 09:11:47.965810 master-0 kubenswrapper[28766]: I0318 09:11:47.965763 28766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874v89b5"
Mar 18 09:11:47.967421 master-0 kubenswrapper[28766]: I0318 09:11:47.967380 28766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zhdvj\" (UniqueName: \"kubernetes.io/projected/8608d755-8c25-49f6-bbd6-5d56a69b3ee5-kube-api-access-zhdvj\") pod \"8608d755-8c25-49f6-bbd6-5d56a69b3ee5\" (UID: \"8608d755-8c25-49f6-bbd6-5d56a69b3ee5\") "
Mar 18 09:11:47.967501 master-0 kubenswrapper[28766]: I0318 09:11:47.967459 28766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8608d755-8c25-49f6-bbd6-5d56a69b3ee5-util\") pod \"8608d755-8c25-49f6-bbd6-5d56a69b3ee5\" (UID: \"8608d755-8c25-49f6-bbd6-5d56a69b3ee5\") "
Mar 18 09:11:47.971896 master-0 kubenswrapper[28766]: I0318 09:11:47.967626 28766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8608d755-8c25-49f6-bbd6-5d56a69b3ee5-bundle\") pod \"8608d755-8c25-49f6-bbd6-5d56a69b3ee5\" (UID: \"8608d755-8c25-49f6-bbd6-5d56a69b3ee5\") "
Mar 18 09:11:47.971896 master-0 kubenswrapper[28766]: I0318 09:11:47.968534 28766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8608d755-8c25-49f6-bbd6-5d56a69b3ee5-bundle" (OuterVolumeSpecName: "bundle") pod "8608d755-8c25-49f6-bbd6-5d56a69b3ee5" (UID: "8608d755-8c25-49f6-bbd6-5d56a69b3ee5"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 18 09:11:47.975144 master-0 kubenswrapper[28766]: I0318 09:11:47.975074 28766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8608d755-8c25-49f6-bbd6-5d56a69b3ee5-kube-api-access-zhdvj" (OuterVolumeSpecName: "kube-api-access-zhdvj") pod "8608d755-8c25-49f6-bbd6-5d56a69b3ee5" (UID: "8608d755-8c25-49f6-bbd6-5d56a69b3ee5"). InnerVolumeSpecName "kube-api-access-zhdvj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 18 09:11:47.978638 master-0 kubenswrapper[28766]: I0318 09:11:47.978593 28766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8608d755-8c25-49f6-bbd6-5d56a69b3ee5-util" (OuterVolumeSpecName: "util") pod "8608d755-8c25-49f6-bbd6-5d56a69b3ee5" (UID: "8608d755-8c25-49f6-bbd6-5d56a69b3ee5"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 18 09:11:47.982271 master-0 kubenswrapper[28766]: I0318 09:11:47.982228 28766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-6c699958d9-6qrdl_c0d14eb4-043b-4c56-a271-261d96a2e4f7/console/0.log"
Mar 18 09:11:47.982358 master-0 kubenswrapper[28766]: I0318 09:11:47.982315 28766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-6c699958d9-6qrdl"
Mar 18 09:11:48.068548 master-0 kubenswrapper[28766]: I0318 09:11:48.068471 28766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1debf5d5-671e-448c-afd3-d3c2733215c3-bundle\") pod \"1debf5d5-671e-448c-afd3-d3c2733215c3\" (UID: \"1debf5d5-671e-448c-afd3-d3c2733215c3\") "
Mar 18 09:11:48.068773 master-0 kubenswrapper[28766]: I0318 09:11:48.068569 28766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-szx55\" (UniqueName: \"kubernetes.io/projected/c0d14eb4-043b-4c56-a271-261d96a2e4f7-kube-api-access-szx55\") pod \"c0d14eb4-043b-4c56-a271-261d96a2e4f7\" (UID: \"c0d14eb4-043b-4c56-a271-261d96a2e4f7\") "
Mar 18 09:11:48.068773 master-0 kubenswrapper[28766]: I0318 09:11:48.068599 28766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xhq79\" (UniqueName: \"kubernetes.io/projected/1debf5d5-671e-448c-afd3-d3c2733215c3-kube-api-access-xhq79\") pod \"1debf5d5-671e-448c-afd3-d3c2733215c3\" (UID: \"1debf5d5-671e-448c-afd3-d3c2733215c3\") "
Mar 18 09:11:48.068773 master-0 kubenswrapper[28766]: I0318 09:11:48.068624 28766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6497aa9d-7ede-44de-8eb0-3896e4fb291a-bundle\") pod \"6497aa9d-7ede-44de-8eb0-3896e4fb291a\" (UID: \"6497aa9d-7ede-44de-8eb0-3896e4fb291a\") "
Mar 18 09:11:48.068773 master-0 kubenswrapper[28766]: I0318 09:11:48.068657 28766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/c0d14eb4-043b-4c56-a271-261d96a2e4f7-console-config\") pod \"c0d14eb4-043b-4c56-a271-261d96a2e4f7\" (UID: \"c0d14eb4-043b-4c56-a271-261d96a2e4f7\") "
Mar 18 09:11:48.068773 master-0 kubenswrapper[28766]: I0318 09:11:48.068681 28766
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/c0d14eb4-043b-4c56-a271-261d96a2e4f7-console-oauth-config\") pod \"c0d14eb4-043b-4c56-a271-261d96a2e4f7\" (UID: \"c0d14eb4-043b-4c56-a271-261d96a2e4f7\") " Mar 18 09:11:48.068773 master-0 kubenswrapper[28766]: I0318 09:11:48.068732 28766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6497aa9d-7ede-44de-8eb0-3896e4fb291a-util\") pod \"6497aa9d-7ede-44de-8eb0-3896e4fb291a\" (UID: \"6497aa9d-7ede-44de-8eb0-3896e4fb291a\") " Mar 18 09:11:48.069007 master-0 kubenswrapper[28766]: I0318 09:11:48.068778 28766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/c0d14eb4-043b-4c56-a271-261d96a2e4f7-console-serving-cert\") pod \"c0d14eb4-043b-4c56-a271-261d96a2e4f7\" (UID: \"c0d14eb4-043b-4c56-a271-261d96a2e4f7\") " Mar 18 09:11:48.069007 master-0 kubenswrapper[28766]: I0318 09:11:48.068796 28766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/c0d14eb4-043b-4c56-a271-261d96a2e4f7-service-ca\") pod \"c0d14eb4-043b-4c56-a271-261d96a2e4f7\" (UID: \"c0d14eb4-043b-4c56-a271-261d96a2e4f7\") " Mar 18 09:11:48.069007 master-0 kubenswrapper[28766]: I0318 09:11:48.068812 28766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cm7p7\" (UniqueName: \"kubernetes.io/projected/6497aa9d-7ede-44de-8eb0-3896e4fb291a-kube-api-access-cm7p7\") pod \"6497aa9d-7ede-44de-8eb0-3896e4fb291a\" (UID: \"6497aa9d-7ede-44de-8eb0-3896e4fb291a\") " Mar 18 09:11:48.069007 master-0 kubenswrapper[28766]: I0318 09:11:48.068835 28766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/c0d14eb4-043b-4c56-a271-261d96a2e4f7-trusted-ca-bundle\") pod \"c0d14eb4-043b-4c56-a271-261d96a2e4f7\" (UID: \"c0d14eb4-043b-4c56-a271-261d96a2e4f7\") " Mar 18 09:11:48.069007 master-0 kubenswrapper[28766]: I0318 09:11:48.068883 28766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1debf5d5-671e-448c-afd3-d3c2733215c3-util\") pod \"1debf5d5-671e-448c-afd3-d3c2733215c3\" (UID: \"1debf5d5-671e-448c-afd3-d3c2733215c3\") " Mar 18 09:11:48.069007 master-0 kubenswrapper[28766]: I0318 09:11:48.068919 28766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/c0d14eb4-043b-4c56-a271-261d96a2e4f7-oauth-serving-cert\") pod \"c0d14eb4-043b-4c56-a271-261d96a2e4f7\" (UID: \"c0d14eb4-043b-4c56-a271-261d96a2e4f7\") " Mar 18 09:11:48.069266 master-0 kubenswrapper[28766]: I0318 09:11:48.069237 28766 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8608d755-8c25-49f6-bbd6-5d56a69b3ee5-bundle\") on node \"master-0\" DevicePath \"\"" Mar 18 09:11:48.069266 master-0 kubenswrapper[28766]: I0318 09:11:48.069260 28766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zhdvj\" (UniqueName: \"kubernetes.io/projected/8608d755-8c25-49f6-bbd6-5d56a69b3ee5-kube-api-access-zhdvj\") on node \"master-0\" DevicePath \"\"" Mar 18 09:11:48.069335 master-0 kubenswrapper[28766]: I0318 09:11:48.069269 28766 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8608d755-8c25-49f6-bbd6-5d56a69b3ee5-util\") on node \"master-0\" DevicePath \"\"" Mar 18 09:11:48.069731 master-0 kubenswrapper[28766]: I0318 09:11:48.069700 28766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c0d14eb4-043b-4c56-a271-261d96a2e4f7-oauth-serving-cert" (OuterVolumeSpecName: 
"oauth-serving-cert") pod "c0d14eb4-043b-4c56-a271-261d96a2e4f7" (UID: "c0d14eb4-043b-4c56-a271-261d96a2e4f7"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 09:11:48.070248 master-0 kubenswrapper[28766]: I0318 09:11:48.070221 28766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1debf5d5-671e-448c-afd3-d3c2733215c3-bundle" (OuterVolumeSpecName: "bundle") pod "1debf5d5-671e-448c-afd3-d3c2733215c3" (UID: "1debf5d5-671e-448c-afd3-d3c2733215c3"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 18 09:11:48.070772 master-0 kubenswrapper[28766]: I0318 09:11:48.070737 28766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c0d14eb4-043b-4c56-a271-261d96a2e4f7-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "c0d14eb4-043b-4c56-a271-261d96a2e4f7" (UID: "c0d14eb4-043b-4c56-a271-261d96a2e4f7"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 09:11:48.071890 master-0 kubenswrapper[28766]: I0318 09:11:48.071841 28766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6497aa9d-7ede-44de-8eb0-3896e4fb291a-bundle" (OuterVolumeSpecName: "bundle") pod "6497aa9d-7ede-44de-8eb0-3896e4fb291a" (UID: "6497aa9d-7ede-44de-8eb0-3896e4fb291a"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 18 09:11:48.072908 master-0 kubenswrapper[28766]: I0318 09:11:48.072875 28766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6497aa9d-7ede-44de-8eb0-3896e4fb291a-kube-api-access-cm7p7" (OuterVolumeSpecName: "kube-api-access-cm7p7") pod "6497aa9d-7ede-44de-8eb0-3896e4fb291a" (UID: "6497aa9d-7ede-44de-8eb0-3896e4fb291a"). InnerVolumeSpecName "kube-api-access-cm7p7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 09:11:48.073623 master-0 kubenswrapper[28766]: I0318 09:11:48.073606 28766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c0d14eb4-043b-4c56-a271-261d96a2e4f7-service-ca" (OuterVolumeSpecName: "service-ca") pod "c0d14eb4-043b-4c56-a271-261d96a2e4f7" (UID: "c0d14eb4-043b-4c56-a271-261d96a2e4f7"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 09:11:48.073740 master-0 kubenswrapper[28766]: I0318 09:11:48.073626 28766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c0d14eb4-043b-4c56-a271-261d96a2e4f7-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "c0d14eb4-043b-4c56-a271-261d96a2e4f7" (UID: "c0d14eb4-043b-4c56-a271-261d96a2e4f7"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 09:11:48.074108 master-0 kubenswrapper[28766]: I0318 09:11:48.074083 28766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c0d14eb4-043b-4c56-a271-261d96a2e4f7-console-config" (OuterVolumeSpecName: "console-config") pod "c0d14eb4-043b-4c56-a271-261d96a2e4f7" (UID: "c0d14eb4-043b-4c56-a271-261d96a2e4f7"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 09:11:48.075718 master-0 kubenswrapper[28766]: I0318 09:11:48.075698 28766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c0d14eb4-043b-4c56-a271-261d96a2e4f7-kube-api-access-szx55" (OuterVolumeSpecName: "kube-api-access-szx55") pod "c0d14eb4-043b-4c56-a271-261d96a2e4f7" (UID: "c0d14eb4-043b-4c56-a271-261d96a2e4f7"). InnerVolumeSpecName "kube-api-access-szx55". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 09:11:48.076825 master-0 kubenswrapper[28766]: I0318 09:11:48.076749 28766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c0d14eb4-043b-4c56-a271-261d96a2e4f7-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "c0d14eb4-043b-4c56-a271-261d96a2e4f7" (UID: "c0d14eb4-043b-4c56-a271-261d96a2e4f7"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 09:11:48.077502 master-0 kubenswrapper[28766]: I0318 09:11:48.077467 28766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1debf5d5-671e-448c-afd3-d3c2733215c3-kube-api-access-xhq79" (OuterVolumeSpecName: "kube-api-access-xhq79") pod "1debf5d5-671e-448c-afd3-d3c2733215c3" (UID: "1debf5d5-671e-448c-afd3-d3c2733215c3"). InnerVolumeSpecName "kube-api-access-xhq79". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 09:11:48.081206 master-0 kubenswrapper[28766]: I0318 09:11:48.081173 28766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1debf5d5-671e-448c-afd3-d3c2733215c3-util" (OuterVolumeSpecName: "util") pod "1debf5d5-671e-448c-afd3-d3c2733215c3" (UID: "1debf5d5-671e-448c-afd3-d3c2733215c3"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 18 09:11:48.082484 master-0 kubenswrapper[28766]: I0318 09:11:48.082431 28766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6497aa9d-7ede-44de-8eb0-3896e4fb291a-util" (OuterVolumeSpecName: "util") pod "6497aa9d-7ede-44de-8eb0-3896e4fb291a" (UID: "6497aa9d-7ede-44de-8eb0-3896e4fb291a"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 18 09:11:48.171200 master-0 kubenswrapper[28766]: I0318 09:11:48.171060 28766 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/c0d14eb4-043b-4c56-a271-261d96a2e4f7-console-config\") on node \"master-0\" DevicePath \"\"" Mar 18 09:11:48.171200 master-0 kubenswrapper[28766]: I0318 09:11:48.171106 28766 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/c0d14eb4-043b-4c56-a271-261d96a2e4f7-console-oauth-config\") on node \"master-0\" DevicePath \"\"" Mar 18 09:11:48.171200 master-0 kubenswrapper[28766]: I0318 09:11:48.171120 28766 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6497aa9d-7ede-44de-8eb0-3896e4fb291a-util\") on node \"master-0\" DevicePath \"\"" Mar 18 09:11:48.171200 master-0 kubenswrapper[28766]: I0318 09:11:48.171129 28766 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/c0d14eb4-043b-4c56-a271-261d96a2e4f7-console-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 18 09:11:48.171200 master-0 kubenswrapper[28766]: I0318 09:11:48.171138 28766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cm7p7\" (UniqueName: \"kubernetes.io/projected/6497aa9d-7ede-44de-8eb0-3896e4fb291a-kube-api-access-cm7p7\") on node \"master-0\" DevicePath \"\"" Mar 18 09:11:48.171200 master-0 kubenswrapper[28766]: I0318 09:11:48.171146 28766 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/c0d14eb4-043b-4c56-a271-261d96a2e4f7-service-ca\") on node \"master-0\" DevicePath \"\"" Mar 18 09:11:48.171200 master-0 kubenswrapper[28766]: I0318 09:11:48.171154 28766 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/c0d14eb4-043b-4c56-a271-261d96a2e4f7-trusted-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 18 09:11:48.171200 master-0 kubenswrapper[28766]: I0318 09:11:48.171163 28766 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1debf5d5-671e-448c-afd3-d3c2733215c3-util\") on node \"master-0\" DevicePath \"\"" Mar 18 09:11:48.171200 master-0 kubenswrapper[28766]: I0318 09:11:48.171171 28766 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/c0d14eb4-043b-4c56-a271-261d96a2e4f7-oauth-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 18 09:11:48.171200 master-0 kubenswrapper[28766]: I0318 09:11:48.171180 28766 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1debf5d5-671e-448c-afd3-d3c2733215c3-bundle\") on node \"master-0\" DevicePath \"\"" Mar 18 09:11:48.171200 master-0 kubenswrapper[28766]: I0318 09:11:48.171188 28766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-szx55\" (UniqueName: \"kubernetes.io/projected/c0d14eb4-043b-4c56-a271-261d96a2e4f7-kube-api-access-szx55\") on node \"master-0\" DevicePath \"\"" Mar 18 09:11:48.171200 master-0 kubenswrapper[28766]: I0318 09:11:48.171199 28766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xhq79\" (UniqueName: \"kubernetes.io/projected/1debf5d5-671e-448c-afd3-d3c2733215c3-kube-api-access-xhq79\") on node \"master-0\" DevicePath \"\"" Mar 18 09:11:48.171200 master-0 kubenswrapper[28766]: I0318 09:11:48.171207 28766 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6497aa9d-7ede-44de-8eb0-3896e4fb291a-bundle\") on node \"master-0\" DevicePath \"\"" Mar 18 09:11:48.305353 master-0 kubenswrapper[28766]: I0318 09:11:48.305291 28766 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-console_console-6c699958d9-6qrdl_c0d14eb4-043b-4c56-a271-261d96a2e4f7/console/0.log" Mar 18 09:11:48.305631 master-0 kubenswrapper[28766]: I0318 09:11:48.305386 28766 generic.go:334] "Generic (PLEG): container finished" podID="c0d14eb4-043b-4c56-a271-261d96a2e4f7" containerID="6fb4e78dc34b66ddd402e5cbfa9e341c7c34ed73015c9ed632573c6a1068b4f9" exitCode=2 Mar 18 09:11:48.305631 master-0 kubenswrapper[28766]: I0318 09:11:48.305440 28766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-6c699958d9-6qrdl" Mar 18 09:11:48.305631 master-0 kubenswrapper[28766]: I0318 09:11:48.305429 28766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6c699958d9-6qrdl" event={"ID":"c0d14eb4-043b-4c56-a271-261d96a2e4f7","Type":"ContainerDied","Data":"6fb4e78dc34b66ddd402e5cbfa9e341c7c34ed73015c9ed632573c6a1068b4f9"} Mar 18 09:11:48.305631 master-0 kubenswrapper[28766]: I0318 09:11:48.305616 28766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6c699958d9-6qrdl" event={"ID":"c0d14eb4-043b-4c56-a271-261d96a2e4f7","Type":"ContainerDied","Data":"4b6db67b573f5388d6cc3d6aa815dd21ed28bf7fff6be7818875dc57618855d5"} Mar 18 09:11:48.305758 master-0 kubenswrapper[28766]: I0318 09:11:48.305644 28766 scope.go:117] "RemoveContainer" containerID="6fb4e78dc34b66ddd402e5cbfa9e341c7c34ed73015c9ed632573c6a1068b4f9" Mar 18 09:11:48.310703 master-0 kubenswrapper[28766]: I0318 09:11:48.310678 28766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c16c4sf" event={"ID":"8608d755-8c25-49f6-bbd6-5d56a69b3ee5","Type":"ContainerDied","Data":"90bcc9fd43529e98ede103deb141626cfe43a4d411aebfecf37ab8c955efd998"} Mar 18 09:11:48.310770 master-0 kubenswrapper[28766]: I0318 09:11:48.310707 28766 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="90bcc9fd43529e98ede103deb141626cfe43a4d411aebfecf37ab8c955efd998" Mar 18 09:11:48.310806 master-0 kubenswrapper[28766]: I0318 09:11:48.310777 28766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c16c4sf" Mar 18 09:11:48.313579 master-0 kubenswrapper[28766]: I0318 09:11:48.313556 28766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874v89b5" Mar 18 09:11:48.313742 master-0 kubenswrapper[28766]: I0318 09:11:48.313705 28766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874v89b5" event={"ID":"1debf5d5-671e-448c-afd3-d3c2733215c3","Type":"ContainerDied","Data":"d56a3e501094f24f1cbeb59e0e6c04a5a02ff86023504b47817e4def614c9c1a"} Mar 18 09:11:48.313832 master-0 kubenswrapper[28766]: I0318 09:11:48.313815 28766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d56a3e501094f24f1cbeb59e0e6c04a5a02ff86023504b47817e4def614c9c1a" Mar 18 09:11:48.316639 master-0 kubenswrapper[28766]: I0318 09:11:48.316618 28766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e59r9cm" event={"ID":"6497aa9d-7ede-44de-8eb0-3896e4fb291a","Type":"ContainerDied","Data":"a8d92165ce5ead038f9721cfbe3dd99b6e4d764a2b199553c9550776dd28752c"} Mar 18 09:11:48.316723 master-0 kubenswrapper[28766]: I0318 09:11:48.316710 28766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a8d92165ce5ead038f9721cfbe3dd99b6e4d764a2b199553c9550776dd28752c" Mar 18 09:11:48.316801 master-0 kubenswrapper[28766]: I0318 09:11:48.316686 28766 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e59r9cm" Mar 18 09:11:48.335845 master-0 kubenswrapper[28766]: I0318 09:11:48.335810 28766 scope.go:117] "RemoveContainer" containerID="6fb4e78dc34b66ddd402e5cbfa9e341c7c34ed73015c9ed632573c6a1068b4f9" Mar 18 09:11:48.339075 master-0 kubenswrapper[28766]: E0318 09:11:48.339045 28766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6fb4e78dc34b66ddd402e5cbfa9e341c7c34ed73015c9ed632573c6a1068b4f9\": container with ID starting with 6fb4e78dc34b66ddd402e5cbfa9e341c7c34ed73015c9ed632573c6a1068b4f9 not found: ID does not exist" containerID="6fb4e78dc34b66ddd402e5cbfa9e341c7c34ed73015c9ed632573c6a1068b4f9" Mar 18 09:11:48.339234 master-0 kubenswrapper[28766]: I0318 09:11:48.339204 28766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6fb4e78dc34b66ddd402e5cbfa9e341c7c34ed73015c9ed632573c6a1068b4f9"} err="failed to get container status \"6fb4e78dc34b66ddd402e5cbfa9e341c7c34ed73015c9ed632573c6a1068b4f9\": rpc error: code = NotFound desc = could not find container \"6fb4e78dc34b66ddd402e5cbfa9e341c7c34ed73015c9ed632573c6a1068b4f9\": container with ID starting with 6fb4e78dc34b66ddd402e5cbfa9e341c7c34ed73015c9ed632573c6a1068b4f9 not found: ID does not exist" Mar 18 09:11:48.357556 master-0 kubenswrapper[28766]: I0318 09:11:48.357504 28766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-6c699958d9-6qrdl"] Mar 18 09:11:48.367542 master-0 kubenswrapper[28766]: I0318 09:11:48.367479 28766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-6c699958d9-6qrdl"] Mar 18 09:11:49.247422 master-0 kubenswrapper[28766]: I0318 09:11:49.247320 28766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c0d14eb4-043b-4c56-a271-261d96a2e4f7" 
path="/var/lib/kubelet/pods/c0d14eb4-043b-4c56-a271-261d96a2e4f7/volumes" Mar 18 09:11:49.461383 master-0 kubenswrapper[28766]: I0318 09:11:49.461273 28766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf923972675wfc"] Mar 18 09:11:49.461929 master-0 kubenswrapper[28766]: E0318 09:11:49.461801 28766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6497aa9d-7ede-44de-8eb0-3896e4fb291a" containerName="pull" Mar 18 09:11:49.461929 master-0 kubenswrapper[28766]: I0318 09:11:49.461842 28766 state_mem.go:107] "Deleted CPUSet assignment" podUID="6497aa9d-7ede-44de-8eb0-3896e4fb291a" containerName="pull" Mar 18 09:11:49.461929 master-0 kubenswrapper[28766]: E0318 09:11:49.461923 28766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1debf5d5-671e-448c-afd3-d3c2733215c3" containerName="extract" Mar 18 09:11:49.462174 master-0 kubenswrapper[28766]: I0318 09:11:49.461942 28766 state_mem.go:107] "Deleted CPUSet assignment" podUID="1debf5d5-671e-448c-afd3-d3c2733215c3" containerName="extract" Mar 18 09:11:49.462174 master-0 kubenswrapper[28766]: E0318 09:11:49.461970 28766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1debf5d5-671e-448c-afd3-d3c2733215c3" containerName="util" Mar 18 09:11:49.462174 master-0 kubenswrapper[28766]: I0318 09:11:49.461986 28766 state_mem.go:107] "Deleted CPUSet assignment" podUID="1debf5d5-671e-448c-afd3-d3c2733215c3" containerName="util" Mar 18 09:11:49.462174 master-0 kubenswrapper[28766]: E0318 09:11:49.462016 28766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6497aa9d-7ede-44de-8eb0-3896e4fb291a" containerName="util" Mar 18 09:11:49.462174 master-0 kubenswrapper[28766]: I0318 09:11:49.462032 28766 state_mem.go:107] "Deleted CPUSet assignment" podUID="6497aa9d-7ede-44de-8eb0-3896e4fb291a" containerName="util" Mar 18 09:11:49.462174 master-0 kubenswrapper[28766]: E0318 09:11:49.462063 28766 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8608d755-8c25-49f6-bbd6-5d56a69b3ee5" containerName="pull" Mar 18 09:11:49.462174 master-0 kubenswrapper[28766]: I0318 09:11:49.462078 28766 state_mem.go:107] "Deleted CPUSet assignment" podUID="8608d755-8c25-49f6-bbd6-5d56a69b3ee5" containerName="pull" Mar 18 09:11:49.462174 master-0 kubenswrapper[28766]: E0318 09:11:49.462119 28766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6497aa9d-7ede-44de-8eb0-3896e4fb291a" containerName="extract" Mar 18 09:11:49.462174 master-0 kubenswrapper[28766]: I0318 09:11:49.462136 28766 state_mem.go:107] "Deleted CPUSet assignment" podUID="6497aa9d-7ede-44de-8eb0-3896e4fb291a" containerName="extract" Mar 18 09:11:49.462174 master-0 kubenswrapper[28766]: E0318 09:11:49.462175 28766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8608d755-8c25-49f6-bbd6-5d56a69b3ee5" containerName="extract" Mar 18 09:11:49.462768 master-0 kubenswrapper[28766]: I0318 09:11:49.462193 28766 state_mem.go:107] "Deleted CPUSet assignment" podUID="8608d755-8c25-49f6-bbd6-5d56a69b3ee5" containerName="extract" Mar 18 09:11:49.462768 master-0 kubenswrapper[28766]: E0318 09:11:49.462225 28766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8608d755-8c25-49f6-bbd6-5d56a69b3ee5" containerName="util" Mar 18 09:11:49.462768 master-0 kubenswrapper[28766]: I0318 09:11:49.462241 28766 state_mem.go:107] "Deleted CPUSet assignment" podUID="8608d755-8c25-49f6-bbd6-5d56a69b3ee5" containerName="util" Mar 18 09:11:49.462768 master-0 kubenswrapper[28766]: E0318 09:11:49.462270 28766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1debf5d5-671e-448c-afd3-d3c2733215c3" containerName="pull" Mar 18 09:11:49.462768 master-0 kubenswrapper[28766]: I0318 09:11:49.462285 28766 state_mem.go:107] "Deleted CPUSet assignment" podUID="1debf5d5-671e-448c-afd3-d3c2733215c3" containerName="pull" Mar 18 09:11:49.462768 master-0 kubenswrapper[28766]: 
E0318 09:11:49.462314 28766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0d14eb4-043b-4c56-a271-261d96a2e4f7" containerName="console" Mar 18 09:11:49.462768 master-0 kubenswrapper[28766]: I0318 09:11:49.462331 28766 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0d14eb4-043b-4c56-a271-261d96a2e4f7" containerName="console" Mar 18 09:11:49.462768 master-0 kubenswrapper[28766]: I0318 09:11:49.462652 28766 memory_manager.go:354] "RemoveStaleState removing state" podUID="6497aa9d-7ede-44de-8eb0-3896e4fb291a" containerName="extract" Mar 18 09:11:49.462768 master-0 kubenswrapper[28766]: I0318 09:11:49.462707 28766 memory_manager.go:354] "RemoveStaleState removing state" podUID="c0d14eb4-043b-4c56-a271-261d96a2e4f7" containerName="console" Mar 18 09:11:49.462768 master-0 kubenswrapper[28766]: I0318 09:11:49.462750 28766 memory_manager.go:354] "RemoveStaleState removing state" podUID="8608d755-8c25-49f6-bbd6-5d56a69b3ee5" containerName="extract" Mar 18 09:11:49.463199 master-0 kubenswrapper[28766]: I0318 09:11:49.462782 28766 memory_manager.go:354] "RemoveStaleState removing state" podUID="1debf5d5-671e-448c-afd3-d3c2733215c3" containerName="extract" Mar 18 09:11:49.465379 master-0 kubenswrapper[28766]: I0318 09:11:49.465326 28766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf923972675wfc" Mar 18 09:11:49.494430 master-0 kubenswrapper[28766]: I0318 09:11:49.494355 28766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf923972675wfc"] Mar 18 09:11:49.600932 master-0 kubenswrapper[28766]: I0318 09:11:49.600705 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cq78t\" (UniqueName: \"kubernetes.io/projected/20263e06-a85d-4747-94ac-b8ea083c0749-kube-api-access-cq78t\") pod \"93d662022be5376a0ed3676a120a68427f47e4653a19a985adf923972675wfc\" (UID: \"20263e06-a85d-4747-94ac-b8ea083c0749\") " pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf923972675wfc" Mar 18 09:11:49.600932 master-0 kubenswrapper[28766]: I0318 09:11:49.600850 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/20263e06-a85d-4747-94ac-b8ea083c0749-bundle\") pod \"93d662022be5376a0ed3676a120a68427f47e4653a19a985adf923972675wfc\" (UID: \"20263e06-a85d-4747-94ac-b8ea083c0749\") " pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf923972675wfc" Mar 18 09:11:49.600932 master-0 kubenswrapper[28766]: I0318 09:11:49.600934 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/20263e06-a85d-4747-94ac-b8ea083c0749-util\") pod \"93d662022be5376a0ed3676a120a68427f47e4653a19a985adf923972675wfc\" (UID: \"20263e06-a85d-4747-94ac-b8ea083c0749\") " pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf923972675wfc" Mar 18 09:11:49.703010 master-0 kubenswrapper[28766]: I0318 09:11:49.702692 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-cq78t\" (UniqueName: \"kubernetes.io/projected/20263e06-a85d-4747-94ac-b8ea083c0749-kube-api-access-cq78t\") pod \"93d662022be5376a0ed3676a120a68427f47e4653a19a985adf923972675wfc\" (UID: \"20263e06-a85d-4747-94ac-b8ea083c0749\") " pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf923972675wfc"
Mar 18 09:11:49.703010 master-0 kubenswrapper[28766]: I0318 09:11:49.703006 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/20263e06-a85d-4747-94ac-b8ea083c0749-bundle\") pod \"93d662022be5376a0ed3676a120a68427f47e4653a19a985adf923972675wfc\" (UID: \"20263e06-a85d-4747-94ac-b8ea083c0749\") " pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf923972675wfc"
Mar 18 09:11:49.703423 master-0 kubenswrapper[28766]: I0318 09:11:49.703039 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/20263e06-a85d-4747-94ac-b8ea083c0749-util\") pod \"93d662022be5376a0ed3676a120a68427f47e4653a19a985adf923972675wfc\" (UID: \"20263e06-a85d-4747-94ac-b8ea083c0749\") " pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf923972675wfc"
Mar 18 09:11:49.704193 master-0 kubenswrapper[28766]: I0318 09:11:49.704139 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/20263e06-a85d-4747-94ac-b8ea083c0749-util\") pod \"93d662022be5376a0ed3676a120a68427f47e4653a19a985adf923972675wfc\" (UID: \"20263e06-a85d-4747-94ac-b8ea083c0749\") " pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf923972675wfc"
Mar 18 09:11:49.704431 master-0 kubenswrapper[28766]: I0318 09:11:49.704382 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/20263e06-a85d-4747-94ac-b8ea083c0749-bundle\") pod \"93d662022be5376a0ed3676a120a68427f47e4653a19a985adf923972675wfc\" (UID: \"20263e06-a85d-4747-94ac-b8ea083c0749\") " pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf923972675wfc"
Mar 18 09:11:49.740249 master-0 kubenswrapper[28766]: I0318 09:11:49.740184 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cq78t\" (UniqueName: \"kubernetes.io/projected/20263e06-a85d-4747-94ac-b8ea083c0749-kube-api-access-cq78t\") pod \"93d662022be5376a0ed3676a120a68427f47e4653a19a985adf923972675wfc\" (UID: \"20263e06-a85d-4747-94ac-b8ea083c0749\") " pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf923972675wfc"
Mar 18 09:11:49.806984 master-0 kubenswrapper[28766]: I0318 09:11:49.806930 28766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf923972675wfc"
Mar 18 09:11:50.344766 master-0 kubenswrapper[28766]: I0318 09:11:50.344076 28766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf923972675wfc"]
Mar 18 09:11:51.351608 master-0 kubenswrapper[28766]: I0318 09:11:51.351538 28766 generic.go:334] "Generic (PLEG): container finished" podID="20263e06-a85d-4747-94ac-b8ea083c0749" containerID="2586359cc2d2866d41b4e759d9065005b44a17c0ac6108a406a6008af45a2ac6" exitCode=0
Mar 18 09:11:51.351608 master-0 kubenswrapper[28766]: I0318 09:11:51.351590 28766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf923972675wfc" event={"ID":"20263e06-a85d-4747-94ac-b8ea083c0749","Type":"ContainerDied","Data":"2586359cc2d2866d41b4e759d9065005b44a17c0ac6108a406a6008af45a2ac6"}
Mar 18 09:11:51.351608 master-0 kubenswrapper[28766]: I0318 09:11:51.351617 28766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf923972675wfc" event={"ID":"20263e06-a85d-4747-94ac-b8ea083c0749","Type":"ContainerStarted","Data":"9e1553dec1e19028679381f4e38383078fa95a5c9c89446714ba0510cdf9128b"}
Mar 18 09:11:53.053924 master-0 kubenswrapper[28766]: I0318 09:11:53.053798 28766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-tgpwn"]
Mar 18 09:11:53.055357 master-0 kubenswrapper[28766]: I0318 09:11:53.055317 28766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-tgpwn"
Mar 18 09:11:53.061135 master-0 kubenswrapper[28766]: I0318 09:11:53.061069 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager-operator"/"kube-root-ca.crt"
Mar 18 09:11:53.061398 master-0 kubenswrapper[28766]: I0318 09:11:53.061349 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager-operator"/"openshift-service-ca.crt"
Mar 18 09:11:53.082279 master-0 kubenswrapper[28766]: I0318 09:11:53.082197 28766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-tgpwn"]
Mar 18 09:11:53.207117 master-0 kubenswrapper[28766]: I0318 09:11:53.207050 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/034a9024-3d82-4832-b9b3-b61a08718bf8-tmp\") pod \"cert-manager-operator-controller-manager-66c8bdd694-tgpwn\" (UID: \"034a9024-3d82-4832-b9b3-b61a08718bf8\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-tgpwn"
Mar 18 09:11:53.208311 master-0 kubenswrapper[28766]: I0318 09:11:53.207540 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lqh5p\" (UniqueName: \"kubernetes.io/projected/034a9024-3d82-4832-b9b3-b61a08718bf8-kube-api-access-lqh5p\") pod \"cert-manager-operator-controller-manager-66c8bdd694-tgpwn\" (UID: \"034a9024-3d82-4832-b9b3-b61a08718bf8\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-tgpwn"
Mar 18 09:11:53.310762 master-0 kubenswrapper[28766]: I0318 09:11:53.310607 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/034a9024-3d82-4832-b9b3-b61a08718bf8-tmp\") pod \"cert-manager-operator-controller-manager-66c8bdd694-tgpwn\" (UID: \"034a9024-3d82-4832-b9b3-b61a08718bf8\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-tgpwn"
Mar 18 09:11:53.311045 master-0 kubenswrapper[28766]: I0318 09:11:53.310901 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lqh5p\" (UniqueName: \"kubernetes.io/projected/034a9024-3d82-4832-b9b3-b61a08718bf8-kube-api-access-lqh5p\") pod \"cert-manager-operator-controller-manager-66c8bdd694-tgpwn\" (UID: \"034a9024-3d82-4832-b9b3-b61a08718bf8\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-tgpwn"
Mar 18 09:11:53.311428 master-0 kubenswrapper[28766]: I0318 09:11:53.311373 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/034a9024-3d82-4832-b9b3-b61a08718bf8-tmp\") pod \"cert-manager-operator-controller-manager-66c8bdd694-tgpwn\" (UID: \"034a9024-3d82-4832-b9b3-b61a08718bf8\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-tgpwn"
Mar 18 09:11:53.329061 master-0 kubenswrapper[28766]: I0318 09:11:53.329000 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lqh5p\" (UniqueName: \"kubernetes.io/projected/034a9024-3d82-4832-b9b3-b61a08718bf8-kube-api-access-lqh5p\") pod \"cert-manager-operator-controller-manager-66c8bdd694-tgpwn\" (UID: \"034a9024-3d82-4832-b9b3-b61a08718bf8\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-tgpwn"
Mar 18 09:11:53.373823 master-0 kubenswrapper[28766]: I0318 09:11:53.373731 28766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-tgpwn"
Mar 18 09:11:53.377359 master-0 kubenswrapper[28766]: I0318 09:11:53.377288 28766 generic.go:334] "Generic (PLEG): container finished" podID="20263e06-a85d-4747-94ac-b8ea083c0749" containerID="b599bed12c5348208e9e8e4d3e47a10f0b651022d40fc4cfa2d78de701422799" exitCode=0
Mar 18 09:11:53.377479 master-0 kubenswrapper[28766]: I0318 09:11:53.377370 28766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf923972675wfc" event={"ID":"20263e06-a85d-4747-94ac-b8ea083c0749","Type":"ContainerDied","Data":"b599bed12c5348208e9e8e4d3e47a10f0b651022d40fc4cfa2d78de701422799"}
Mar 18 09:11:53.941574 master-0 kubenswrapper[28766]: W0318 09:11:53.941517 28766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod034a9024_3d82_4832_b9b3_b61a08718bf8.slice/crio-16a76bed1a3e1a36cb00fe1cc346fd2ecd6c7096ef88a827af343a179caa7740 WatchSource:0}: Error finding container 16a76bed1a3e1a36cb00fe1cc346fd2ecd6c7096ef88a827af343a179caa7740: Status 404 returned error can't find the container with id 16a76bed1a3e1a36cb00fe1cc346fd2ecd6c7096ef88a827af343a179caa7740
Mar 18 09:11:53.954069 master-0 kubenswrapper[28766]: I0318 09:11:53.954018 28766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-tgpwn"]
Mar 18 09:11:54.386932 master-0 kubenswrapper[28766]: I0318 09:11:54.386865 28766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-tgpwn" event={"ID":"034a9024-3d82-4832-b9b3-b61a08718bf8","Type":"ContainerStarted","Data":"16a76bed1a3e1a36cb00fe1cc346fd2ecd6c7096ef88a827af343a179caa7740"}
Mar 18 09:11:54.389977 master-0 kubenswrapper[28766]: I0318 09:11:54.389864 28766 generic.go:334] "Generic (PLEG): container finished" podID="20263e06-a85d-4747-94ac-b8ea083c0749" containerID="19ab5c67f4b0069893db9e7d581d9c2a37cd17e6cd8a8d513582bf814aff0b07" exitCode=0
Mar 18 09:11:54.389977 master-0 kubenswrapper[28766]: I0318 09:11:54.389901 28766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf923972675wfc" event={"ID":"20263e06-a85d-4747-94ac-b8ea083c0749","Type":"ContainerDied","Data":"19ab5c67f4b0069893db9e7d581d9c2a37cd17e6cd8a8d513582bf814aff0b07"}
Mar 18 09:11:55.801450 master-0 kubenswrapper[28766]: I0318 09:11:55.801393 28766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf923972675wfc"
Mar 18 09:11:55.965873 master-0 kubenswrapper[28766]: I0318 09:11:55.965575 28766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cq78t\" (UniqueName: \"kubernetes.io/projected/20263e06-a85d-4747-94ac-b8ea083c0749-kube-api-access-cq78t\") pod \"20263e06-a85d-4747-94ac-b8ea083c0749\" (UID: \"20263e06-a85d-4747-94ac-b8ea083c0749\") "
Mar 18 09:11:55.965873 master-0 kubenswrapper[28766]: I0318 09:11:55.965655 28766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/20263e06-a85d-4747-94ac-b8ea083c0749-util\") pod \"20263e06-a85d-4747-94ac-b8ea083c0749\" (UID: \"20263e06-a85d-4747-94ac-b8ea083c0749\") "
Mar 18 09:11:55.965873 master-0 kubenswrapper[28766]: I0318 09:11:55.965714 28766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/20263e06-a85d-4747-94ac-b8ea083c0749-bundle\") pod \"20263e06-a85d-4747-94ac-b8ea083c0749\" (UID: \"20263e06-a85d-4747-94ac-b8ea083c0749\") "
Mar 18 09:11:55.973554 master-0 kubenswrapper[28766]: I0318 09:11:55.973500 28766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/20263e06-a85d-4747-94ac-b8ea083c0749-bundle" (OuterVolumeSpecName: "bundle") pod "20263e06-a85d-4747-94ac-b8ea083c0749" (UID: "20263e06-a85d-4747-94ac-b8ea083c0749"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 18 09:11:55.991243 master-0 kubenswrapper[28766]: I0318 09:11:55.990181 28766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20263e06-a85d-4747-94ac-b8ea083c0749-kube-api-access-cq78t" (OuterVolumeSpecName: "kube-api-access-cq78t") pod "20263e06-a85d-4747-94ac-b8ea083c0749" (UID: "20263e06-a85d-4747-94ac-b8ea083c0749"). InnerVolumeSpecName "kube-api-access-cq78t". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 18 09:11:55.991243 master-0 kubenswrapper[28766]: I0318 09:11:55.990688 28766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/20263e06-a85d-4747-94ac-b8ea083c0749-util" (OuterVolumeSpecName: "util") pod "20263e06-a85d-4747-94ac-b8ea083c0749" (UID: "20263e06-a85d-4747-94ac-b8ea083c0749"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 18 09:11:56.069765 master-0 kubenswrapper[28766]: I0318 09:11:56.069593 28766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cq78t\" (UniqueName: \"kubernetes.io/projected/20263e06-a85d-4747-94ac-b8ea083c0749-kube-api-access-cq78t\") on node \"master-0\" DevicePath \"\""
Mar 18 09:11:56.070162 master-0 kubenswrapper[28766]: I0318 09:11:56.070144 28766 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/20263e06-a85d-4747-94ac-b8ea083c0749-util\") on node \"master-0\" DevicePath \"\""
Mar 18 09:11:56.070258 master-0 kubenswrapper[28766]: I0318 09:11:56.070242 28766 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/20263e06-a85d-4747-94ac-b8ea083c0749-bundle\") on node \"master-0\" DevicePath \"\""
Mar 18 09:11:56.441098 master-0 kubenswrapper[28766]: I0318 09:11:56.440891 28766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf923972675wfc" event={"ID":"20263e06-a85d-4747-94ac-b8ea083c0749","Type":"ContainerDied","Data":"9e1553dec1e19028679381f4e38383078fa95a5c9c89446714ba0510cdf9128b"}
Mar 18 09:11:56.441098 master-0 kubenswrapper[28766]: I0318 09:11:56.440940 28766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9e1553dec1e19028679381f4e38383078fa95a5c9c89446714ba0510cdf9128b"
Mar 18 09:11:56.441098 master-0 kubenswrapper[28766]: I0318 09:11:56.440955 28766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf923972675wfc"
Mar 18 09:11:58.464053 master-0 kubenswrapper[28766]: I0318 09:11:58.463945 28766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-tgpwn" event={"ID":"034a9024-3d82-4832-b9b3-b61a08718bf8","Type":"ContainerStarted","Data":"bd7d70f8928d256e05da6f7fbd4c42b2656132878d592b32253532edee8cf868"}
Mar 18 09:11:58.490876 master-0 kubenswrapper[28766]: I0318 09:11:58.489682 28766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-tgpwn" podStartSLOduration=1.6134329790000002 podStartE2EDuration="5.489664148s" podCreationTimestamp="2026-03-18 09:11:53 +0000 UTC" firstStartedPulling="2026-03-18 09:11:53.949999278 +0000 UTC m=+466.964257944" lastFinishedPulling="2026-03-18 09:11:57.826230447 +0000 UTC m=+470.840489113" observedRunningTime="2026-03-18 09:11:58.48822461 +0000 UTC m=+471.502483276" watchObservedRunningTime="2026-03-18 09:11:58.489664148 +0000 UTC m=+471.503922814"
Mar 18 09:12:00.555548 master-0 kubenswrapper[28766]: I0318 09:12:00.555454 28766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-6888856db4-8ds5h"]
Mar 18 09:12:00.556085 master-0 kubenswrapper[28766]: E0318 09:12:00.555947 28766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20263e06-a85d-4747-94ac-b8ea083c0749" containerName="util"
Mar 18 09:12:00.556085 master-0 kubenswrapper[28766]: I0318 09:12:00.555967 28766 state_mem.go:107] "Deleted CPUSet assignment" podUID="20263e06-a85d-4747-94ac-b8ea083c0749" containerName="util"
Mar 18 09:12:00.556085 master-0 kubenswrapper[28766]: E0318 09:12:00.555992 28766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20263e06-a85d-4747-94ac-b8ea083c0749" containerName="extract"
Mar 18 09:12:00.556085 master-0 kubenswrapper[28766]: I0318 09:12:00.556004 28766 state_mem.go:107] "Deleted CPUSet assignment" podUID="20263e06-a85d-4747-94ac-b8ea083c0749" containerName="extract"
Mar 18 09:12:00.556085 master-0 kubenswrapper[28766]: E0318 09:12:00.556038 28766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20263e06-a85d-4747-94ac-b8ea083c0749" containerName="pull"
Mar 18 09:12:00.556085 master-0 kubenswrapper[28766]: I0318 09:12:00.556046 28766 state_mem.go:107] "Deleted CPUSet assignment" podUID="20263e06-a85d-4747-94ac-b8ea083c0749" containerName="pull"
Mar 18 09:12:00.556269 master-0 kubenswrapper[28766]: I0318 09:12:00.556241 28766 memory_manager.go:354] "RemoveStaleState removing state" podUID="20263e06-a85d-4747-94ac-b8ea083c0749" containerName="extract"
Mar 18 09:12:00.557034 master-0 kubenswrapper[28766]: I0318 09:12:00.556999 28766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-6888856db4-8ds5h"
Mar 18 09:12:00.561324 master-0 kubenswrapper[28766]: I0318 09:12:00.561249 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt"
Mar 18 09:12:00.568084 master-0 kubenswrapper[28766]: I0318 09:12:00.568000 28766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-6888856db4-8ds5h"]
Mar 18 09:12:00.572129 master-0 kubenswrapper[28766]: I0318 09:12:00.572095 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt"
Mar 18 09:12:00.666761 master-0 kubenswrapper[28766]: I0318 09:12:00.666181 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a4101d92-1e4e-48a5-af55-6388661e3800-bound-sa-token\") pod \"cert-manager-webhook-6888856db4-8ds5h\" (UID: \"a4101d92-1e4e-48a5-af55-6388661e3800\") " pod="cert-manager/cert-manager-webhook-6888856db4-8ds5h"
Mar 18 09:12:00.666761 master-0 kubenswrapper[28766]: I0318 09:12:00.666249 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s24bp\" (UniqueName: \"kubernetes.io/projected/a4101d92-1e4e-48a5-af55-6388661e3800-kube-api-access-s24bp\") pod \"cert-manager-webhook-6888856db4-8ds5h\" (UID: \"a4101d92-1e4e-48a5-af55-6388661e3800\") " pod="cert-manager/cert-manager-webhook-6888856db4-8ds5h"
Mar 18 09:12:00.767474 master-0 kubenswrapper[28766]: I0318 09:12:00.767402 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a4101d92-1e4e-48a5-af55-6388661e3800-bound-sa-token\") pod \"cert-manager-webhook-6888856db4-8ds5h\" (UID: \"a4101d92-1e4e-48a5-af55-6388661e3800\") " pod="cert-manager/cert-manager-webhook-6888856db4-8ds5h"
Mar 18 09:12:00.767474 master-0 kubenswrapper[28766]: I0318 09:12:00.767467 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s24bp\" (UniqueName: \"kubernetes.io/projected/a4101d92-1e4e-48a5-af55-6388661e3800-kube-api-access-s24bp\") pod \"cert-manager-webhook-6888856db4-8ds5h\" (UID: \"a4101d92-1e4e-48a5-af55-6388661e3800\") " pod="cert-manager/cert-manager-webhook-6888856db4-8ds5h"
Mar 18 09:12:00.801583 master-0 kubenswrapper[28766]: I0318 09:12:00.801506 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s24bp\" (UniqueName: \"kubernetes.io/projected/a4101d92-1e4e-48a5-af55-6388661e3800-kube-api-access-s24bp\") pod \"cert-manager-webhook-6888856db4-8ds5h\" (UID: \"a4101d92-1e4e-48a5-af55-6388661e3800\") " pod="cert-manager/cert-manager-webhook-6888856db4-8ds5h"
Mar 18 09:12:00.818264 master-0 kubenswrapper[28766]: I0318 09:12:00.818097 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a4101d92-1e4e-48a5-af55-6388661e3800-bound-sa-token\") pod \"cert-manager-webhook-6888856db4-8ds5h\" (UID: \"a4101d92-1e4e-48a5-af55-6388661e3800\") " pod="cert-manager/cert-manager-webhook-6888856db4-8ds5h"
Mar 18 09:12:00.956781 master-0 kubenswrapper[28766]: I0318 09:12:00.956692 28766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-6888856db4-8ds5h"
Mar 18 09:12:01.441298 master-0 kubenswrapper[28766]: I0318 09:12:01.441174 28766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-6888856db4-8ds5h"]
Mar 18 09:12:01.450354 master-0 kubenswrapper[28766]: W0318 09:12:01.450306 28766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda4101d92_1e4e_48a5_af55_6388661e3800.slice/crio-9975fb6d75508069339dec9a824f90c8889a21f0cfb598fb8a411b9440890bdb WatchSource:0}: Error finding container 9975fb6d75508069339dec9a824f90c8889a21f0cfb598fb8a411b9440890bdb: Status 404 returned error can't find the container with id 9975fb6d75508069339dec9a824f90c8889a21f0cfb598fb8a411b9440890bdb
Mar 18 09:12:01.491915 master-0 kubenswrapper[28766]: I0318 09:12:01.491810 28766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-6888856db4-8ds5h" event={"ID":"a4101d92-1e4e-48a5-af55-6388661e3800","Type":"ContainerStarted","Data":"9975fb6d75508069339dec9a824f90c8889a21f0cfb598fb8a411b9440890bdb"}
Mar 18 09:12:04.630006 master-0 kubenswrapper[28766]: I0318 09:12:04.628204 28766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-5545bd876-69pmp"]
Mar 18 09:12:04.630006 master-0 kubenswrapper[28766]: I0318 09:12:04.629896 28766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-5545bd876-69pmp"
Mar 18 09:12:04.661925 master-0 kubenswrapper[28766]: I0318 09:12:04.657651 28766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-5545bd876-69pmp"]
Mar 18 09:12:04.745039 master-0 kubenswrapper[28766]: I0318 09:12:04.738653 28766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-796d4cfff4-p6fqw"]
Mar 18 09:12:04.745039 master-0 kubenswrapper[28766]: I0318 09:12:04.739535 28766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-796d4cfff4-p6fqw"
Mar 18 09:12:04.750951 master-0 kubenswrapper[28766]: I0318 09:12:04.750830 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt"
Mar 18 09:12:04.751139 master-0 kubenswrapper[28766]: I0318 09:12:04.751059 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt"
Mar 18 09:12:04.766592 master-0 kubenswrapper[28766]: I0318 09:12:04.766518 28766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-796d4cfff4-p6fqw"]
Mar 18 09:12:04.767497 master-0 kubenswrapper[28766]: I0318 09:12:04.767458 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ed34c608-6097-46a6-9539-3308a0526860-bound-sa-token\") pod \"cert-manager-cainjector-5545bd876-69pmp\" (UID: \"ed34c608-6097-46a6-9539-3308a0526860\") " pod="cert-manager/cert-manager-cainjector-5545bd876-69pmp"
Mar 18 09:12:04.767590 master-0 kubenswrapper[28766]: I0318 09:12:04.767556 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vld9b\" (UniqueName: \"kubernetes.io/projected/ed34c608-6097-46a6-9539-3308a0526860-kube-api-access-vld9b\") pod \"cert-manager-cainjector-5545bd876-69pmp\" (UID: \"ed34c608-6097-46a6-9539-3308a0526860\") " pod="cert-manager/cert-manager-cainjector-5545bd876-69pmp"
Mar 18 09:12:04.869342 master-0 kubenswrapper[28766]: I0318 09:12:04.869216 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x7qsv\" (UniqueName: \"kubernetes.io/projected/c41ac234-9a6f-410f-b4f1-1825ada66e14-kube-api-access-x7qsv\") pod \"nmstate-operator-796d4cfff4-p6fqw\" (UID: \"c41ac234-9a6f-410f-b4f1-1825ada66e14\") " pod="openshift-nmstate/nmstate-operator-796d4cfff4-p6fqw"
Mar 18 09:12:04.869342 master-0 kubenswrapper[28766]: I0318 09:12:04.869333 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vld9b\" (UniqueName: \"kubernetes.io/projected/ed34c608-6097-46a6-9539-3308a0526860-kube-api-access-vld9b\") pod \"cert-manager-cainjector-5545bd876-69pmp\" (UID: \"ed34c608-6097-46a6-9539-3308a0526860\") " pod="cert-manager/cert-manager-cainjector-5545bd876-69pmp"
Mar 18 09:12:04.869665 master-0 kubenswrapper[28766]: I0318 09:12:04.869412 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ed34c608-6097-46a6-9539-3308a0526860-bound-sa-token\") pod \"cert-manager-cainjector-5545bd876-69pmp\" (UID: \"ed34c608-6097-46a6-9539-3308a0526860\") " pod="cert-manager/cert-manager-cainjector-5545bd876-69pmp"
Mar 18 09:12:04.900747 master-0 kubenswrapper[28766]: I0318 09:12:04.900613 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ed34c608-6097-46a6-9539-3308a0526860-bound-sa-token\") pod \"cert-manager-cainjector-5545bd876-69pmp\" (UID: \"ed34c608-6097-46a6-9539-3308a0526860\") " pod="cert-manager/cert-manager-cainjector-5545bd876-69pmp"
Mar 18 09:12:04.905758 master-0 kubenswrapper[28766]: I0318 09:12:04.905608 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vld9b\" (UniqueName: \"kubernetes.io/projected/ed34c608-6097-46a6-9539-3308a0526860-kube-api-access-vld9b\") pod \"cert-manager-cainjector-5545bd876-69pmp\" (UID: \"ed34c608-6097-46a6-9539-3308a0526860\") " pod="cert-manager/cert-manager-cainjector-5545bd876-69pmp"
Mar 18 09:12:04.970830 master-0 kubenswrapper[28766]: I0318 09:12:04.970768 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x7qsv\" (UniqueName: \"kubernetes.io/projected/c41ac234-9a6f-410f-b4f1-1825ada66e14-kube-api-access-x7qsv\") pod \"nmstate-operator-796d4cfff4-p6fqw\" (UID: \"c41ac234-9a6f-410f-b4f1-1825ada66e14\") " pod="openshift-nmstate/nmstate-operator-796d4cfff4-p6fqw"
Mar 18 09:12:04.992152 master-0 kubenswrapper[28766]: I0318 09:12:04.992085 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x7qsv\" (UniqueName: \"kubernetes.io/projected/c41ac234-9a6f-410f-b4f1-1825ada66e14-kube-api-access-x7qsv\") pod \"nmstate-operator-796d4cfff4-p6fqw\" (UID: \"c41ac234-9a6f-410f-b4f1-1825ada66e14\") " pod="openshift-nmstate/nmstate-operator-796d4cfff4-p6fqw"
Mar 18 09:12:04.998692 master-0 kubenswrapper[28766]: I0318 09:12:04.998650 28766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-5545bd876-69pmp"
Mar 18 09:12:05.068470 master-0 kubenswrapper[28766]: I0318 09:12:05.068415 28766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-796d4cfff4-p6fqw"
Mar 18 09:12:05.438626 master-0 kubenswrapper[28766]: I0318 09:12:05.437890 28766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-5545bd876-69pmp"]
Mar 18 09:12:05.442931 master-0 kubenswrapper[28766]: W0318 09:12:05.442671 28766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poded34c608_6097_46a6_9539_3308a0526860.slice/crio-b0781f5ad4081587cded5c785ef232bf503c248cf9b7cb1d0151b25460c17009 WatchSource:0}: Error finding container b0781f5ad4081587cded5c785ef232bf503c248cf9b7cb1d0151b25460c17009: Status 404 returned error can't find the container with id b0781f5ad4081587cded5c785ef232bf503c248cf9b7cb1d0151b25460c17009
Mar 18 09:12:05.560142 master-0 kubenswrapper[28766]: I0318 09:12:05.560032 28766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-796d4cfff4-p6fqw"]
Mar 18 09:12:05.570167 master-0 kubenswrapper[28766]: I0318 09:12:05.570124 28766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-5545bd876-69pmp" event={"ID":"ed34c608-6097-46a6-9539-3308a0526860","Type":"ContainerStarted","Data":"b0781f5ad4081587cded5c785ef232bf503c248cf9b7cb1d0151b25460c17009"}
Mar 18 09:12:08.224181 master-0 kubenswrapper[28766]: W0318 09:12:08.224103 28766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc41ac234_9a6f_410f_b4f1_1825ada66e14.slice/crio-4ef5cc342cdc008a2407db1dd4d03d3cc8c2f07e1d565ff69b5a86d7bdf95d4c WatchSource:0}: Error finding container 4ef5cc342cdc008a2407db1dd4d03d3cc8c2f07e1d565ff69b5a86d7bdf95d4c: Status 404 returned error can't find the container with id 4ef5cc342cdc008a2407db1dd4d03d3cc8c2f07e1d565ff69b5a86d7bdf95d4c
Mar 18 09:12:08.614991 master-0 kubenswrapper[28766]: I0318 09:12:08.614909 28766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-796d4cfff4-p6fqw" event={"ID":"c41ac234-9a6f-410f-b4f1-1825ada66e14","Type":"ContainerStarted","Data":"4ef5cc342cdc008a2407db1dd4d03d3cc8c2f07e1d565ff69b5a86d7bdf95d4c"}
Mar 18 09:12:09.624019 master-0 kubenswrapper[28766]: I0318 09:12:09.623959 28766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-5545bd876-69pmp" event={"ID":"ed34c608-6097-46a6-9539-3308a0526860","Type":"ContainerStarted","Data":"5dce73f7f716046c4c7fe3d64150d88f870009e44b92047c2a2a1df9f2351c84"}
Mar 18 09:12:09.626546 master-0 kubenswrapper[28766]: I0318 09:12:09.626462 28766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-6888856db4-8ds5h" event={"ID":"a4101d92-1e4e-48a5-af55-6388661e3800","Type":"ContainerStarted","Data":"fa6750de5c2d9d06bf3a8c29c00e6b3db597329d4daaf24f5a7d61685fe86655"}
Mar 18 09:12:09.626766 master-0 kubenswrapper[28766]: I0318 09:12:09.626711 28766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-6888856db4-8ds5h"
Mar 18 09:12:09.662219 master-0 kubenswrapper[28766]: I0318 09:12:09.662103 28766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-5545bd876-69pmp" podStartSLOduration=2.774709811 podStartE2EDuration="5.662074176s" podCreationTimestamp="2026-03-18 09:12:04 +0000 UTC" firstStartedPulling="2026-03-18 09:12:05.445648062 +0000 UTC m=+478.459906728" lastFinishedPulling="2026-03-18 09:12:08.333012427 +0000 UTC m=+481.347271093" observedRunningTime="2026-03-18 09:12:09.647424356 +0000 UTC m=+482.661683022" watchObservedRunningTime="2026-03-18 09:12:09.662074176 +0000 UTC m=+482.676332852"
Mar 18 09:12:09.696556 master-0 kubenswrapper[28766]: I0318 09:12:09.696402 28766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-6888856db4-8ds5h" podStartSLOduration=2.818061107 podStartE2EDuration="9.696373856s" podCreationTimestamp="2026-03-18 09:12:00 +0000 UTC" firstStartedPulling="2026-03-18 09:12:01.452321986 +0000 UTC m=+474.466580652" lastFinishedPulling="2026-03-18 09:12:08.330634735 +0000 UTC m=+481.344893401" observedRunningTime="2026-03-18 09:12:09.691896399 +0000 UTC m=+482.706155065" watchObservedRunningTime="2026-03-18 09:12:09.696373856 +0000 UTC m=+482.710632532"
Mar 18 09:12:11.780280 master-0 kubenswrapper[28766]: I0318 09:12:11.780185 28766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-65f5d58555-j282b"]
Mar 18 09:12:11.782522 master-0 kubenswrapper[28766]: I0318 09:12:11.782081 28766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-65f5d58555-j282b"
Mar 18 09:12:11.785558 master-0 kubenswrapper[28766]: I0318 09:12:11.784967 28766 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert"
Mar 18 09:12:11.785558 master-0 kubenswrapper[28766]: I0318 09:12:11.785504 28766 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert"
Mar 18 09:12:11.785715 master-0 kubenswrapper[28766]: I0318 09:12:11.785691 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt"
Mar 18 09:12:11.802932 master-0 kubenswrapper[28766]: I0318 09:12:11.801277 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt"
Mar 18 09:12:11.820836 master-0 kubenswrapper[28766]: I0318 09:12:11.820743 28766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-65f5d58555-j282b"]
Mar 18 09:12:11.869442 master-0 kubenswrapper[28766]: I0318 09:12:11.868047 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/27eeeb04-faa9-4d56-81fa-a890a202cdd4-apiservice-cert\") pod \"metallb-operator-controller-manager-65f5d58555-j282b\" (UID: \"27eeeb04-faa9-4d56-81fa-a890a202cdd4\") " pod="metallb-system/metallb-operator-controller-manager-65f5d58555-j282b"
Mar 18 09:12:11.869442 master-0 kubenswrapper[28766]: I0318 09:12:11.868171 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/27eeeb04-faa9-4d56-81fa-a890a202cdd4-webhook-cert\") pod \"metallb-operator-controller-manager-65f5d58555-j282b\" (UID: \"27eeeb04-faa9-4d56-81fa-a890a202cdd4\") " pod="metallb-system/metallb-operator-controller-manager-65f5d58555-j282b"
Mar 18 09:12:11.869442 master-0 kubenswrapper[28766]: I0318 09:12:11.868220 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jgkqk\" (UniqueName: \"kubernetes.io/projected/27eeeb04-faa9-4d56-81fa-a890a202cdd4-kube-api-access-jgkqk\") pod \"metallb-operator-controller-manager-65f5d58555-j282b\" (UID: \"27eeeb04-faa9-4d56-81fa-a890a202cdd4\") " pod="metallb-system/metallb-operator-controller-manager-65f5d58555-j282b"
Mar 18 09:12:11.972405 master-0 kubenswrapper[28766]: I0318 09:12:11.971350 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jgkqk\" (UniqueName: \"kubernetes.io/projected/27eeeb04-faa9-4d56-81fa-a890a202cdd4-kube-api-access-jgkqk\") pod \"metallb-operator-controller-manager-65f5d58555-j282b\" (UID: \"27eeeb04-faa9-4d56-81fa-a890a202cdd4\") " pod="metallb-system/metallb-operator-controller-manager-65f5d58555-j282b"
Mar 18 09:12:11.972405 master-0 kubenswrapper[28766]: I0318 09:12:11.971435 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/27eeeb04-faa9-4d56-81fa-a890a202cdd4-apiservice-cert\") pod \"metallb-operator-controller-manager-65f5d58555-j282b\" (UID: \"27eeeb04-faa9-4d56-81fa-a890a202cdd4\") " pod="metallb-system/metallb-operator-controller-manager-65f5d58555-j282b"
Mar 18 09:12:11.972405 master-0 kubenswrapper[28766]: I0318 09:12:11.971551 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/27eeeb04-faa9-4d56-81fa-a890a202cdd4-webhook-cert\") pod \"metallb-operator-controller-manager-65f5d58555-j282b\" (UID: \"27eeeb04-faa9-4d56-81fa-a890a202cdd4\") " pod="metallb-system/metallb-operator-controller-manager-65f5d58555-j282b"
Mar 18 09:12:11.977544 master-0 kubenswrapper[28766]: I0318 09:12:11.977505 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/27eeeb04-faa9-4d56-81fa-a890a202cdd4-webhook-cert\") pod \"metallb-operator-controller-manager-65f5d58555-j282b\" (UID: \"27eeeb04-faa9-4d56-81fa-a890a202cdd4\") " pod="metallb-system/metallb-operator-controller-manager-65f5d58555-j282b"
Mar 18 09:12:11.978079 master-0 kubenswrapper[28766]: I0318 09:12:11.978049 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/27eeeb04-faa9-4d56-81fa-a890a202cdd4-apiservice-cert\") pod \"metallb-operator-controller-manager-65f5d58555-j282b\" (UID: \"27eeeb04-faa9-4d56-81fa-a890a202cdd4\") " pod="metallb-system/metallb-operator-controller-manager-65f5d58555-j282b"
Mar 18 09:12:12.021116 master-0 kubenswrapper[28766]: I0318 09:12:12.021067 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jgkqk\" (UniqueName: \"kubernetes.io/projected/27eeeb04-faa9-4d56-81fa-a890a202cdd4-kube-api-access-jgkqk\") pod \"metallb-operator-controller-manager-65f5d58555-j282b\" (UID: \"27eeeb04-faa9-4d56-81fa-a890a202cdd4\") " pod="metallb-system/metallb-operator-controller-manager-65f5d58555-j282b"
Mar 18 09:12:12.256097 master-0 kubenswrapper[28766]: I0318 09:12:12.254154 28766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-65f5d58555-j282b"
Mar 18 09:12:12.373880 master-0 kubenswrapper[28766]: I0318 09:12:12.369041 28766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-88b68f8d8-w9g9k"]
Mar 18 09:12:12.373880 master-0 kubenswrapper[28766]: I0318 09:12:12.369994 28766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-88b68f8d8-w9g9k"
Mar 18 09:12:12.374126 master-0 kubenswrapper[28766]: I0318 09:12:12.373964 28766 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert"
Mar 18 09:12:12.379569 master-0 kubenswrapper[28766]: I0318 09:12:12.378830 28766 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert"
Mar 18 09:12:12.398987 master-0 kubenswrapper[28766]: I0318 09:12:12.398938 28766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-88b68f8d8-w9g9k"]
Mar 18 09:12:12.486862 master-0 kubenswrapper[28766]: I0318 09:12:12.486784 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/f8b3af47-0f7b-422a-905a-0e3e139e2f7e-apiservice-cert\") pod \"metallb-operator-webhook-server-88b68f8d8-w9g9k\" (UID: \"f8b3af47-0f7b-422a-905a-0e3e139e2f7e\") " pod="metallb-system/metallb-operator-webhook-server-88b68f8d8-w9g9k"
Mar 18 09:12:12.487046 master-0 kubenswrapper[28766]: I0318 09:12:12.486862 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-24hxw\" (UniqueName: \"kubernetes.io/projected/f8b3af47-0f7b-422a-905a-0e3e139e2f7e-kube-api-access-24hxw\") pod \"metallb-operator-webhook-server-88b68f8d8-w9g9k\" (UID: \"f8b3af47-0f7b-422a-905a-0e3e139e2f7e\") " pod="metallb-system/metallb-operator-webhook-server-88b68f8d8-w9g9k" Mar 18 09:12:12.487046 master-0 kubenswrapper[28766]: I0318 09:12:12.486906 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/f8b3af47-0f7b-422a-905a-0e3e139e2f7e-webhook-cert\") pod \"metallb-operator-webhook-server-88b68f8d8-w9g9k\" (UID: \"f8b3af47-0f7b-422a-905a-0e3e139e2f7e\") " pod="metallb-system/metallb-operator-webhook-server-88b68f8d8-w9g9k" Mar 18 09:12:12.606953 master-0 kubenswrapper[28766]: I0318 09:12:12.595736 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/f8b3af47-0f7b-422a-905a-0e3e139e2f7e-apiservice-cert\") pod \"metallb-operator-webhook-server-88b68f8d8-w9g9k\" (UID: \"f8b3af47-0f7b-422a-905a-0e3e139e2f7e\") " pod="metallb-system/metallb-operator-webhook-server-88b68f8d8-w9g9k" Mar 18 09:12:12.606953 master-0 kubenswrapper[28766]: I0318 09:12:12.595805 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-24hxw\" (UniqueName: \"kubernetes.io/projected/f8b3af47-0f7b-422a-905a-0e3e139e2f7e-kube-api-access-24hxw\") pod \"metallb-operator-webhook-server-88b68f8d8-w9g9k\" (UID: \"f8b3af47-0f7b-422a-905a-0e3e139e2f7e\") " pod="metallb-system/metallb-operator-webhook-server-88b68f8d8-w9g9k" Mar 18 09:12:12.606953 master-0 kubenswrapper[28766]: I0318 09:12:12.595867 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/f8b3af47-0f7b-422a-905a-0e3e139e2f7e-webhook-cert\") pod \"metallb-operator-webhook-server-88b68f8d8-w9g9k\" (UID: 
\"f8b3af47-0f7b-422a-905a-0e3e139e2f7e\") " pod="metallb-system/metallb-operator-webhook-server-88b68f8d8-w9g9k" Mar 18 09:12:12.606953 master-0 kubenswrapper[28766]: I0318 09:12:12.599242 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/f8b3af47-0f7b-422a-905a-0e3e139e2f7e-apiservice-cert\") pod \"metallb-operator-webhook-server-88b68f8d8-w9g9k\" (UID: \"f8b3af47-0f7b-422a-905a-0e3e139e2f7e\") " pod="metallb-system/metallb-operator-webhook-server-88b68f8d8-w9g9k" Mar 18 09:12:12.612208 master-0 kubenswrapper[28766]: I0318 09:12:12.607948 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/f8b3af47-0f7b-422a-905a-0e3e139e2f7e-webhook-cert\") pod \"metallb-operator-webhook-server-88b68f8d8-w9g9k\" (UID: \"f8b3af47-0f7b-422a-905a-0e3e139e2f7e\") " pod="metallb-system/metallb-operator-webhook-server-88b68f8d8-w9g9k" Mar 18 09:12:12.621874 master-0 kubenswrapper[28766]: I0318 09:12:12.621446 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-24hxw\" (UniqueName: \"kubernetes.io/projected/f8b3af47-0f7b-422a-905a-0e3e139e2f7e-kube-api-access-24hxw\") pod \"metallb-operator-webhook-server-88b68f8d8-w9g9k\" (UID: \"f8b3af47-0f7b-422a-905a-0e3e139e2f7e\") " pod="metallb-system/metallb-operator-webhook-server-88b68f8d8-w9g9k" Mar 18 09:12:12.710714 master-0 kubenswrapper[28766]: I0318 09:12:12.710398 28766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-88b68f8d8-w9g9k" Mar 18 09:12:12.781689 master-0 kubenswrapper[28766]: I0318 09:12:12.781599 28766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-65f5d58555-j282b"] Mar 18 09:12:13.677865 master-0 kubenswrapper[28766]: I0318 09:12:13.677787 28766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-65f5d58555-j282b" event={"ID":"27eeeb04-faa9-4d56-81fa-a890a202cdd4","Type":"ContainerStarted","Data":"26f4cb7d476c0e1eddfeea84b11df45f797f4ee1469781b2816f11f1537954bd"} Mar 18 09:12:13.755873 master-0 kubenswrapper[28766]: I0318 09:12:13.754488 28766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-545d4d4674-gn68n"] Mar 18 09:12:13.762879 master-0 kubenswrapper[28766]: I0318 09:12:13.760289 28766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-545d4d4674-gn68n" Mar 18 09:12:13.880906 master-0 kubenswrapper[28766]: I0318 09:12:13.875717 28766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-545d4d4674-gn68n"] Mar 18 09:12:13.962066 master-0 kubenswrapper[28766]: I0318 09:12:13.961803 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/427e8f18-69c0-461d-8322-cb64dd0ad33f-bound-sa-token\") pod \"cert-manager-545d4d4674-gn68n\" (UID: \"427e8f18-69c0-461d-8322-cb64dd0ad33f\") " pod="cert-manager/cert-manager-545d4d4674-gn68n" Mar 18 09:12:13.962066 master-0 kubenswrapper[28766]: I0318 09:12:13.961920 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7bf28\" (UniqueName: \"kubernetes.io/projected/427e8f18-69c0-461d-8322-cb64dd0ad33f-kube-api-access-7bf28\") pod \"cert-manager-545d4d4674-gn68n\" (UID: 
\"427e8f18-69c0-461d-8322-cb64dd0ad33f\") " pod="cert-manager/cert-manager-545d4d4674-gn68n" Mar 18 09:12:14.070805 master-0 kubenswrapper[28766]: I0318 09:12:14.070594 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/427e8f18-69c0-461d-8322-cb64dd0ad33f-bound-sa-token\") pod \"cert-manager-545d4d4674-gn68n\" (UID: \"427e8f18-69c0-461d-8322-cb64dd0ad33f\") " pod="cert-manager/cert-manager-545d4d4674-gn68n" Mar 18 09:12:14.071636 master-0 kubenswrapper[28766]: I0318 09:12:14.071616 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7bf28\" (UniqueName: \"kubernetes.io/projected/427e8f18-69c0-461d-8322-cb64dd0ad33f-kube-api-access-7bf28\") pod \"cert-manager-545d4d4674-gn68n\" (UID: \"427e8f18-69c0-461d-8322-cb64dd0ad33f\") " pod="cert-manager/cert-manager-545d4d4674-gn68n" Mar 18 09:12:14.156446 master-0 kubenswrapper[28766]: I0318 09:12:14.156409 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/427e8f18-69c0-461d-8322-cb64dd0ad33f-bound-sa-token\") pod \"cert-manager-545d4d4674-gn68n\" (UID: \"427e8f18-69c0-461d-8322-cb64dd0ad33f\") " pod="cert-manager/cert-manager-545d4d4674-gn68n" Mar 18 09:12:14.222604 master-0 kubenswrapper[28766]: I0318 09:12:14.222569 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7bf28\" (UniqueName: \"kubernetes.io/projected/427e8f18-69c0-461d-8322-cb64dd0ad33f-kube-api-access-7bf28\") pod \"cert-manager-545d4d4674-gn68n\" (UID: \"427e8f18-69c0-461d-8322-cb64dd0ad33f\") " pod="cert-manager/cert-manager-545d4d4674-gn68n" Mar 18 09:12:14.263706 master-0 kubenswrapper[28766]: I0318 09:12:14.263658 28766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-88b68f8d8-w9g9k"] Mar 18 09:12:14.418079 master-0 kubenswrapper[28766]: I0318 
09:12:14.417937 28766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-545d4d4674-gn68n" Mar 18 09:12:14.785016 master-0 kubenswrapper[28766]: I0318 09:12:14.784949 28766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-796d4cfff4-p6fqw" event={"ID":"c41ac234-9a6f-410f-b4f1-1825ada66e14","Type":"ContainerStarted","Data":"ce802f686cd31ee1fe054bd02383c0670f12be0e9f80cd0ba46526eef626290e"} Mar 18 09:12:14.796623 master-0 kubenswrapper[28766]: I0318 09:12:14.796367 28766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-88b68f8d8-w9g9k" event={"ID":"f8b3af47-0f7b-422a-905a-0e3e139e2f7e","Type":"ContainerStarted","Data":"18d58970115acc9cd35aba13f2c516f270f0fed3be1354a81fe016636ff032be"} Mar 18 09:12:14.847234 master-0 kubenswrapper[28766]: I0318 09:12:14.845449 28766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-796d4cfff4-p6fqw" podStartSLOduration=5.33151963 podStartE2EDuration="10.845425624s" podCreationTimestamp="2026-03-18 09:12:04 +0000 UTC" firstStartedPulling="2026-03-18 09:12:08.23021377 +0000 UTC m=+481.244472436" lastFinishedPulling="2026-03-18 09:12:13.744119774 +0000 UTC m=+486.758378430" observedRunningTime="2026-03-18 09:12:14.83755766 +0000 UTC m=+487.851816326" watchObservedRunningTime="2026-03-18 09:12:14.845425624 +0000 UTC m=+487.859684300" Mar 18 09:12:14.969028 master-0 kubenswrapper[28766]: I0318 09:12:14.968946 28766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-545d4d4674-gn68n"] Mar 18 09:12:14.996881 master-0 kubenswrapper[28766]: W0318 09:12:14.995013 28766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod427e8f18_69c0_461d_8322_cb64dd0ad33f.slice/crio-d3373243f0051c61d30c3ac49bd2bc4d9a804c57938e0c6cebb304f46a88ba16 
WatchSource:0}: Error finding container d3373243f0051c61d30c3ac49bd2bc4d9a804c57938e0c6cebb304f46a88ba16: Status 404 returned error can't find the container with id d3373243f0051c61d30c3ac49bd2bc4d9a804c57938e0c6cebb304f46a88ba16 Mar 18 09:12:15.820885 master-0 kubenswrapper[28766]: I0318 09:12:15.817634 28766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-545d4d4674-gn68n" event={"ID":"427e8f18-69c0-461d-8322-cb64dd0ad33f","Type":"ContainerStarted","Data":"810d3a49f08e9a2d4631722e93ceba2c314a3dab04d3fd9023e2c9de9e4f8d75"} Mar 18 09:12:15.820885 master-0 kubenswrapper[28766]: I0318 09:12:15.817682 28766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-545d4d4674-gn68n" event={"ID":"427e8f18-69c0-461d-8322-cb64dd0ad33f","Type":"ContainerStarted","Data":"d3373243f0051c61d30c3ac49bd2bc4d9a804c57938e0c6cebb304f46a88ba16"} Mar 18 09:12:15.865594 master-0 kubenswrapper[28766]: I0318 09:12:15.865294 28766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-545d4d4674-gn68n" podStartSLOduration=2.865261721 podStartE2EDuration="2.865261721s" podCreationTimestamp="2026-03-18 09:12:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 09:12:15.851964185 +0000 UTC m=+488.866222851" watchObservedRunningTime="2026-03-18 09:12:15.865261721 +0000 UTC m=+488.879520387" Mar 18 09:12:15.964788 master-0 kubenswrapper[28766]: I0318 09:12:15.964719 28766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-6888856db4-8ds5h" Mar 18 09:12:19.740877 master-0 kubenswrapper[28766]: I0318 09:12:19.739751 28766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-8ff7d675-s8bf2"] Mar 18 09:12:19.740877 master-0 kubenswrapper[28766]: I0318 09:12:19.740818 28766 util.go:30] "No sandbox for pod 
can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-8ff7d675-s8bf2" Mar 18 09:12:19.755924 master-0 kubenswrapper[28766]: I0318 09:12:19.753436 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"kube-root-ca.crt" Mar 18 09:12:19.760882 master-0 kubenswrapper[28766]: I0318 09:12:19.760121 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"openshift-service-ca.crt" Mar 18 09:12:19.798873 master-0 kubenswrapper[28766]: I0318 09:12:19.797924 28766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-8ff7d675-s8bf2"] Mar 18 09:12:19.846927 master-0 kubenswrapper[28766]: I0318 09:12:19.845435 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bq7sg\" (UniqueName: \"kubernetes.io/projected/530e8baf-e772-4beb-9e9c-62026f58fe64-kube-api-access-bq7sg\") pod \"obo-prometheus-operator-8ff7d675-s8bf2\" (UID: \"530e8baf-e772-4beb-9e9c-62026f58fe64\") " pod="openshift-operators/obo-prometheus-operator-8ff7d675-s8bf2" Mar 18 09:12:19.927392 master-0 kubenswrapper[28766]: I0318 09:12:19.927325 28766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-65f5d58555-j282b" event={"ID":"27eeeb04-faa9-4d56-81fa-a890a202cdd4","Type":"ContainerStarted","Data":"632f0bf6753d6c1fe6b350a874bb58cb5e4b5a0a599cbb1564ee04ce2ee6cd41"} Mar 18 09:12:19.928101 master-0 kubenswrapper[28766]: I0318 09:12:19.928012 28766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-65f5d58555-j282b" Mar 18 09:12:19.949004 master-0 kubenswrapper[28766]: I0318 09:12:19.947743 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bq7sg\" (UniqueName: 
\"kubernetes.io/projected/530e8baf-e772-4beb-9e9c-62026f58fe64-kube-api-access-bq7sg\") pod \"obo-prometheus-operator-8ff7d675-s8bf2\" (UID: \"530e8baf-e772-4beb-9e9c-62026f58fe64\") " pod="openshift-operators/obo-prometheus-operator-8ff7d675-s8bf2" Mar 18 09:12:19.969487 master-0 kubenswrapper[28766]: I0318 09:12:19.969369 28766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-65f5d58555-j282b" podStartSLOduration=3.6560150289999997 podStartE2EDuration="8.96933249s" podCreationTimestamp="2026-03-18 09:12:11 +0000 UTC" firstStartedPulling="2026-03-18 09:12:13.558429936 +0000 UTC m=+486.572688602" lastFinishedPulling="2026-03-18 09:12:18.871747397 +0000 UTC m=+491.886006063" observedRunningTime="2026-03-18 09:12:19.965274774 +0000 UTC m=+492.979533440" watchObservedRunningTime="2026-03-18 09:12:19.96933249 +0000 UTC m=+492.983591156" Mar 18 09:12:19.974725 master-0 kubenswrapper[28766]: I0318 09:12:19.974681 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bq7sg\" (UniqueName: \"kubernetes.io/projected/530e8baf-e772-4beb-9e9c-62026f58fe64-kube-api-access-bq7sg\") pod \"obo-prometheus-operator-8ff7d675-s8bf2\" (UID: \"530e8baf-e772-4beb-9e9c-62026f58fe64\") " pod="openshift-operators/obo-prometheus-operator-8ff7d675-s8bf2" Mar 18 09:12:20.110283 master-0 kubenswrapper[28766]: I0318 09:12:20.110149 28766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-8ff7d675-s8bf2" Mar 18 09:12:20.358842 master-0 kubenswrapper[28766]: I0318 09:12:20.358767 28766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-599f646c7d-s6ngz"] Mar 18 09:12:20.360181 master-0 kubenswrapper[28766]: I0318 09:12:20.360152 28766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-599f646c7d-s6ngz" Mar 18 09:12:20.364546 master-0 kubenswrapper[28766]: I0318 09:12:20.364446 28766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-service-cert" Mar 18 09:12:20.394095 master-0 kubenswrapper[28766]: I0318 09:12:20.392074 28766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-599f646c7d-2xxkb"] Mar 18 09:12:20.394095 master-0 kubenswrapper[28766]: I0318 09:12:20.393288 28766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-599f646c7d-2xxkb" Mar 18 09:12:20.408323 master-0 kubenswrapper[28766]: I0318 09:12:20.402348 28766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-599f646c7d-s6ngz"] Mar 18 09:12:20.446671 master-0 kubenswrapper[28766]: I0318 09:12:20.446338 28766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-599f646c7d-2xxkb"] Mar 18 09:12:20.465289 master-0 kubenswrapper[28766]: I0318 09:12:20.462444 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/51a5655a-e87e-4e56-963d-83bdee4a2124-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-599f646c7d-s6ngz\" (UID: \"51a5655a-e87e-4e56-963d-83bdee4a2124\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-599f646c7d-s6ngz" Mar 18 09:12:20.465289 master-0 kubenswrapper[28766]: I0318 09:12:20.462514 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/51a5655a-e87e-4e56-963d-83bdee4a2124-webhook-cert\") pod 
\"obo-prometheus-operator-admission-webhook-599f646c7d-s6ngz\" (UID: \"51a5655a-e87e-4e56-963d-83bdee4a2124\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-599f646c7d-s6ngz" Mar 18 09:12:20.565020 master-0 kubenswrapper[28766]: I0318 09:12:20.564835 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/51a5655a-e87e-4e56-963d-83bdee4a2124-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-599f646c7d-s6ngz\" (UID: \"51a5655a-e87e-4e56-963d-83bdee4a2124\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-599f646c7d-s6ngz" Mar 18 09:12:20.565020 master-0 kubenswrapper[28766]: I0318 09:12:20.564942 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/6cc17895-7455-4175-b335-898329eb83af-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-599f646c7d-2xxkb\" (UID: \"6cc17895-7455-4175-b335-898329eb83af\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-599f646c7d-2xxkb" Mar 18 09:12:20.565020 master-0 kubenswrapper[28766]: I0318 09:12:20.565030 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/51a5655a-e87e-4e56-963d-83bdee4a2124-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-599f646c7d-s6ngz\" (UID: \"51a5655a-e87e-4e56-963d-83bdee4a2124\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-599f646c7d-s6ngz" Mar 18 09:12:20.565292 master-0 kubenswrapper[28766]: I0318 09:12:20.565064 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/6cc17895-7455-4175-b335-898329eb83af-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-599f646c7d-2xxkb\" (UID: 
\"6cc17895-7455-4175-b335-898329eb83af\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-599f646c7d-2xxkb" Mar 18 09:12:20.569215 master-0 kubenswrapper[28766]: I0318 09:12:20.569158 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/51a5655a-e87e-4e56-963d-83bdee4a2124-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-599f646c7d-s6ngz\" (UID: \"51a5655a-e87e-4e56-963d-83bdee4a2124\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-599f646c7d-s6ngz" Mar 18 09:12:20.570735 master-0 kubenswrapper[28766]: I0318 09:12:20.570698 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/51a5655a-e87e-4e56-963d-83bdee4a2124-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-599f646c7d-s6ngz\" (UID: \"51a5655a-e87e-4e56-963d-83bdee4a2124\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-599f646c7d-s6ngz" Mar 18 09:12:20.667135 master-0 kubenswrapper[28766]: I0318 09:12:20.666962 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/6cc17895-7455-4175-b335-898329eb83af-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-599f646c7d-2xxkb\" (UID: \"6cc17895-7455-4175-b335-898329eb83af\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-599f646c7d-2xxkb" Mar 18 09:12:20.667135 master-0 kubenswrapper[28766]: I0318 09:12:20.667105 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/6cc17895-7455-4175-b335-898329eb83af-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-599f646c7d-2xxkb\" (UID: \"6cc17895-7455-4175-b335-898329eb83af\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-599f646c7d-2xxkb" Mar 18 
09:12:20.670583 master-0 kubenswrapper[28766]: I0318 09:12:20.670560 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/6cc17895-7455-4175-b335-898329eb83af-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-599f646c7d-2xxkb\" (UID: \"6cc17895-7455-4175-b335-898329eb83af\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-599f646c7d-2xxkb" Mar 18 09:12:20.680887 master-0 kubenswrapper[28766]: I0318 09:12:20.678617 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/6cc17895-7455-4175-b335-898329eb83af-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-599f646c7d-2xxkb\" (UID: \"6cc17895-7455-4175-b335-898329eb83af\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-599f646c7d-2xxkb" Mar 18 09:12:20.689880 master-0 kubenswrapper[28766]: I0318 09:12:20.687762 28766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-599f646c7d-s6ngz" Mar 18 09:12:20.749236 master-0 kubenswrapper[28766]: I0318 09:12:20.749180 28766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-599f646c7d-2xxkb" Mar 18 09:12:20.841279 master-0 kubenswrapper[28766]: I0318 09:12:20.841226 28766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-operator-6dd7dd855f-gwzhl"] Mar 18 09:12:20.842233 master-0 kubenswrapper[28766]: I0318 09:12:20.842204 28766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-operator-6dd7dd855f-gwzhl" Mar 18 09:12:20.848673 master-0 kubenswrapper[28766]: I0318 09:12:20.848640 28766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-tls" Mar 18 09:12:20.863481 master-0 kubenswrapper[28766]: I0318 09:12:20.863425 28766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-6dd7dd855f-gwzhl"] Mar 18 09:12:20.974022 master-0 kubenswrapper[28766]: I0318 09:12:20.973541 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sd2nw\" (UniqueName: \"kubernetes.io/projected/76c81539-3333-4c7d-8dc0-5168188d910f-kube-api-access-sd2nw\") pod \"observability-operator-6dd7dd855f-gwzhl\" (UID: \"76c81539-3333-4c7d-8dc0-5168188d910f\") " pod="openshift-operators/observability-operator-6dd7dd855f-gwzhl" Mar 18 09:12:20.974022 master-0 kubenswrapper[28766]: I0318 09:12:20.973737 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/76c81539-3333-4c7d-8dc0-5168188d910f-observability-operator-tls\") pod \"observability-operator-6dd7dd855f-gwzhl\" (UID: \"76c81539-3333-4c7d-8dc0-5168188d910f\") " pod="openshift-operators/observability-operator-6dd7dd855f-gwzhl" Mar 18 09:12:21.076901 master-0 kubenswrapper[28766]: I0318 09:12:21.076016 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/76c81539-3333-4c7d-8dc0-5168188d910f-observability-operator-tls\") pod \"observability-operator-6dd7dd855f-gwzhl\" (UID: \"76c81539-3333-4c7d-8dc0-5168188d910f\") " pod="openshift-operators/observability-operator-6dd7dd855f-gwzhl" Mar 18 09:12:21.076901 master-0 kubenswrapper[28766]: I0318 09:12:21.076087 28766 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-sd2nw\" (UniqueName: \"kubernetes.io/projected/76c81539-3333-4c7d-8dc0-5168188d910f-kube-api-access-sd2nw\") pod \"observability-operator-6dd7dd855f-gwzhl\" (UID: \"76c81539-3333-4c7d-8dc0-5168188d910f\") " pod="openshift-operators/observability-operator-6dd7dd855f-gwzhl"
Mar 18 09:12:21.080802 master-0 kubenswrapper[28766]: I0318 09:12:21.080769 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/76c81539-3333-4c7d-8dc0-5168188d910f-observability-operator-tls\") pod \"observability-operator-6dd7dd855f-gwzhl\" (UID: \"76c81539-3333-4c7d-8dc0-5168188d910f\") " pod="openshift-operators/observability-operator-6dd7dd855f-gwzhl"
Mar 18 09:12:21.128882 master-0 kubenswrapper[28766]: I0318 09:12:21.125697 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sd2nw\" (UniqueName: \"kubernetes.io/projected/76c81539-3333-4c7d-8dc0-5168188d910f-kube-api-access-sd2nw\") pod \"observability-operator-6dd7dd855f-gwzhl\" (UID: \"76c81539-3333-4c7d-8dc0-5168188d910f\") " pod="openshift-operators/observability-operator-6dd7dd855f-gwzhl"
Mar 18 09:12:21.168138 master-0 kubenswrapper[28766]: I0318 09:12:21.168072 28766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-6dd7dd855f-gwzhl"
Mar 18 09:12:21.331883 master-0 kubenswrapper[28766]: I0318 09:12:21.327836 28766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/perses-operator-69f4f7555f-6tjsm"]
Mar 18 09:12:21.331883 master-0 kubenswrapper[28766]: I0318 09:12:21.328837 28766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-69f4f7555f-6tjsm"
Mar 18 09:12:21.332311 master-0 kubenswrapper[28766]: I0318 09:12:21.332081 28766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"perses-operator-service-cert"
Mar 18 09:12:21.354388 master-0 kubenswrapper[28766]: I0318 09:12:21.354308 28766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-69f4f7555f-6tjsm"]
Mar 18 09:12:21.489771 master-0 kubenswrapper[28766]: I0318 09:12:21.489686 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/e7184374-f735-4910-b013-4248e1c24f8a-openshift-service-ca\") pod \"perses-operator-69f4f7555f-6tjsm\" (UID: \"e7184374-f735-4910-b013-4248e1c24f8a\") " pod="openshift-operators/perses-operator-69f4f7555f-6tjsm"
Mar 18 09:12:21.489771 master-0 kubenswrapper[28766]: I0318 09:12:21.489772 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e7184374-f735-4910-b013-4248e1c24f8a-webhook-cert\") pod \"perses-operator-69f4f7555f-6tjsm\" (UID: \"e7184374-f735-4910-b013-4248e1c24f8a\") " pod="openshift-operators/perses-operator-69f4f7555f-6tjsm"
Mar 18 09:12:21.490717 master-0 kubenswrapper[28766]: I0318 09:12:21.489875 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zmq4d\" (UniqueName: \"kubernetes.io/projected/e7184374-f735-4910-b013-4248e1c24f8a-kube-api-access-zmq4d\") pod \"perses-operator-69f4f7555f-6tjsm\" (UID: \"e7184374-f735-4910-b013-4248e1c24f8a\") " pod="openshift-operators/perses-operator-69f4f7555f-6tjsm"
Mar 18 09:12:21.490717 master-0 kubenswrapper[28766]: I0318 09:12:21.489956 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e7184374-f735-4910-b013-4248e1c24f8a-apiservice-cert\") pod \"perses-operator-69f4f7555f-6tjsm\" (UID: \"e7184374-f735-4910-b013-4248e1c24f8a\") " pod="openshift-operators/perses-operator-69f4f7555f-6tjsm"
Mar 18 09:12:21.591956 master-0 kubenswrapper[28766]: I0318 09:12:21.591773 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e7184374-f735-4910-b013-4248e1c24f8a-apiservice-cert\") pod \"perses-operator-69f4f7555f-6tjsm\" (UID: \"e7184374-f735-4910-b013-4248e1c24f8a\") " pod="openshift-operators/perses-operator-69f4f7555f-6tjsm"
Mar 18 09:12:21.592151 master-0 kubenswrapper[28766]: I0318 09:12:21.592043 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/e7184374-f735-4910-b013-4248e1c24f8a-openshift-service-ca\") pod \"perses-operator-69f4f7555f-6tjsm\" (UID: \"e7184374-f735-4910-b013-4248e1c24f8a\") " pod="openshift-operators/perses-operator-69f4f7555f-6tjsm"
Mar 18 09:12:21.592207 master-0 kubenswrapper[28766]: I0318 09:12:21.592153 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e7184374-f735-4910-b013-4248e1c24f8a-webhook-cert\") pod \"perses-operator-69f4f7555f-6tjsm\" (UID: \"e7184374-f735-4910-b013-4248e1c24f8a\") " pod="openshift-operators/perses-operator-69f4f7555f-6tjsm"
Mar 18 09:12:21.592343 master-0 kubenswrapper[28766]: I0318 09:12:21.592303 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zmq4d\" (UniqueName: \"kubernetes.io/projected/e7184374-f735-4910-b013-4248e1c24f8a-kube-api-access-zmq4d\") pod \"perses-operator-69f4f7555f-6tjsm\" (UID: \"e7184374-f735-4910-b013-4248e1c24f8a\") " pod="openshift-operators/perses-operator-69f4f7555f-6tjsm"
Mar 18 09:12:21.593053 master-0 kubenswrapper[28766]: I0318 09:12:21.593015 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/e7184374-f735-4910-b013-4248e1c24f8a-openshift-service-ca\") pod \"perses-operator-69f4f7555f-6tjsm\" (UID: \"e7184374-f735-4910-b013-4248e1c24f8a\") " pod="openshift-operators/perses-operator-69f4f7555f-6tjsm"
Mar 18 09:12:21.608026 master-0 kubenswrapper[28766]: I0318 09:12:21.596136 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e7184374-f735-4910-b013-4248e1c24f8a-webhook-cert\") pod \"perses-operator-69f4f7555f-6tjsm\" (UID: \"e7184374-f735-4910-b013-4248e1c24f8a\") " pod="openshift-operators/perses-operator-69f4f7555f-6tjsm"
Mar 18 09:12:21.608026 master-0 kubenswrapper[28766]: I0318 09:12:21.598764 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e7184374-f735-4910-b013-4248e1c24f8a-apiservice-cert\") pod \"perses-operator-69f4f7555f-6tjsm\" (UID: \"e7184374-f735-4910-b013-4248e1c24f8a\") " pod="openshift-operators/perses-operator-69f4f7555f-6tjsm"
Mar 18 09:12:21.612244 master-0 kubenswrapper[28766]: I0318 09:12:21.612162 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zmq4d\" (UniqueName: \"kubernetes.io/projected/e7184374-f735-4910-b013-4248e1c24f8a-kube-api-access-zmq4d\") pod \"perses-operator-69f4f7555f-6tjsm\" (UID: \"e7184374-f735-4910-b013-4248e1c24f8a\") " pod="openshift-operators/perses-operator-69f4f7555f-6tjsm"
Mar 18 09:12:21.673283 master-0 kubenswrapper[28766]: I0318 09:12:21.671378 28766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-69f4f7555f-6tjsm"
Mar 18 09:12:23.855957 master-0 kubenswrapper[28766]: I0318 09:12:23.855899 28766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-599f646c7d-s6ngz"]
Mar 18 09:12:23.994326 master-0 kubenswrapper[28766]: I0318 09:12:23.994238 28766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-599f646c7d-s6ngz" event={"ID":"51a5655a-e87e-4e56-963d-83bdee4a2124","Type":"ContainerStarted","Data":"3e1cad7d4e9fa140e71eefa00b0eb57d430673613e141570f1d6c0503eeaf4a4"}
Mar 18 09:12:23.997520 master-0 kubenswrapper[28766]: I0318 09:12:23.997465 28766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-88b68f8d8-w9g9k" event={"ID":"f8b3af47-0f7b-422a-905a-0e3e139e2f7e","Type":"ContainerStarted","Data":"a9b52d01c9f11a68f1aef35be408aa3c7d8810319bf105abe2e805a5bc637562"}
Mar 18 09:12:23.999536 master-0 kubenswrapper[28766]: I0318 09:12:23.999480 28766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-88b68f8d8-w9g9k"
Mar 18 09:12:24.057583 master-0 kubenswrapper[28766]: I0318 09:12:24.056951 28766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-88b68f8d8-w9g9k" podStartSLOduration=2.974653587 podStartE2EDuration="12.056927291s" podCreationTimestamp="2026-03-18 09:12:12 +0000 UTC" firstStartedPulling="2026-03-18 09:12:14.271866245 +0000 UTC m=+487.286124901" lastFinishedPulling="2026-03-18 09:12:23.354139949 +0000 UTC m=+496.368398605" observedRunningTime="2026-03-18 09:12:24.048729709 +0000 UTC m=+497.062988375" watchObservedRunningTime="2026-03-18 09:12:24.056927291 +0000 UTC m=+497.071185977"
Mar 18 09:12:24.159958 master-0 kubenswrapper[28766]: W0318 09:12:24.159892 28766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6cc17895_7455_4175_b335_898329eb83af.slice/crio-af7aeccbc61c59c5484a269e0045a321c1a20a2920763942baa718052c12309e WatchSource:0}: Error finding container af7aeccbc61c59c5484a269e0045a321c1a20a2920763942baa718052c12309e: Status 404 returned error can't find the container with id af7aeccbc61c59c5484a269e0045a321c1a20a2920763942baa718052c12309e
Mar 18 09:12:24.165048 master-0 kubenswrapper[28766]: I0318 09:12:24.164510 28766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-599f646c7d-2xxkb"]
Mar 18 09:12:24.173036 master-0 kubenswrapper[28766]: I0318 09:12:24.172983 28766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-6dd7dd855f-gwzhl"]
Mar 18 09:12:24.179043 master-0 kubenswrapper[28766]: I0318 09:12:24.178236 28766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-69f4f7555f-6tjsm"]
Mar 18 09:12:24.184119 master-0 kubenswrapper[28766]: I0318 09:12:24.183627 28766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-8ff7d675-s8bf2"]
Mar 18 09:12:25.014048 master-0 kubenswrapper[28766]: I0318 09:12:25.013135 28766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-599f646c7d-2xxkb" event={"ID":"6cc17895-7455-4175-b335-898329eb83af","Type":"ContainerStarted","Data":"af7aeccbc61c59c5484a269e0045a321c1a20a2920763942baa718052c12309e"}
Mar 18 09:12:25.016314 master-0 kubenswrapper[28766]: I0318 09:12:25.014936 28766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-69f4f7555f-6tjsm" event={"ID":"e7184374-f735-4910-b013-4248e1c24f8a","Type":"ContainerStarted","Data":"d23392266fe7ae1b46430d509c6340f7d12005e51c05d86152d66fe49413a8d7"}
Mar 18 09:12:25.017267 master-0 kubenswrapper[28766]: I0318 09:12:25.017202 28766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-8ff7d675-s8bf2" event={"ID":"530e8baf-e772-4beb-9e9c-62026f58fe64","Type":"ContainerStarted","Data":"ea9989fd6249f265fddbfbe7716f04fc642438ff8400430aeed39cafa5c85847"}
Mar 18 09:12:25.018867 master-0 kubenswrapper[28766]: I0318 09:12:25.018809 28766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-6dd7dd855f-gwzhl" event={"ID":"76c81539-3333-4c7d-8dc0-5168188d910f","Type":"ContainerStarted","Data":"4062da867f78428b6eeab6a03c2ede2c3ef1d3a7fede6058c606b2f4ab11a12e"}
Mar 18 09:12:38.263214 master-0 kubenswrapper[28766]: I0318 09:12:38.263138 28766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-6dd7dd855f-gwzhl" event={"ID":"76c81539-3333-4c7d-8dc0-5168188d910f","Type":"ContainerStarted","Data":"649506b96117d9436dce915fae112b406d2887fb49a688dda995ad04e9258d2d"}
Mar 18 09:12:38.264065 master-0 kubenswrapper[28766]: I0318 09:12:38.263373 28766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/observability-operator-6dd7dd855f-gwzhl"
Mar 18 09:12:38.265806 master-0 kubenswrapper[28766]: I0318 09:12:38.265749 28766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-599f646c7d-2xxkb" event={"ID":"6cc17895-7455-4175-b335-898329eb83af","Type":"ContainerStarted","Data":"3f8640c81aecf3f033002afed3d9d2199a973b7bce42b04ab6f96d139f9cb1d3"}
Mar 18 09:12:38.268218 master-0 kubenswrapper[28766]: I0318 09:12:38.268169 28766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-6dd7dd855f-gwzhl"
Mar 18 09:12:38.268387 master-0 kubenswrapper[28766]: I0318 09:12:38.268322 28766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-69f4f7555f-6tjsm" event={"ID":"e7184374-f735-4910-b013-4248e1c24f8a","Type":"ContainerStarted","Data":"1a9f761dd4e83ae876514f6a9d3d911cccfff081f664520631bce4d2d1224315"}
Mar 18 09:12:38.269121 master-0 kubenswrapper[28766]: I0318 09:12:38.269077 28766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/perses-operator-69f4f7555f-6tjsm"
Mar 18 09:12:38.270769 master-0 kubenswrapper[28766]: I0318 09:12:38.270721 28766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-8ff7d675-s8bf2" event={"ID":"530e8baf-e772-4beb-9e9c-62026f58fe64","Type":"ContainerStarted","Data":"1e4c7df4f4301d12635bfb0a2bfb1808e08a5d4806ef9a62c41252b2059a454e"}
Mar 18 09:12:38.273313 master-0 kubenswrapper[28766]: I0318 09:12:38.273243 28766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-599f646c7d-s6ngz" event={"ID":"51a5655a-e87e-4e56-963d-83bdee4a2124","Type":"ContainerStarted","Data":"60af95f782ef40a683bbd78213bd6cc9135933d156fd74e854657c79d1c51792"}
Mar 18 09:12:38.317674 master-0 kubenswrapper[28766]: I0318 09:12:38.317580 28766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-operator-6dd7dd855f-gwzhl" podStartSLOduration=5.360981232 podStartE2EDuration="18.317562935s" podCreationTimestamp="2026-03-18 09:12:20 +0000 UTC" firstStartedPulling="2026-03-18 09:12:24.179789589 +0000 UTC m=+497.194048255" lastFinishedPulling="2026-03-18 09:12:37.136371292 +0000 UTC m=+510.150629958" observedRunningTime="2026-03-18 09:12:38.311496358 +0000 UTC m=+511.325755024" watchObservedRunningTime="2026-03-18 09:12:38.317562935 +0000 UTC m=+511.331821601"
Mar 18 09:12:38.357921 master-0 kubenswrapper[28766]: I0318 09:12:38.357827 28766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-599f646c7d-2xxkb" podStartSLOduration=5.395898728 podStartE2EDuration="18.357808829s" podCreationTimestamp="2026-03-18 09:12:20 +0000 UTC" firstStartedPulling="2026-03-18 09:12:24.168886756 +0000 UTC m=+497.183145422" lastFinishedPulling="2026-03-18 09:12:37.130796857 +0000 UTC m=+510.145055523" observedRunningTime="2026-03-18 09:12:38.355292584 +0000 UTC m=+511.369551250" watchObservedRunningTime="2026-03-18 09:12:38.357808829 +0000 UTC m=+511.372067495"
Mar 18 09:12:38.400884 master-0 kubenswrapper[28766]: I0318 09:12:38.400215 28766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-8ff7d675-s8bf2" podStartSLOduration=6.462172897 podStartE2EDuration="19.400197649s" podCreationTimestamp="2026-03-18 09:12:19 +0000 UTC" firstStartedPulling="2026-03-18 09:12:24.191496682 +0000 UTC m=+497.205755348" lastFinishedPulling="2026-03-18 09:12:37.129521434 +0000 UTC m=+510.143780100" observedRunningTime="2026-03-18 09:12:38.398602248 +0000 UTC m=+511.412860914" watchObservedRunningTime="2026-03-18 09:12:38.400197649 +0000 UTC m=+511.414456315"
Mar 18 09:12:38.466912 master-0 kubenswrapper[28766]: I0318 09:12:38.465062 28766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/perses-operator-69f4f7555f-6tjsm" podStartSLOduration=4.514039942 podStartE2EDuration="17.465045961s" podCreationTimestamp="2026-03-18 09:12:21 +0000 UTC" firstStartedPulling="2026-03-18 09:12:24.179759008 +0000 UTC m=+497.194017674" lastFinishedPulling="2026-03-18 09:12:37.130765027 +0000 UTC m=+510.145023693" observedRunningTime="2026-03-18 09:12:38.461302744 +0000 UTC m=+511.475561410" watchObservedRunningTime="2026-03-18 09:12:38.465045961 +0000 UTC m=+511.479304627"
Mar 18 09:12:38.608913 master-0 kubenswrapper[28766]: I0318 09:12:38.605598 28766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-599f646c7d-s6ngz" podStartSLOduration=5.287505766 podStartE2EDuration="18.605579957s" podCreationTimestamp="2026-03-18 09:12:20 +0000 UTC" firstStartedPulling="2026-03-18 09:12:23.871150722 +0000 UTC m=+496.885409388" lastFinishedPulling="2026-03-18 09:12:37.189224903 +0000 UTC m=+510.203483579" observedRunningTime="2026-03-18 09:12:38.578199767 +0000 UTC m=+511.592458443" watchObservedRunningTime="2026-03-18 09:12:38.605579957 +0000 UTC m=+511.619838623"
Mar 18 09:12:42.728064 master-0 kubenswrapper[28766]: I0318 09:12:42.724169 28766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-88b68f8d8-w9g9k"
Mar 18 09:12:51.675159 master-0 kubenswrapper[28766]: I0318 09:12:51.675075 28766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/perses-operator-69f4f7555f-6tjsm"
Mar 18 09:12:52.294193 master-0 kubenswrapper[28766]: I0318 09:12:52.294119 28766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-65f5d58555-j282b"
Mar 18 09:13:00.347789 master-0 kubenswrapper[28766]: I0318 09:13:00.347235 28766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-bcc4b6f68-nqlq6"]
Mar 18 09:13:00.348421 master-0 kubenswrapper[28766]: I0318 09:13:00.348280 28766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-nqlq6"
Mar 18 09:13:00.354714 master-0 kubenswrapper[28766]: I0318 09:13:00.351795 28766 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert"
Mar 18 09:13:00.375875 master-0 kubenswrapper[28766]: I0318 09:13:00.373084 28766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-bcc4b6f68-nqlq6"]
Mar 18 09:13:00.450918 master-0 kubenswrapper[28766]: I0318 09:13:00.449782 28766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-czkll"]
Mar 18 09:13:00.459970 master-0 kubenswrapper[28766]: I0318 09:13:00.454936 28766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-czkll"
Mar 18 09:13:00.461464 master-0 kubenswrapper[28766]: I0318 09:13:00.460612 28766 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret"
Mar 18 09:13:00.467016 master-0 kubenswrapper[28766]: I0318 09:13:00.466588 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup"
Mar 18 09:13:00.539542 master-0 kubenswrapper[28766]: I0318 09:13:00.539457 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/54a7f143-e51f-475d-9c2d-21f1c3979705-cert\") pod \"frr-k8s-webhook-server-bcc4b6f68-nqlq6\" (UID: \"54a7f143-e51f-475d-9c2d-21f1c3979705\") " pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-nqlq6"
Mar 18 09:13:00.539762 master-0 kubenswrapper[28766]: I0318 09:13:00.539556 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l4964\" (UniqueName: \"kubernetes.io/projected/54a7f143-e51f-475d-9c2d-21f1c3979705-kube-api-access-l4964\") pod \"frr-k8s-webhook-server-bcc4b6f68-nqlq6\" (UID: \"54a7f143-e51f-475d-9c2d-21f1c3979705\") " pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-nqlq6"
Mar 18 09:13:00.595536 master-0 kubenswrapper[28766]: I0318 09:13:00.593876 28766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-g7bjn"]
Mar 18 09:13:00.595536 master-0 kubenswrapper[28766]: I0318 09:13:00.595008 28766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-g7bjn"
Mar 18 09:13:00.601546 master-0 kubenswrapper[28766]: I0318 09:13:00.601371 28766 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret"
Mar 18 09:13:00.601673 master-0 kubenswrapper[28766]: I0318 09:13:00.601585 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2"
Mar 18 09:13:00.601771 master-0 kubenswrapper[28766]: I0318 09:13:00.601749 28766 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist"
Mar 18 09:13:00.630116 master-0 kubenswrapper[28766]: I0318 09:13:00.630087 28766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-7bb4cc7c98-vthq7"]
Mar 18 09:13:00.631424 master-0 kubenswrapper[28766]: I0318 09:13:00.631397 28766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-7bb4cc7c98-vthq7"
Mar 18 09:13:00.646873 master-0 kubenswrapper[28766]: I0318 09:13:00.642710 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/559d9b30-44e9-4cdd-8c46-7cab6e8f2285-frr-conf\") pod \"frr-k8s-czkll\" (UID: \"559d9b30-44e9-4cdd-8c46-7cab6e8f2285\") " pod="metallb-system/frr-k8s-czkll"
Mar 18 09:13:00.646873 master-0 kubenswrapper[28766]: I0318 09:13:00.642798 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-98k7h\" (UniqueName: \"kubernetes.io/projected/559d9b30-44e9-4cdd-8c46-7cab6e8f2285-kube-api-access-98k7h\") pod \"frr-k8s-czkll\" (UID: \"559d9b30-44e9-4cdd-8c46-7cab6e8f2285\") " pod="metallb-system/frr-k8s-czkll"
Mar 18 09:13:00.646873 master-0 kubenswrapper[28766]: I0318 09:13:00.642823 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/559d9b30-44e9-4cdd-8c46-7cab6e8f2285-metrics-certs\") pod \"frr-k8s-czkll\" (UID: \"559d9b30-44e9-4cdd-8c46-7cab6e8f2285\") " pod="metallb-system/frr-k8s-czkll"
Mar 18 09:13:00.646873 master-0 kubenswrapper[28766]: I0318 09:13:00.642845 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/559d9b30-44e9-4cdd-8c46-7cab6e8f2285-metrics\") pod \"frr-k8s-czkll\" (UID: \"559d9b30-44e9-4cdd-8c46-7cab6e8f2285\") " pod="metallb-system/frr-k8s-czkll"
Mar 18 09:13:00.646873 master-0 kubenswrapper[28766]: I0318 09:13:00.642890 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/559d9b30-44e9-4cdd-8c46-7cab6e8f2285-frr-sockets\") pod \"frr-k8s-czkll\" (UID: \"559d9b30-44e9-4cdd-8c46-7cab6e8f2285\") " pod="metallb-system/frr-k8s-czkll"
Mar 18 09:13:00.646873 master-0 kubenswrapper[28766]: I0318 09:13:00.642916 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/54a7f143-e51f-475d-9c2d-21f1c3979705-cert\") pod \"frr-k8s-webhook-server-bcc4b6f68-nqlq6\" (UID: \"54a7f143-e51f-475d-9c2d-21f1c3979705\") " pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-nqlq6"
Mar 18 09:13:00.646873 master-0 kubenswrapper[28766]: I0318 09:13:00.643048 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l4964\" (UniqueName: \"kubernetes.io/projected/54a7f143-e51f-475d-9c2d-21f1c3979705-kube-api-access-l4964\") pod \"frr-k8s-webhook-server-bcc4b6f68-nqlq6\" (UID: \"54a7f143-e51f-475d-9c2d-21f1c3979705\") " pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-nqlq6"
Mar 18 09:13:00.646873 master-0 kubenswrapper[28766]: I0318 09:13:00.643097 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/559d9b30-44e9-4cdd-8c46-7cab6e8f2285-frr-startup\") pod \"frr-k8s-czkll\" (UID: \"559d9b30-44e9-4cdd-8c46-7cab6e8f2285\") " pod="metallb-system/frr-k8s-czkll"
Mar 18 09:13:00.646873 master-0 kubenswrapper[28766]: I0318 09:13:00.643135 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/559d9b30-44e9-4cdd-8c46-7cab6e8f2285-reloader\") pod \"frr-k8s-czkll\" (UID: \"559d9b30-44e9-4cdd-8c46-7cab6e8f2285\") " pod="metallb-system/frr-k8s-czkll"
Mar 18 09:13:00.646873 master-0 kubenswrapper[28766]: I0318 09:13:00.644595 28766 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret"
Mar 18 09:13:00.658873 master-0 kubenswrapper[28766]: I0318 09:13:00.654708 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/54a7f143-e51f-475d-9c2d-21f1c3979705-cert\") pod \"frr-k8s-webhook-server-bcc4b6f68-nqlq6\" (UID: \"54a7f143-e51f-475d-9c2d-21f1c3979705\") " pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-nqlq6"
Mar 18 09:13:00.686874 master-0 kubenswrapper[28766]: I0318 09:13:00.681476 28766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-7bb4cc7c98-vthq7"]
Mar 18 09:13:00.713878 master-0 kubenswrapper[28766]: I0318 09:13:00.710766 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l4964\" (UniqueName: \"kubernetes.io/projected/54a7f143-e51f-475d-9c2d-21f1c3979705-kube-api-access-l4964\") pod \"frr-k8s-webhook-server-bcc4b6f68-nqlq6\" (UID: \"54a7f143-e51f-475d-9c2d-21f1c3979705\") " pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-nqlq6"
Mar 18 09:13:00.751878 master-0 kubenswrapper[28766]: I0318 09:13:00.751316 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/559d9b30-44e9-4cdd-8c46-7cab6e8f2285-frr-startup\") pod \"frr-k8s-czkll\" (UID: \"559d9b30-44e9-4cdd-8c46-7cab6e8f2285\") " pod="metallb-system/frr-k8s-czkll"
Mar 18 09:13:00.751878 master-0 kubenswrapper[28766]: I0318 09:13:00.751385 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/559d9b30-44e9-4cdd-8c46-7cab6e8f2285-reloader\") pod \"frr-k8s-czkll\" (UID: \"559d9b30-44e9-4cdd-8c46-7cab6e8f2285\") " pod="metallb-system/frr-k8s-czkll"
Mar 18 09:13:00.751878 master-0 kubenswrapper[28766]: I0318 09:13:00.751477 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/79b20ae6-1660-40b8-9a44-2a3989042d82-metrics-certs\") pod \"controller-7bb4cc7c98-vthq7\" (UID: \"79b20ae6-1660-40b8-9a44-2a3989042d82\") " pod="metallb-system/controller-7bb4cc7c98-vthq7"
Mar 18 09:13:00.751878 master-0 kubenswrapper[28766]: I0318 09:13:00.751504 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/79b20ae6-1660-40b8-9a44-2a3989042d82-cert\") pod \"controller-7bb4cc7c98-vthq7\" (UID: \"79b20ae6-1660-40b8-9a44-2a3989042d82\") " pod="metallb-system/controller-7bb4cc7c98-vthq7"
Mar 18 09:13:00.751878 master-0 kubenswrapper[28766]: I0318 09:13:00.751529 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/559d9b30-44e9-4cdd-8c46-7cab6e8f2285-frr-conf\") pod \"frr-k8s-czkll\" (UID: \"559d9b30-44e9-4cdd-8c46-7cab6e8f2285\") " pod="metallb-system/frr-k8s-czkll"
Mar 18 09:13:00.751878 master-0 kubenswrapper[28766]: I0318 09:13:00.751568 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8f4f0018-7ed3-48a7-bb99-0e6de8fc38fc-metrics-certs\") pod \"speaker-g7bjn\" (UID: \"8f4f0018-7ed3-48a7-bb99-0e6de8fc38fc\") " pod="metallb-system/speaker-g7bjn"
Mar 18 09:13:00.751878 master-0 kubenswrapper[28766]: I0318 09:13:00.751592 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8wmn4\" (UniqueName: \"kubernetes.io/projected/8f4f0018-7ed3-48a7-bb99-0e6de8fc38fc-kube-api-access-8wmn4\") pod \"speaker-g7bjn\" (UID: \"8f4f0018-7ed3-48a7-bb99-0e6de8fc38fc\") " pod="metallb-system/speaker-g7bjn"
Mar 18 09:13:00.751878 master-0 kubenswrapper[28766]: I0318 09:13:00.751625 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/8f4f0018-7ed3-48a7-bb99-0e6de8fc38fc-memberlist\") pod \"speaker-g7bjn\" (UID: \"8f4f0018-7ed3-48a7-bb99-0e6de8fc38fc\") " pod="metallb-system/speaker-g7bjn"
Mar 18 09:13:00.751878 master-0 kubenswrapper[28766]: I0318 09:13:00.751650 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-98k7h\" (UniqueName: \"kubernetes.io/projected/559d9b30-44e9-4cdd-8c46-7cab6e8f2285-kube-api-access-98k7h\") pod \"frr-k8s-czkll\" (UID: \"559d9b30-44e9-4cdd-8c46-7cab6e8f2285\") " pod="metallb-system/frr-k8s-czkll"
Mar 18 09:13:00.751878 master-0 kubenswrapper[28766]: I0318 09:13:00.751675 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/559d9b30-44e9-4cdd-8c46-7cab6e8f2285-metrics-certs\") pod \"frr-k8s-czkll\" (UID: \"559d9b30-44e9-4cdd-8c46-7cab6e8f2285\") " pod="metallb-system/frr-k8s-czkll"
Mar 18 09:13:00.751878 master-0 kubenswrapper[28766]: I0318 09:13:00.751705 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/559d9b30-44e9-4cdd-8c46-7cab6e8f2285-metrics\") pod \"frr-k8s-czkll\" (UID: \"559d9b30-44e9-4cdd-8c46-7cab6e8f2285\") " pod="metallb-system/frr-k8s-czkll"
Mar 18 09:13:00.751878 master-0 kubenswrapper[28766]: I0318 09:13:00.751746 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wc7k7\" (UniqueName: \"kubernetes.io/projected/79b20ae6-1660-40b8-9a44-2a3989042d82-kube-api-access-wc7k7\") pod \"controller-7bb4cc7c98-vthq7\" (UID: \"79b20ae6-1660-40b8-9a44-2a3989042d82\") " pod="metallb-system/controller-7bb4cc7c98-vthq7"
Mar 18 09:13:00.751878 master-0 kubenswrapper[28766]: I0318 09:13:00.751771 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/8f4f0018-7ed3-48a7-bb99-0e6de8fc38fc-metallb-excludel2\") pod \"speaker-g7bjn\" (UID: \"8f4f0018-7ed3-48a7-bb99-0e6de8fc38fc\") " pod="metallb-system/speaker-g7bjn"
Mar 18 09:13:00.751878 master-0 kubenswrapper[28766]: I0318 09:13:00.751795 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/559d9b30-44e9-4cdd-8c46-7cab6e8f2285-frr-sockets\") pod \"frr-k8s-czkll\" (UID: \"559d9b30-44e9-4cdd-8c46-7cab6e8f2285\") " pod="metallb-system/frr-k8s-czkll"
Mar 18 09:13:00.752485 master-0 kubenswrapper[28766]: I0318 09:13:00.752317 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/559d9b30-44e9-4cdd-8c46-7cab6e8f2285-frr-sockets\") pod \"frr-k8s-czkll\" (UID: \"559d9b30-44e9-4cdd-8c46-7cab6e8f2285\") " pod="metallb-system/frr-k8s-czkll"
Mar 18 09:13:00.758105 master-0 kubenswrapper[28766]: I0318 09:13:00.753895 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/559d9b30-44e9-4cdd-8c46-7cab6e8f2285-reloader\") pod \"frr-k8s-czkll\" (UID: \"559d9b30-44e9-4cdd-8c46-7cab6e8f2285\") " pod="metallb-system/frr-k8s-czkll"
Mar 18 09:13:00.758105 master-0 kubenswrapper[28766]: I0318 09:13:00.754093 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/559d9b30-44e9-4cdd-8c46-7cab6e8f2285-frr-conf\") pod \"frr-k8s-czkll\" (UID: \"559d9b30-44e9-4cdd-8c46-7cab6e8f2285\") " pod="metallb-system/frr-k8s-czkll"
Mar 18 09:13:00.758105 master-0 kubenswrapper[28766]: I0318 09:13:00.754646 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/559d9b30-44e9-4cdd-8c46-7cab6e8f2285-metrics\") pod \"frr-k8s-czkll\" (UID: \"559d9b30-44e9-4cdd-8c46-7cab6e8f2285\") " pod="metallb-system/frr-k8s-czkll"
Mar 18 09:13:00.758105 master-0 kubenswrapper[28766]: I0318 09:13:00.754959 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/559d9b30-44e9-4cdd-8c46-7cab6e8f2285-frr-startup\") pod \"frr-k8s-czkll\" (UID: \"559d9b30-44e9-4cdd-8c46-7cab6e8f2285\") " pod="metallb-system/frr-k8s-czkll"
Mar 18 09:13:00.779871 master-0 kubenswrapper[28766]: I0318 09:13:00.779373 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/559d9b30-44e9-4cdd-8c46-7cab6e8f2285-metrics-certs\") pod \"frr-k8s-czkll\" (UID: \"559d9b30-44e9-4cdd-8c46-7cab6e8f2285\") " pod="metallb-system/frr-k8s-czkll"
Mar 18 09:13:00.797878 master-0 kubenswrapper[28766]: I0318 09:13:00.795709 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-98k7h\" (UniqueName: \"kubernetes.io/projected/559d9b30-44e9-4cdd-8c46-7cab6e8f2285-kube-api-access-98k7h\") pod \"frr-k8s-czkll\" (UID: \"559d9b30-44e9-4cdd-8c46-7cab6e8f2285\") " pod="metallb-system/frr-k8s-czkll"
Mar 18 09:13:00.810418 master-0 kubenswrapper[28766]: I0318 09:13:00.808490 28766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-czkll"
Mar 18 09:13:00.854969 master-0 kubenswrapper[28766]: I0318 09:13:00.853846 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/79b20ae6-1660-40b8-9a44-2a3989042d82-metrics-certs\") pod \"controller-7bb4cc7c98-vthq7\" (UID: \"79b20ae6-1660-40b8-9a44-2a3989042d82\") " pod="metallb-system/controller-7bb4cc7c98-vthq7"
Mar 18 09:13:00.854969 master-0 kubenswrapper[28766]: I0318 09:13:00.853923 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/79b20ae6-1660-40b8-9a44-2a3989042d82-cert\") pod \"controller-7bb4cc7c98-vthq7\" (UID: \"79b20ae6-1660-40b8-9a44-2a3989042d82\") " pod="metallb-system/controller-7bb4cc7c98-vthq7"
Mar 18 09:13:00.854969 master-0 kubenswrapper[28766]: I0318 09:13:00.854007 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8f4f0018-7ed3-48a7-bb99-0e6de8fc38fc-metrics-certs\") pod \"speaker-g7bjn\" (UID: \"8f4f0018-7ed3-48a7-bb99-0e6de8fc38fc\") " pod="metallb-system/speaker-g7bjn"
Mar 18 09:13:00.854969 master-0 kubenswrapper[28766]: I0318 09:13:00.854038 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8wmn4\" (UniqueName: \"kubernetes.io/projected/8f4f0018-7ed3-48a7-bb99-0e6de8fc38fc-kube-api-access-8wmn4\") pod \"speaker-g7bjn\" (UID: \"8f4f0018-7ed3-48a7-bb99-0e6de8fc38fc\") " pod="metallb-system/speaker-g7bjn"
Mar 18 09:13:00.854969 master-0 kubenswrapper[28766]: I0318 09:13:00.854068 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/8f4f0018-7ed3-48a7-bb99-0e6de8fc38fc-memberlist\") pod \"speaker-g7bjn\" (UID: \"8f4f0018-7ed3-48a7-bb99-0e6de8fc38fc\") " pod="metallb-system/speaker-g7bjn"
Mar 18 09:13:00.854969 master-0 kubenswrapper[28766]: I0318 09:13:00.854118 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wc7k7\" (UniqueName: \"kubernetes.io/projected/79b20ae6-1660-40b8-9a44-2a3989042d82-kube-api-access-wc7k7\") pod \"controller-7bb4cc7c98-vthq7\" (UID: \"79b20ae6-1660-40b8-9a44-2a3989042d82\") " pod="metallb-system/controller-7bb4cc7c98-vthq7"
Mar 18 09:13:00.854969 master-0 kubenswrapper[28766]: I0318 09:13:00.854140 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/8f4f0018-7ed3-48a7-bb99-0e6de8fc38fc-metallb-excludel2\") pod \"speaker-g7bjn\" (UID: \"8f4f0018-7ed3-48a7-bb99-0e6de8fc38fc\") " pod="metallb-system/speaker-g7bjn"
Mar 18 09:13:00.855329 master-0 kubenswrapper[28766]: I0318 09:13:00.855026 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/8f4f0018-7ed3-48a7-bb99-0e6de8fc38fc-metallb-excludel2\") pod \"speaker-g7bjn\" (UID: \"8f4f0018-7ed3-48a7-bb99-0e6de8fc38fc\") " pod="metallb-system/speaker-g7bjn"
Mar 18 09:13:00.861635 master-0 kubenswrapper[28766]: I0318 09:13:00.861380 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/79b20ae6-1660-40b8-9a44-2a3989042d82-metrics-certs\") pod \"controller-7bb4cc7c98-vthq7\" (UID: \"79b20ae6-1660-40b8-9a44-2a3989042d82\") " pod="metallb-system/controller-7bb4cc7c98-vthq7"
Mar 18 09:13:00.866948 master-0 kubenswrapper[28766]: E0318 09:13:00.866766 28766 secret.go:189] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found
Mar 18 09:13:00.866948 master-0 kubenswrapper[28766]: E0318 09:13:00.866954 28766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8f4f0018-7ed3-48a7-bb99-0e6de8fc38fc-memberlist podName:8f4f0018-7ed3-48a7-bb99-0e6de8fc38fc nodeName:}" failed. No retries permitted until 2026-03-18 09:13:01.366919706 +0000 UTC m=+534.381178372 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/8f4f0018-7ed3-48a7-bb99-0e6de8fc38fc-memberlist") pod "speaker-g7bjn" (UID: "8f4f0018-7ed3-48a7-bb99-0e6de8fc38fc") : secret "metallb-memberlist" not found
Mar 18 09:13:00.874870 master-0 kubenswrapper[28766]: I0318 09:13:00.872463 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8f4f0018-7ed3-48a7-bb99-0e6de8fc38fc-metrics-certs\") pod \"speaker-g7bjn\" (UID: \"8f4f0018-7ed3-48a7-bb99-0e6de8fc38fc\") " pod="metallb-system/speaker-g7bjn"
Mar 18 09:13:00.874870 master-0 kubenswrapper[28766]: I0318 09:13:00.873971 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/79b20ae6-1660-40b8-9a44-2a3989042d82-cert\") pod \"controller-7bb4cc7c98-vthq7\" (UID: \"79b20ae6-1660-40b8-9a44-2a3989042d82\") " pod="metallb-system/controller-7bb4cc7c98-vthq7"
Mar 18 09:13:00.910907 master-0 kubenswrapper[28766]: I0318 09:13:00.905659 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wc7k7\" (UniqueName: \"kubernetes.io/projected/79b20ae6-1660-40b8-9a44-2a3989042d82-kube-api-access-wc7k7\") pod \"controller-7bb4cc7c98-vthq7\" (UID: \"79b20ae6-1660-40b8-9a44-2a3989042d82\") " pod="metallb-system/controller-7bb4cc7c98-vthq7"
Mar 18 09:13:00.927959 master-0 kubenswrapper[28766]: I0318 09:13:00.926920 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8wmn4\" (UniqueName: \"kubernetes.io/projected/8f4f0018-7ed3-48a7-bb99-0e6de8fc38fc-kube-api-access-8wmn4\") pod \"speaker-g7bjn\" (UID: \"8f4f0018-7ed3-48a7-bb99-0e6de8fc38fc\") " pod="metallb-system/speaker-g7bjn"
Mar 18 09:13:00.989877 master-0 kubenswrapper[28766]: I0318 09:13:00.987406 28766
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-nqlq6" Mar 18 09:13:01.036875 master-0 kubenswrapper[28766]: I0318 09:13:01.033361 28766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-7bb4cc7c98-vthq7" Mar 18 09:13:01.396430 master-0 kubenswrapper[28766]: I0318 09:13:01.396260 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/8f4f0018-7ed3-48a7-bb99-0e6de8fc38fc-memberlist\") pod \"speaker-g7bjn\" (UID: \"8f4f0018-7ed3-48a7-bb99-0e6de8fc38fc\") " pod="metallb-system/speaker-g7bjn" Mar 18 09:13:01.396937 master-0 kubenswrapper[28766]: E0318 09:13:01.396563 28766 secret.go:189] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Mar 18 09:13:01.396937 master-0 kubenswrapper[28766]: E0318 09:13:01.396644 28766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8f4f0018-7ed3-48a7-bb99-0e6de8fc38fc-memberlist podName:8f4f0018-7ed3-48a7-bb99-0e6de8fc38fc nodeName:}" failed. No retries permitted until 2026-03-18 09:13:02.396620168 +0000 UTC m=+535.410878834 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/8f4f0018-7ed3-48a7-bb99-0e6de8fc38fc-memberlist") pod "speaker-g7bjn" (UID: "8f4f0018-7ed3-48a7-bb99-0e6de8fc38fc") : secret "metallb-memberlist" not found Mar 18 09:13:01.577721 master-0 kubenswrapper[28766]: I0318 09:13:01.577641 28766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-czkll" event={"ID":"559d9b30-44e9-4cdd-8c46-7cab6e8f2285","Type":"ContainerStarted","Data":"a33d849df37f2e991bf0bef16124a71e08814a319139169f5d2be243c0cb0984"} Mar 18 09:13:01.625359 master-0 kubenswrapper[28766]: I0318 09:13:01.625282 28766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-7bb4cc7c98-vthq7"] Mar 18 09:13:01.696891 master-0 kubenswrapper[28766]: I0318 09:13:01.695430 28766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-bcc4b6f68-nqlq6"] Mar 18 09:13:02.439616 master-0 kubenswrapper[28766]: I0318 09:13:02.439560 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/8f4f0018-7ed3-48a7-bb99-0e6de8fc38fc-memberlist\") pod \"speaker-g7bjn\" (UID: \"8f4f0018-7ed3-48a7-bb99-0e6de8fc38fc\") " pod="metallb-system/speaker-g7bjn" Mar 18 09:13:02.445796 master-0 kubenswrapper[28766]: I0318 09:13:02.445738 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/8f4f0018-7ed3-48a7-bb99-0e6de8fc38fc-memberlist\") pod \"speaker-g7bjn\" (UID: \"8f4f0018-7ed3-48a7-bb99-0e6de8fc38fc\") " pod="metallb-system/speaker-g7bjn" Mar 18 09:13:02.576557 master-0 kubenswrapper[28766]: I0318 09:13:02.574835 28766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-9b8c8685d-882nf"] Mar 18 09:13:02.576557 master-0 kubenswrapper[28766]: I0318 09:13:02.576090 28766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-metrics-9b8c8685d-882nf" Mar 18 09:13:02.592127 master-0 kubenswrapper[28766]: I0318 09:13:02.590828 28766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-9b8c8685d-882nf"] Mar 18 09:13:02.595789 master-0 kubenswrapper[28766]: I0318 09:13:02.592445 28766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-7bb4cc7c98-vthq7" event={"ID":"79b20ae6-1660-40b8-9a44-2a3989042d82","Type":"ContainerStarted","Data":"456dba9205f6a374de500551ce20a22bf4c6a87a75256d9872caa1f2ff0174ec"} Mar 18 09:13:02.595789 master-0 kubenswrapper[28766]: I0318 09:13:02.592486 28766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-7bb4cc7c98-vthq7" event={"ID":"79b20ae6-1660-40b8-9a44-2a3989042d82","Type":"ContainerStarted","Data":"0e353d39c160d1a765c9c4054dac403098a6d878853793bc167e3aa98d999df9"} Mar 18 09:13:02.603035 master-0 kubenswrapper[28766]: I0318 09:13:02.600890 28766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-5f558f5558-6r6gh"] Mar 18 09:13:02.603035 master-0 kubenswrapper[28766]: I0318 09:13:02.602431 28766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-webhook-5f558f5558-6r6gh" Mar 18 09:13:02.605452 master-0 kubenswrapper[28766]: I0318 09:13:02.605409 28766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-nqlq6" event={"ID":"54a7f143-e51f-475d-9c2d-21f1c3979705","Type":"ContainerStarted","Data":"cf761cddcf58df85dd24c9d8b6f76c4e847a295e1f646865ce95e2369251cb84"} Mar 18 09:13:02.605567 master-0 kubenswrapper[28766]: I0318 09:13:02.605445 28766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Mar 18 09:13:02.628772 master-0 kubenswrapper[28766]: I0318 09:13:02.627108 28766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-5f558f5558-6r6gh"] Mar 18 09:13:02.653099 master-0 kubenswrapper[28766]: I0318 09:13:02.645999 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-phjg8\" (UniqueName: \"kubernetes.io/projected/2de37539-f3d7-47cd-a12e-4285ac38f0db-kube-api-access-phjg8\") pod \"nmstate-webhook-5f558f5558-6r6gh\" (UID: \"2de37539-f3d7-47cd-a12e-4285ac38f0db\") " pod="openshift-nmstate/nmstate-webhook-5f558f5558-6r6gh" Mar 18 09:13:02.653099 master-0 kubenswrapper[28766]: I0318 09:13:02.646090 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q2tx4\" (UniqueName: \"kubernetes.io/projected/0a7f9328-e0f5-4f65-83a4-0d5d76b9a1ae-kube-api-access-q2tx4\") pod \"nmstate-metrics-9b8c8685d-882nf\" (UID: \"0a7f9328-e0f5-4f65-83a4-0d5d76b9a1ae\") " pod="openshift-nmstate/nmstate-metrics-9b8c8685d-882nf" Mar 18 09:13:02.653099 master-0 kubenswrapper[28766]: I0318 09:13:02.646138 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/2de37539-f3d7-47cd-a12e-4285ac38f0db-tls-key-pair\") pod 
\"nmstate-webhook-5f558f5558-6r6gh\" (UID: \"2de37539-f3d7-47cd-a12e-4285ac38f0db\") " pod="openshift-nmstate/nmstate-webhook-5f558f5558-6r6gh" Mar 18 09:13:02.669789 master-0 kubenswrapper[28766]: I0318 09:13:02.669722 28766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-sngqk"] Mar 18 09:13:02.675460 master-0 kubenswrapper[28766]: I0318 09:13:02.671546 28766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-sngqk" Mar 18 09:13:02.740319 master-0 kubenswrapper[28766]: I0318 09:13:02.739971 28766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-g7bjn" Mar 18 09:13:02.747377 master-0 kubenswrapper[28766]: I0318 09:13:02.747334 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8wgvz\" (UniqueName: \"kubernetes.io/projected/f17c8a3e-2a67-4ca4-80d6-ae4177b03359-kube-api-access-8wgvz\") pod \"nmstate-handler-sngqk\" (UID: \"f17c8a3e-2a67-4ca4-80d6-ae4177b03359\") " pod="openshift-nmstate/nmstate-handler-sngqk" Mar 18 09:13:02.747377 master-0 kubenswrapper[28766]: I0318 09:13:02.747387 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/f17c8a3e-2a67-4ca4-80d6-ae4177b03359-nmstate-lock\") pod \"nmstate-handler-sngqk\" (UID: \"f17c8a3e-2a67-4ca4-80d6-ae4177b03359\") " pod="openshift-nmstate/nmstate-handler-sngqk" Mar 18 09:13:02.747560 master-0 kubenswrapper[28766]: I0318 09:13:02.747419 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-phjg8\" (UniqueName: \"kubernetes.io/projected/2de37539-f3d7-47cd-a12e-4285ac38f0db-kube-api-access-phjg8\") pod \"nmstate-webhook-5f558f5558-6r6gh\" (UID: \"2de37539-f3d7-47cd-a12e-4285ac38f0db\") " pod="openshift-nmstate/nmstate-webhook-5f558f5558-6r6gh" Mar 18 
09:13:02.747560 master-0 kubenswrapper[28766]: I0318 09:13:02.747441 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/f17c8a3e-2a67-4ca4-80d6-ae4177b03359-ovs-socket\") pod \"nmstate-handler-sngqk\" (UID: \"f17c8a3e-2a67-4ca4-80d6-ae4177b03359\") " pod="openshift-nmstate/nmstate-handler-sngqk" Mar 18 09:13:02.747560 master-0 kubenswrapper[28766]: I0318 09:13:02.747467 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q2tx4\" (UniqueName: \"kubernetes.io/projected/0a7f9328-e0f5-4f65-83a4-0d5d76b9a1ae-kube-api-access-q2tx4\") pod \"nmstate-metrics-9b8c8685d-882nf\" (UID: \"0a7f9328-e0f5-4f65-83a4-0d5d76b9a1ae\") " pod="openshift-nmstate/nmstate-metrics-9b8c8685d-882nf" Mar 18 09:13:02.747560 master-0 kubenswrapper[28766]: I0318 09:13:02.747511 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/f17c8a3e-2a67-4ca4-80d6-ae4177b03359-dbus-socket\") pod \"nmstate-handler-sngqk\" (UID: \"f17c8a3e-2a67-4ca4-80d6-ae4177b03359\") " pod="openshift-nmstate/nmstate-handler-sngqk" Mar 18 09:13:02.747560 master-0 kubenswrapper[28766]: I0318 09:13:02.747537 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/2de37539-f3d7-47cd-a12e-4285ac38f0db-tls-key-pair\") pod \"nmstate-webhook-5f558f5558-6r6gh\" (UID: \"2de37539-f3d7-47cd-a12e-4285ac38f0db\") " pod="openshift-nmstate/nmstate-webhook-5f558f5558-6r6gh" Mar 18 09:13:02.747707 master-0 kubenswrapper[28766]: E0318 09:13:02.747690 28766 secret.go:189] Couldn't get secret openshift-nmstate/openshift-nmstate-webhook: secret "openshift-nmstate-webhook" not found Mar 18 09:13:02.747818 master-0 kubenswrapper[28766]: E0318 09:13:02.747750 28766 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/2de37539-f3d7-47cd-a12e-4285ac38f0db-tls-key-pair podName:2de37539-f3d7-47cd-a12e-4285ac38f0db nodeName:}" failed. No retries permitted until 2026-03-18 09:13:03.247727598 +0000 UTC m=+536.261986264 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tls-key-pair" (UniqueName: "kubernetes.io/secret/2de37539-f3d7-47cd-a12e-4285ac38f0db-tls-key-pair") pod "nmstate-webhook-5f558f5558-6r6gh" (UID: "2de37539-f3d7-47cd-a12e-4285ac38f0db") : secret "openshift-nmstate-webhook" not found Mar 18 09:13:02.768718 master-0 kubenswrapper[28766]: I0318 09:13:02.768595 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q2tx4\" (UniqueName: \"kubernetes.io/projected/0a7f9328-e0f5-4f65-83a4-0d5d76b9a1ae-kube-api-access-q2tx4\") pod \"nmstate-metrics-9b8c8685d-882nf\" (UID: \"0a7f9328-e0f5-4f65-83a4-0d5d76b9a1ae\") " pod="openshift-nmstate/nmstate-metrics-9b8c8685d-882nf" Mar 18 09:13:02.773795 master-0 kubenswrapper[28766]: I0318 09:13:02.773512 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-phjg8\" (UniqueName: \"kubernetes.io/projected/2de37539-f3d7-47cd-a12e-4285ac38f0db-kube-api-access-phjg8\") pod \"nmstate-webhook-5f558f5558-6r6gh\" (UID: \"2de37539-f3d7-47cd-a12e-4285ac38f0db\") " pod="openshift-nmstate/nmstate-webhook-5f558f5558-6r6gh" Mar 18 09:13:02.861598 master-0 kubenswrapper[28766]: I0318 09:13:02.848987 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8wgvz\" (UniqueName: \"kubernetes.io/projected/f17c8a3e-2a67-4ca4-80d6-ae4177b03359-kube-api-access-8wgvz\") pod \"nmstate-handler-sngqk\" (UID: \"f17c8a3e-2a67-4ca4-80d6-ae4177b03359\") " pod="openshift-nmstate/nmstate-handler-sngqk" Mar 18 09:13:02.861598 master-0 kubenswrapper[28766]: I0318 09:13:02.849049 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: 
\"kubernetes.io/host-path/f17c8a3e-2a67-4ca4-80d6-ae4177b03359-nmstate-lock\") pod \"nmstate-handler-sngqk\" (UID: \"f17c8a3e-2a67-4ca4-80d6-ae4177b03359\") " pod="openshift-nmstate/nmstate-handler-sngqk" Mar 18 09:13:02.861598 master-0 kubenswrapper[28766]: I0318 09:13:02.849078 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/f17c8a3e-2a67-4ca4-80d6-ae4177b03359-ovs-socket\") pod \"nmstate-handler-sngqk\" (UID: \"f17c8a3e-2a67-4ca4-80d6-ae4177b03359\") " pod="openshift-nmstate/nmstate-handler-sngqk" Mar 18 09:13:02.861598 master-0 kubenswrapper[28766]: I0318 09:13:02.849132 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/f17c8a3e-2a67-4ca4-80d6-ae4177b03359-dbus-socket\") pod \"nmstate-handler-sngqk\" (UID: \"f17c8a3e-2a67-4ca4-80d6-ae4177b03359\") " pod="openshift-nmstate/nmstate-handler-sngqk" Mar 18 09:13:02.861598 master-0 kubenswrapper[28766]: I0318 09:13:02.849246 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/f17c8a3e-2a67-4ca4-80d6-ae4177b03359-dbus-socket\") pod \"nmstate-handler-sngqk\" (UID: \"f17c8a3e-2a67-4ca4-80d6-ae4177b03359\") " pod="openshift-nmstate/nmstate-handler-sngqk" Mar 18 09:13:02.861598 master-0 kubenswrapper[28766]: I0318 09:13:02.849598 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/f17c8a3e-2a67-4ca4-80d6-ae4177b03359-nmstate-lock\") pod \"nmstate-handler-sngqk\" (UID: \"f17c8a3e-2a67-4ca4-80d6-ae4177b03359\") " pod="openshift-nmstate/nmstate-handler-sngqk" Mar 18 09:13:02.861598 master-0 kubenswrapper[28766]: I0318 09:13:02.849622 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: 
\"kubernetes.io/host-path/f17c8a3e-2a67-4ca4-80d6-ae4177b03359-ovs-socket\") pod \"nmstate-handler-sngqk\" (UID: \"f17c8a3e-2a67-4ca4-80d6-ae4177b03359\") " pod="openshift-nmstate/nmstate-handler-sngqk" Mar 18 09:13:02.873914 master-0 kubenswrapper[28766]: I0318 09:13:02.873695 28766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-86f58fcf4-tv685"] Mar 18 09:13:02.877990 master-0 kubenswrapper[28766]: I0318 09:13:02.877209 28766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-tv685" Mar 18 09:13:02.879674 master-0 kubenswrapper[28766]: I0318 09:13:02.879636 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Mar 18 09:13:02.893953 master-0 kubenswrapper[28766]: I0318 09:13:02.889061 28766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Mar 18 09:13:02.905837 master-0 kubenswrapper[28766]: I0318 09:13:02.905708 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8wgvz\" (UniqueName: \"kubernetes.io/projected/f17c8a3e-2a67-4ca4-80d6-ae4177b03359-kube-api-access-8wgvz\") pod \"nmstate-handler-sngqk\" (UID: \"f17c8a3e-2a67-4ca4-80d6-ae4177b03359\") " pod="openshift-nmstate/nmstate-handler-sngqk" Mar 18 09:13:02.914282 master-0 kubenswrapper[28766]: I0318 09:13:02.914215 28766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-metrics-9b8c8685d-882nf" Mar 18 09:13:02.916048 master-0 kubenswrapper[28766]: I0318 09:13:02.915759 28766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-86f58fcf4-tv685"] Mar 18 09:13:02.970265 master-0 kubenswrapper[28766]: I0318 09:13:02.963918 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/d8cbb83c-f7ff-44b0-afe0-dca20fab3ebf-plugin-serving-cert\") pod \"nmstate-console-plugin-86f58fcf4-tv685\" (UID: \"d8cbb83c-f7ff-44b0-afe0-dca20fab3ebf\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-tv685" Mar 18 09:13:02.970265 master-0 kubenswrapper[28766]: I0318 09:13:02.964113 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gn5jb\" (UniqueName: \"kubernetes.io/projected/d8cbb83c-f7ff-44b0-afe0-dca20fab3ebf-kube-api-access-gn5jb\") pod \"nmstate-console-plugin-86f58fcf4-tv685\" (UID: \"d8cbb83c-f7ff-44b0-afe0-dca20fab3ebf\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-tv685" Mar 18 09:13:02.970265 master-0 kubenswrapper[28766]: I0318 09:13:02.964149 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/d8cbb83c-f7ff-44b0-afe0-dca20fab3ebf-nginx-conf\") pod \"nmstate-console-plugin-86f58fcf4-tv685\" (UID: \"d8cbb83c-f7ff-44b0-afe0-dca20fab3ebf\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-tv685" Mar 18 09:13:03.005893 master-0 kubenswrapper[28766]: I0318 09:13:03.004356 28766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-sngqk" Mar 18 09:13:03.074885 master-0 kubenswrapper[28766]: I0318 09:13:03.070197 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/d8cbb83c-f7ff-44b0-afe0-dca20fab3ebf-plugin-serving-cert\") pod \"nmstate-console-plugin-86f58fcf4-tv685\" (UID: \"d8cbb83c-f7ff-44b0-afe0-dca20fab3ebf\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-tv685" Mar 18 09:13:03.074885 master-0 kubenswrapper[28766]: I0318 09:13:03.070362 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gn5jb\" (UniqueName: \"kubernetes.io/projected/d8cbb83c-f7ff-44b0-afe0-dca20fab3ebf-kube-api-access-gn5jb\") pod \"nmstate-console-plugin-86f58fcf4-tv685\" (UID: \"d8cbb83c-f7ff-44b0-afe0-dca20fab3ebf\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-tv685" Mar 18 09:13:03.074885 master-0 kubenswrapper[28766]: I0318 09:13:03.070386 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/d8cbb83c-f7ff-44b0-afe0-dca20fab3ebf-nginx-conf\") pod \"nmstate-console-plugin-86f58fcf4-tv685\" (UID: \"d8cbb83c-f7ff-44b0-afe0-dca20fab3ebf\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-tv685" Mar 18 09:13:03.074885 master-0 kubenswrapper[28766]: E0318 09:13:03.071023 28766 secret.go:189] Couldn't get secret openshift-nmstate/plugin-serving-cert: secret "plugin-serving-cert" not found Mar 18 09:13:03.074885 master-0 kubenswrapper[28766]: E0318 09:13:03.071113 28766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d8cbb83c-f7ff-44b0-afe0-dca20fab3ebf-plugin-serving-cert podName:d8cbb83c-f7ff-44b0-afe0-dca20fab3ebf nodeName:}" failed. No retries permitted until 2026-03-18 09:13:03.571092257 +0000 UTC m=+536.585350923 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "plugin-serving-cert" (UniqueName: "kubernetes.io/secret/d8cbb83c-f7ff-44b0-afe0-dca20fab3ebf-plugin-serving-cert") pod "nmstate-console-plugin-86f58fcf4-tv685" (UID: "d8cbb83c-f7ff-44b0-afe0-dca20fab3ebf") : secret "plugin-serving-cert" not found Mar 18 09:13:03.074885 master-0 kubenswrapper[28766]: I0318 09:13:03.071268 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/d8cbb83c-f7ff-44b0-afe0-dca20fab3ebf-nginx-conf\") pod \"nmstate-console-plugin-86f58fcf4-tv685\" (UID: \"d8cbb83c-f7ff-44b0-afe0-dca20fab3ebf\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-tv685" Mar 18 09:13:03.111096 master-0 kubenswrapper[28766]: I0318 09:13:03.109156 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gn5jb\" (UniqueName: \"kubernetes.io/projected/d8cbb83c-f7ff-44b0-afe0-dca20fab3ebf-kube-api-access-gn5jb\") pod \"nmstate-console-plugin-86f58fcf4-tv685\" (UID: \"d8cbb83c-f7ff-44b0-afe0-dca20fab3ebf\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-tv685" Mar 18 09:13:03.275915 master-0 kubenswrapper[28766]: I0318 09:13:03.274012 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/2de37539-f3d7-47cd-a12e-4285ac38f0db-tls-key-pair\") pod \"nmstate-webhook-5f558f5558-6r6gh\" (UID: \"2de37539-f3d7-47cd-a12e-4285ac38f0db\") " pod="openshift-nmstate/nmstate-webhook-5f558f5558-6r6gh" Mar 18 09:13:03.291894 master-0 kubenswrapper[28766]: I0318 09:13:03.291780 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/2de37539-f3d7-47cd-a12e-4285ac38f0db-tls-key-pair\") pod \"nmstate-webhook-5f558f5558-6r6gh\" (UID: \"2de37539-f3d7-47cd-a12e-4285ac38f0db\") " pod="openshift-nmstate/nmstate-webhook-5f558f5558-6r6gh" Mar 18 09:13:03.339239 master-0 
kubenswrapper[28766]: I0318 09:13:03.336517 28766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-bd9fbc6c9-5fb2s"] Mar 18 09:13:03.351878 master-0 kubenswrapper[28766]: I0318 09:13:03.349813 28766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-bd9fbc6c9-5fb2s" Mar 18 09:13:03.387900 master-0 kubenswrapper[28766]: I0318 09:13:03.385834 28766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-bd9fbc6c9-5fb2s"] Mar 18 09:13:03.387900 master-0 kubenswrapper[28766]: I0318 09:13:03.386845 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/9c8fd6d0-1769-42fe-9d88-26640a4a3c2f-console-config\") pod \"console-bd9fbc6c9-5fb2s\" (UID: \"9c8fd6d0-1769-42fe-9d88-26640a4a3c2f\") " pod="openshift-console/console-bd9fbc6c9-5fb2s" Mar 18 09:13:03.387900 master-0 kubenswrapper[28766]: I0318 09:13:03.386909 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/9c8fd6d0-1769-42fe-9d88-26640a4a3c2f-console-serving-cert\") pod \"console-bd9fbc6c9-5fb2s\" (UID: \"9c8fd6d0-1769-42fe-9d88-26640a4a3c2f\") " pod="openshift-console/console-bd9fbc6c9-5fb2s" Mar 18 09:13:03.387900 master-0 kubenswrapper[28766]: I0318 09:13:03.386961 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/9c8fd6d0-1769-42fe-9d88-26640a4a3c2f-service-ca\") pod \"console-bd9fbc6c9-5fb2s\" (UID: \"9c8fd6d0-1769-42fe-9d88-26640a4a3c2f\") " pod="openshift-console/console-bd9fbc6c9-5fb2s" Mar 18 09:13:03.387900 master-0 kubenswrapper[28766]: I0318 09:13:03.386987 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" 
(UniqueName: \"kubernetes.io/secret/9c8fd6d0-1769-42fe-9d88-26640a4a3c2f-console-oauth-config\") pod \"console-bd9fbc6c9-5fb2s\" (UID: \"9c8fd6d0-1769-42fe-9d88-26640a4a3c2f\") " pod="openshift-console/console-bd9fbc6c9-5fb2s" Mar 18 09:13:03.387900 master-0 kubenswrapper[28766]: I0318 09:13:03.387016 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/9c8fd6d0-1769-42fe-9d88-26640a4a3c2f-oauth-serving-cert\") pod \"console-bd9fbc6c9-5fb2s\" (UID: \"9c8fd6d0-1769-42fe-9d88-26640a4a3c2f\") " pod="openshift-console/console-bd9fbc6c9-5fb2s" Mar 18 09:13:03.387900 master-0 kubenswrapper[28766]: I0318 09:13:03.387155 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4qsn7\" (UniqueName: \"kubernetes.io/projected/9c8fd6d0-1769-42fe-9d88-26640a4a3c2f-kube-api-access-4qsn7\") pod \"console-bd9fbc6c9-5fb2s\" (UID: \"9c8fd6d0-1769-42fe-9d88-26640a4a3c2f\") " pod="openshift-console/console-bd9fbc6c9-5fb2s" Mar 18 09:13:03.387900 master-0 kubenswrapper[28766]: I0318 09:13:03.387189 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9c8fd6d0-1769-42fe-9d88-26640a4a3c2f-trusted-ca-bundle\") pod \"console-bd9fbc6c9-5fb2s\" (UID: \"9c8fd6d0-1769-42fe-9d88-26640a4a3c2f\") " pod="openshift-console/console-bd9fbc6c9-5fb2s" Mar 18 09:13:03.494099 master-0 kubenswrapper[28766]: I0318 09:13:03.491946 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4qsn7\" (UniqueName: \"kubernetes.io/projected/9c8fd6d0-1769-42fe-9d88-26640a4a3c2f-kube-api-access-4qsn7\") pod \"console-bd9fbc6c9-5fb2s\" (UID: \"9c8fd6d0-1769-42fe-9d88-26640a4a3c2f\") " pod="openshift-console/console-bd9fbc6c9-5fb2s" Mar 18 09:13:03.494099 master-0 kubenswrapper[28766]: I0318 
09:13:03.491999 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9c8fd6d0-1769-42fe-9d88-26640a4a3c2f-trusted-ca-bundle\") pod \"console-bd9fbc6c9-5fb2s\" (UID: \"9c8fd6d0-1769-42fe-9d88-26640a4a3c2f\") " pod="openshift-console/console-bd9fbc6c9-5fb2s" Mar 18 09:13:03.494099 master-0 kubenswrapper[28766]: I0318 09:13:03.492040 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/9c8fd6d0-1769-42fe-9d88-26640a4a3c2f-console-config\") pod \"console-bd9fbc6c9-5fb2s\" (UID: \"9c8fd6d0-1769-42fe-9d88-26640a4a3c2f\") " pod="openshift-console/console-bd9fbc6c9-5fb2s" Mar 18 09:13:03.494099 master-0 kubenswrapper[28766]: I0318 09:13:03.492059 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/9c8fd6d0-1769-42fe-9d88-26640a4a3c2f-console-serving-cert\") pod \"console-bd9fbc6c9-5fb2s\" (UID: \"9c8fd6d0-1769-42fe-9d88-26640a4a3c2f\") " pod="openshift-console/console-bd9fbc6c9-5fb2s" Mar 18 09:13:03.494099 master-0 kubenswrapper[28766]: I0318 09:13:03.492089 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/9c8fd6d0-1769-42fe-9d88-26640a4a3c2f-service-ca\") pod \"console-bd9fbc6c9-5fb2s\" (UID: \"9c8fd6d0-1769-42fe-9d88-26640a4a3c2f\") " pod="openshift-console/console-bd9fbc6c9-5fb2s" Mar 18 09:13:03.494099 master-0 kubenswrapper[28766]: I0318 09:13:03.492108 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/9c8fd6d0-1769-42fe-9d88-26640a4a3c2f-console-oauth-config\") pod \"console-bd9fbc6c9-5fb2s\" (UID: \"9c8fd6d0-1769-42fe-9d88-26640a4a3c2f\") " pod="openshift-console/console-bd9fbc6c9-5fb2s" Mar 18 09:13:03.494099 master-0 
kubenswrapper[28766]: I0318 09:13:03.492132 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/9c8fd6d0-1769-42fe-9d88-26640a4a3c2f-oauth-serving-cert\") pod \"console-bd9fbc6c9-5fb2s\" (UID: \"9c8fd6d0-1769-42fe-9d88-26640a4a3c2f\") " pod="openshift-console/console-bd9fbc6c9-5fb2s" Mar 18 09:13:03.494099 master-0 kubenswrapper[28766]: I0318 09:13:03.492934 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/9c8fd6d0-1769-42fe-9d88-26640a4a3c2f-oauth-serving-cert\") pod \"console-bd9fbc6c9-5fb2s\" (UID: \"9c8fd6d0-1769-42fe-9d88-26640a4a3c2f\") " pod="openshift-console/console-bd9fbc6c9-5fb2s" Mar 18 09:13:03.499973 master-0 kubenswrapper[28766]: I0318 09:13:03.495269 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9c8fd6d0-1769-42fe-9d88-26640a4a3c2f-trusted-ca-bundle\") pod \"console-bd9fbc6c9-5fb2s\" (UID: \"9c8fd6d0-1769-42fe-9d88-26640a4a3c2f\") " pod="openshift-console/console-bd9fbc6c9-5fb2s" Mar 18 09:13:03.499973 master-0 kubenswrapper[28766]: I0318 09:13:03.495813 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/9c8fd6d0-1769-42fe-9d88-26640a4a3c2f-console-config\") pod \"console-bd9fbc6c9-5fb2s\" (UID: \"9c8fd6d0-1769-42fe-9d88-26640a4a3c2f\") " pod="openshift-console/console-bd9fbc6c9-5fb2s" Mar 18 09:13:03.499973 master-0 kubenswrapper[28766]: I0318 09:13:03.498131 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/9c8fd6d0-1769-42fe-9d88-26640a4a3c2f-service-ca\") pod \"console-bd9fbc6c9-5fb2s\" (UID: \"9c8fd6d0-1769-42fe-9d88-26640a4a3c2f\") " pod="openshift-console/console-bd9fbc6c9-5fb2s" Mar 18 09:13:03.521479 master-0 
kubenswrapper[28766]: I0318 09:13:03.521442 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/9c8fd6d0-1769-42fe-9d88-26640a4a3c2f-console-oauth-config\") pod \"console-bd9fbc6c9-5fb2s\" (UID: \"9c8fd6d0-1769-42fe-9d88-26640a4a3c2f\") " pod="openshift-console/console-bd9fbc6c9-5fb2s" Mar 18 09:13:03.527868 master-0 kubenswrapper[28766]: I0318 09:13:03.527291 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/9c8fd6d0-1769-42fe-9d88-26640a4a3c2f-console-serving-cert\") pod \"console-bd9fbc6c9-5fb2s\" (UID: \"9c8fd6d0-1769-42fe-9d88-26640a4a3c2f\") " pod="openshift-console/console-bd9fbc6c9-5fb2s" Mar 18 09:13:03.619408 master-0 kubenswrapper[28766]: I0318 09:13:03.616198 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4qsn7\" (UniqueName: \"kubernetes.io/projected/9c8fd6d0-1769-42fe-9d88-26640a4a3c2f-kube-api-access-4qsn7\") pod \"console-bd9fbc6c9-5fb2s\" (UID: \"9c8fd6d0-1769-42fe-9d88-26640a4a3c2f\") " pod="openshift-console/console-bd9fbc6c9-5fb2s" Mar 18 09:13:03.636192 master-0 kubenswrapper[28766]: I0318 09:13:03.626114 28766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-webhook-5f558f5558-6r6gh" Mar 18 09:13:03.638663 master-0 kubenswrapper[28766]: I0318 09:13:03.637880 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/d8cbb83c-f7ff-44b0-afe0-dca20fab3ebf-plugin-serving-cert\") pod \"nmstate-console-plugin-86f58fcf4-tv685\" (UID: \"d8cbb83c-f7ff-44b0-afe0-dca20fab3ebf\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-tv685" Mar 18 09:13:03.653430 master-0 kubenswrapper[28766]: I0318 09:13:03.652434 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/d8cbb83c-f7ff-44b0-afe0-dca20fab3ebf-plugin-serving-cert\") pod \"nmstate-console-plugin-86f58fcf4-tv685\" (UID: \"d8cbb83c-f7ff-44b0-afe0-dca20fab3ebf\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-tv685" Mar 18 09:13:03.705912 master-0 kubenswrapper[28766]: I0318 09:13:03.693474 28766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-bd9fbc6c9-5fb2s" Mar 18 09:13:03.752202 master-0 kubenswrapper[28766]: I0318 09:13:03.749675 28766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-g7bjn" event={"ID":"8f4f0018-7ed3-48a7-bb99-0e6de8fc38fc","Type":"ContainerStarted","Data":"f71843d0eb096f048d1bd76220f43daefdd13d8f8f7f068f43096fb7cad0a0ea"} Mar 18 09:13:03.752202 master-0 kubenswrapper[28766]: I0318 09:13:03.749740 28766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-g7bjn" event={"ID":"8f4f0018-7ed3-48a7-bb99-0e6de8fc38fc","Type":"ContainerStarted","Data":"cace948db577f11357e720d58c6f464ede555261fd04671dde2c28d6809e3754"} Mar 18 09:13:03.761237 master-0 kubenswrapper[28766]: I0318 09:13:03.760243 28766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-sngqk" event={"ID":"f17c8a3e-2a67-4ca4-80d6-ae4177b03359","Type":"ContainerStarted","Data":"92deea6629b5718f5d530ce77caf2fa6a19498a574da655dd4c92f8193c24920"} Mar 18 09:13:03.802807 master-0 kubenswrapper[28766]: I0318 09:13:03.802708 28766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-9b8c8685d-882nf"] Mar 18 09:13:03.875619 master-0 kubenswrapper[28766]: I0318 09:13:03.875528 28766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-tv685" Mar 18 09:13:04.353049 master-0 kubenswrapper[28766]: I0318 09:13:04.352947 28766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-bd9fbc6c9-5fb2s"] Mar 18 09:13:04.482641 master-0 kubenswrapper[28766]: W0318 09:13:04.482436 28766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2de37539_f3d7_47cd_a12e_4285ac38f0db.slice/crio-343e37cce2f4dcaea811279814ee172c0f3e92d7e58f79fcade4928c91ab79ea WatchSource:0}: Error finding container 343e37cce2f4dcaea811279814ee172c0f3e92d7e58f79fcade4928c91ab79ea: Status 404 returned error can't find the container with id 343e37cce2f4dcaea811279814ee172c0f3e92d7e58f79fcade4928c91ab79ea Mar 18 09:13:04.495302 master-0 kubenswrapper[28766]: I0318 09:13:04.493590 28766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-5f558f5558-6r6gh"] Mar 18 09:13:04.572534 master-0 kubenswrapper[28766]: W0318 09:13:04.572451 28766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd8cbb83c_f7ff_44b0_afe0_dca20fab3ebf.slice/crio-572cf1868c4cb31b769ebc5670442c12cd3e30471b4941de3673acd2a8554483 WatchSource:0}: Error finding container 572cf1868c4cb31b769ebc5670442c12cd3e30471b4941de3673acd2a8554483: Status 404 returned error can't find the container with id 572cf1868c4cb31b769ebc5670442c12cd3e30471b4941de3673acd2a8554483 Mar 18 09:13:04.577436 master-0 kubenswrapper[28766]: I0318 09:13:04.577381 28766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-86f58fcf4-tv685"] Mar 18 09:13:04.771511 master-0 kubenswrapper[28766]: I0318 09:13:04.771429 28766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-bd9fbc6c9-5fb2s" 
event={"ID":"9c8fd6d0-1769-42fe-9d88-26640a4a3c2f","Type":"ContainerStarted","Data":"fe9162bf3dc0c2a9eacdfdc1e37424649d5d1cef8ad2c0762079616de685ab6b"} Mar 18 09:13:04.771511 master-0 kubenswrapper[28766]: I0318 09:13:04.771515 28766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-bd9fbc6c9-5fb2s" event={"ID":"9c8fd6d0-1769-42fe-9d88-26640a4a3c2f","Type":"ContainerStarted","Data":"917443ebe76700611ead019e6337c345e08215459c894b91358be7bc479e7bf2"} Mar 18 09:13:04.795021 master-0 kubenswrapper[28766]: I0318 09:13:04.774714 28766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-7bb4cc7c98-vthq7" event={"ID":"79b20ae6-1660-40b8-9a44-2a3989042d82","Type":"ContainerStarted","Data":"3812488a342dbaa3f41f28f26081c7cbe7eb92989c15a33cfa528798742079c1"} Mar 18 09:13:04.795021 master-0 kubenswrapper[28766]: I0318 09:13:04.775098 28766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-7bb4cc7c98-vthq7" Mar 18 09:13:04.795021 master-0 kubenswrapper[28766]: I0318 09:13:04.788242 28766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-g7bjn" event={"ID":"8f4f0018-7ed3-48a7-bb99-0e6de8fc38fc","Type":"ContainerStarted","Data":"d2fd8716cf4d75addcf22b80f2bb0971d4ca97ffe7d7bcfe8c4c5a03b894ff7b"} Mar 18 09:13:04.795021 master-0 kubenswrapper[28766]: I0318 09:13:04.789153 28766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-g7bjn" Mar 18 09:13:04.795021 master-0 kubenswrapper[28766]: I0318 09:13:04.793626 28766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-5f558f5558-6r6gh" event={"ID":"2de37539-f3d7-47cd-a12e-4285ac38f0db","Type":"ContainerStarted","Data":"343e37cce2f4dcaea811279814ee172c0f3e92d7e58f79fcade4928c91ab79ea"} Mar 18 09:13:04.795951 master-0 kubenswrapper[28766]: I0318 09:13:04.795707 28766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-nmstate/nmstate-metrics-9b8c8685d-882nf" event={"ID":"0a7f9328-e0f5-4f65-83a4-0d5d76b9a1ae","Type":"ContainerStarted","Data":"0fedb5bd9770c02fcdb97ad99d77568760647b63bfb023a13a08d4c6dca3bc2b"} Mar 18 09:13:04.796994 master-0 kubenswrapper[28766]: I0318 09:13:04.796942 28766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-tv685" event={"ID":"d8cbb83c-f7ff-44b0-afe0-dca20fab3ebf","Type":"ContainerStarted","Data":"572cf1868c4cb31b769ebc5670442c12cd3e30471b4941de3673acd2a8554483"} Mar 18 09:13:04.818158 master-0 kubenswrapper[28766]: I0318 09:13:04.818017 28766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-bd9fbc6c9-5fb2s" podStartSLOduration=1.817995347 podStartE2EDuration="1.817995347s" podCreationTimestamp="2026-03-18 09:13:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 09:13:04.817583456 +0000 UTC m=+537.831842122" watchObservedRunningTime="2026-03-18 09:13:04.817995347 +0000 UTC m=+537.832254013" Mar 18 09:13:04.855922 master-0 kubenswrapper[28766]: I0318 09:13:04.854380 28766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-g7bjn" podStartSLOduration=4.85436367 podStartE2EDuration="4.85436367s" podCreationTimestamp="2026-03-18 09:13:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 09:13:04.839048533 +0000 UTC m=+537.853307199" watchObservedRunningTime="2026-03-18 09:13:04.85436367 +0000 UTC m=+537.868622336" Mar 18 09:13:04.880333 master-0 kubenswrapper[28766]: I0318 09:13:04.877725 28766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-7bb4cc7c98-vthq7" podStartSLOduration=3.400189774 podStartE2EDuration="4.877702535s" 
podCreationTimestamp="2026-03-18 09:13:00 +0000 UTC" firstStartedPulling="2026-03-18 09:13:01.846149659 +0000 UTC m=+534.860408325" lastFinishedPulling="2026-03-18 09:13:03.32366242 +0000 UTC m=+536.337921086" observedRunningTime="2026-03-18 09:13:04.877582902 +0000 UTC m=+537.891841578" watchObservedRunningTime="2026-03-18 09:13:04.877702535 +0000 UTC m=+537.891961201" Mar 18 09:13:10.860017 master-0 kubenswrapper[28766]: I0318 09:13:10.859931 28766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-tv685" event={"ID":"d8cbb83c-f7ff-44b0-afe0-dca20fab3ebf","Type":"ContainerStarted","Data":"9f2e612a64768879bb3776debe6ecfbaf51397d32fb6ec0bd5cdd83b98baa9d1"} Mar 18 09:13:10.862591 master-0 kubenswrapper[28766]: I0318 09:13:10.862253 28766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-nqlq6" event={"ID":"54a7f143-e51f-475d-9c2d-21f1c3979705","Type":"ContainerStarted","Data":"91e564629d053953a4bba11df4c884f42e3a6e00d8db62d0c7bba773558b88e8"} Mar 18 09:13:10.862591 master-0 kubenswrapper[28766]: I0318 09:13:10.862476 28766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-nqlq6" Mar 18 09:13:10.864687 master-0 kubenswrapper[28766]: I0318 09:13:10.864629 28766 generic.go:334] "Generic (PLEG): container finished" podID="559d9b30-44e9-4cdd-8c46-7cab6e8f2285" containerID="b790cfab1ad07b533e7b5558a0c5d6b2ba236dea60582268d423d4c2ee4de8c6" exitCode=0 Mar 18 09:13:10.864809 master-0 kubenswrapper[28766]: I0318 09:13:10.864694 28766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-czkll" event={"ID":"559d9b30-44e9-4cdd-8c46-7cab6e8f2285","Type":"ContainerDied","Data":"b790cfab1ad07b533e7b5558a0c5d6b2ba236dea60582268d423d4c2ee4de8c6"} Mar 18 09:13:10.870395 master-0 kubenswrapper[28766]: I0318 09:13:10.870161 28766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-nmstate/nmstate-webhook-5f558f5558-6r6gh" event={"ID":"2de37539-f3d7-47cd-a12e-4285ac38f0db","Type":"ContainerStarted","Data":"8b17eb75a5721476b7d14dde281f38b4e6480cba2f8db24372ece465c89189ae"} Mar 18 09:13:10.871353 master-0 kubenswrapper[28766]: I0318 09:13:10.871305 28766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-5f558f5558-6r6gh" Mar 18 09:13:10.873202 master-0 kubenswrapper[28766]: I0318 09:13:10.873063 28766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-sngqk" event={"ID":"f17c8a3e-2a67-4ca4-80d6-ae4177b03359","Type":"ContainerStarted","Data":"d4d554afbfe2211a7f899da1dab3eb05cc3bd946894e6e76362f7bdd70f04ec6"} Mar 18 09:13:10.874040 master-0 kubenswrapper[28766]: I0318 09:13:10.873991 28766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-sngqk" Mar 18 09:13:10.880219 master-0 kubenswrapper[28766]: I0318 09:13:10.880158 28766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-9b8c8685d-882nf" event={"ID":"0a7f9328-e0f5-4f65-83a4-0d5d76b9a1ae","Type":"ContainerStarted","Data":"10bc66d100d6984f6fbc7e913a4a6a4bce71597e1ae12cd7879498b4fd7d08ea"} Mar 18 09:13:10.880348 master-0 kubenswrapper[28766]: I0318 09:13:10.880256 28766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-9b8c8685d-882nf" event={"ID":"0a7f9328-e0f5-4f65-83a4-0d5d76b9a1ae","Type":"ContainerStarted","Data":"550e23596e7ef7b7c0df04dcc6104a5e4eef81acbfbfba09f2e88112e3862d49"} Mar 18 09:13:10.890752 master-0 kubenswrapper[28766]: I0318 09:13:10.890460 28766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-tv685" podStartSLOduration=3.8661779689999998 podStartE2EDuration="8.890400089s" podCreationTimestamp="2026-03-18 09:13:02 +0000 UTC" firstStartedPulling="2026-03-18 
09:13:04.574955091 +0000 UTC m=+537.589213757" lastFinishedPulling="2026-03-18 09:13:09.599177211 +0000 UTC m=+542.613435877" observedRunningTime="2026-03-18 09:13:10.888373896 +0000 UTC m=+543.902632602" watchObservedRunningTime="2026-03-18 09:13:10.890400089 +0000 UTC m=+543.904658765" Mar 18 09:13:10.980803 master-0 kubenswrapper[28766]: I0318 09:13:10.980661 28766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-nqlq6" podStartSLOduration=3.053885971 podStartE2EDuration="10.980632769s" podCreationTimestamp="2026-03-18 09:13:00 +0000 UTC" firstStartedPulling="2026-03-18 09:13:01.716596439 +0000 UTC m=+534.730855105" lastFinishedPulling="2026-03-18 09:13:09.643343237 +0000 UTC m=+542.657601903" observedRunningTime="2026-03-18 09:13:10.972203471 +0000 UTC m=+543.986462167" watchObservedRunningTime="2026-03-18 09:13:10.980632769 +0000 UTC m=+543.994891435" Mar 18 09:13:11.022675 master-0 kubenswrapper[28766]: I0318 09:13:11.021492 28766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-sngqk" podStartSLOduration=2.536585126 podStartE2EDuration="9.021464539s" podCreationTimestamp="2026-03-18 09:13:02 +0000 UTC" firstStartedPulling="2026-03-18 09:13:03.117962433 +0000 UTC m=+536.132221099" lastFinishedPulling="2026-03-18 09:13:09.602841846 +0000 UTC m=+542.617100512" observedRunningTime="2026-03-18 09:13:10.997331623 +0000 UTC m=+544.011590289" watchObservedRunningTime="2026-03-18 09:13:11.021464539 +0000 UTC m=+544.035723225" Mar 18 09:13:11.041629 master-0 kubenswrapper[28766]: I0318 09:13:11.041576 28766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-7bb4cc7c98-vthq7" Mar 18 09:13:11.049072 master-0 kubenswrapper[28766]: I0318 09:13:11.047017 28766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-9b8c8685d-882nf" 
podStartSLOduration=3.28491868 podStartE2EDuration="9.046995451s" podCreationTimestamp="2026-03-18 09:13:02 +0000 UTC" firstStartedPulling="2026-03-18 09:13:03.836265948 +0000 UTC m=+536.850524614" lastFinishedPulling="2026-03-18 09:13:09.598342729 +0000 UTC m=+542.612601385" observedRunningTime="2026-03-18 09:13:11.045094552 +0000 UTC m=+544.059353218" watchObservedRunningTime="2026-03-18 09:13:11.046995451 +0000 UTC m=+544.061254117" Mar 18 09:13:11.080591 master-0 kubenswrapper[28766]: I0318 09:13:11.079335 28766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-5f558f5558-6r6gh" podStartSLOduration=3.97802289 podStartE2EDuration="9.079313409s" podCreationTimestamp="2026-03-18 09:13:02 +0000 UTC" firstStartedPulling="2026-03-18 09:13:04.49743441 +0000 UTC m=+537.511693076" lastFinishedPulling="2026-03-18 09:13:09.598724919 +0000 UTC m=+542.612983595" observedRunningTime="2026-03-18 09:13:11.078530269 +0000 UTC m=+544.092788945" watchObservedRunningTime="2026-03-18 09:13:11.079313409 +0000 UTC m=+544.093572115" Mar 18 09:13:11.892137 master-0 kubenswrapper[28766]: I0318 09:13:11.892052 28766 generic.go:334] "Generic (PLEG): container finished" podID="559d9b30-44e9-4cdd-8c46-7cab6e8f2285" containerID="6d2873794bfd79fad41c43d442d4ae68b2fd6e1bb478884913a983d2142d8c9f" exitCode=0 Mar 18 09:13:11.892680 master-0 kubenswrapper[28766]: I0318 09:13:11.892161 28766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-czkll" event={"ID":"559d9b30-44e9-4cdd-8c46-7cab6e8f2285","Type":"ContainerDied","Data":"6d2873794bfd79fad41c43d442d4ae68b2fd6e1bb478884913a983d2142d8c9f"} Mar 18 09:13:12.911186 master-0 kubenswrapper[28766]: I0318 09:13:12.909962 28766 generic.go:334] "Generic (PLEG): container finished" podID="559d9b30-44e9-4cdd-8c46-7cab6e8f2285" containerID="0b7bc4eacbf5b257f76c184f7446468e3967668fb6c7ee582b52acce565d2a1d" exitCode=0 Mar 18 09:13:12.911186 master-0 kubenswrapper[28766]: 
I0318 09:13:12.911077 28766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-czkll" event={"ID":"559d9b30-44e9-4cdd-8c46-7cab6e8f2285","Type":"ContainerDied","Data":"0b7bc4eacbf5b257f76c184f7446468e3967668fb6c7ee582b52acce565d2a1d"} Mar 18 09:13:13.693964 master-0 kubenswrapper[28766]: I0318 09:13:13.693899 28766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-bd9fbc6c9-5fb2s" Mar 18 09:13:13.693964 master-0 kubenswrapper[28766]: I0318 09:13:13.693961 28766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-bd9fbc6c9-5fb2s" Mar 18 09:13:13.701682 master-0 kubenswrapper[28766]: I0318 09:13:13.701574 28766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-bd9fbc6c9-5fb2s" Mar 18 09:13:13.927713 master-0 kubenswrapper[28766]: I0318 09:13:13.927627 28766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-czkll" event={"ID":"559d9b30-44e9-4cdd-8c46-7cab6e8f2285","Type":"ContainerStarted","Data":"ca42cd668aaa361556fc3fc3094b6a7d3c15f8fed9deca9fed28245817ce673b"} Mar 18 09:13:13.927713 master-0 kubenswrapper[28766]: I0318 09:13:13.927697 28766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-czkll" event={"ID":"559d9b30-44e9-4cdd-8c46-7cab6e8f2285","Type":"ContainerStarted","Data":"98266033e3a45eafb13f6798f33f8f5b04523765c2da12ea10e65c4890238a01"} Mar 18 09:13:13.927713 master-0 kubenswrapper[28766]: I0318 09:13:13.927713 28766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-czkll" event={"ID":"559d9b30-44e9-4cdd-8c46-7cab6e8f2285","Type":"ContainerStarted","Data":"b33c9fdc76ac82dd55d03f96afb491ef2059e0dadc1a3f9714fd303009259a10"} Mar 18 09:13:13.927713 master-0 kubenswrapper[28766]: I0318 09:13:13.927728 28766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-czkll" 
event={"ID":"559d9b30-44e9-4cdd-8c46-7cab6e8f2285","Type":"ContainerStarted","Data":"7bc716ed3307bb50f34550eef0a8f94cfb7d033b99bdb8d23ea4fa0833378f52"} Mar 18 09:13:13.934013 master-0 kubenswrapper[28766]: I0318 09:13:13.932398 28766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-bd9fbc6c9-5fb2s" Mar 18 09:13:14.620692 master-0 kubenswrapper[28766]: I0318 09:13:14.592390 28766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-657dc898cd-mhjh7"] Mar 18 09:13:14.954259 master-0 kubenswrapper[28766]: I0318 09:13:14.954076 28766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-czkll" event={"ID":"559d9b30-44e9-4cdd-8c46-7cab6e8f2285","Type":"ContainerStarted","Data":"9e6c6de29cf16740fa878877af0205f610f8b2b0614a8bc3d692b438d54b9287"} Mar 18 09:13:14.954259 master-0 kubenswrapper[28766]: I0318 09:13:14.954183 28766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-czkll" event={"ID":"559d9b30-44e9-4cdd-8c46-7cab6e8f2285","Type":"ContainerStarted","Data":"8d6d0f6117f90281096fcf30f57b3646018668652ca33b5f0df497231649a8ac"} Mar 18 09:13:14.954963 master-0 kubenswrapper[28766]: I0318 09:13:14.954345 28766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-czkll" Mar 18 09:13:14.987394 master-0 kubenswrapper[28766]: I0318 09:13:14.987212 28766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-czkll" podStartSLOduration=6.362955732 podStartE2EDuration="14.987170963s" podCreationTimestamp="2026-03-18 09:13:00 +0000 UTC" firstStartedPulling="2026-03-18 09:13:01.004542457 +0000 UTC m=+534.018801123" lastFinishedPulling="2026-03-18 09:13:09.628757688 +0000 UTC m=+542.643016354" observedRunningTime="2026-03-18 09:13:14.985988891 +0000 UTC m=+548.000247597" watchObservedRunningTime="2026-03-18 09:13:14.987170963 +0000 UTC m=+548.001429669" Mar 18 
09:13:15.809993 master-0 kubenswrapper[28766]: I0318 09:13:15.809919 28766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-czkll" Mar 18 09:13:15.861547 master-0 kubenswrapper[28766]: I0318 09:13:15.861469 28766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-czkll" Mar 18 09:13:18.028295 master-0 kubenswrapper[28766]: I0318 09:13:18.028241 28766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-sngqk" Mar 18 09:13:20.992751 master-0 kubenswrapper[28766]: I0318 09:13:20.992630 28766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-nqlq6" Mar 18 09:13:22.744459 master-0 kubenswrapper[28766]: I0318 09:13:22.744368 28766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-g7bjn" Mar 18 09:13:23.636505 master-0 kubenswrapper[28766]: I0318 09:13:23.635999 28766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-5f558f5558-6r6gh" Mar 18 09:13:28.668949 master-0 kubenswrapper[28766]: I0318 09:13:28.668903 28766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-storage/vg-manager-h82tw"] Mar 18 09:13:28.670610 master-0 kubenswrapper[28766]: I0318 09:13:28.670589 28766 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-storage/vg-manager-h82tw" Mar 18 09:13:28.672499 master-0 kubenswrapper[28766]: I0318 09:13:28.672483 28766 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-storage"/"vg-manager-metrics-cert" Mar 18 09:13:28.685471 master-0 kubenswrapper[28766]: I0318 09:13:28.685433 28766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-storage/vg-manager-h82tw"] Mar 18 09:13:28.771195 master-0 kubenswrapper[28766]: I0318 09:13:28.771134 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"file-lock-dir\" (UniqueName: \"kubernetes.io/host-path/91e21746-efd7-40be-98d2-e4ef28aa2713-file-lock-dir\") pod \"vg-manager-h82tw\" (UID: \"91e21746-efd7-40be-98d2-e4ef28aa2713\") " pod="openshift-storage/vg-manager-h82tw" Mar 18 09:13:28.771536 master-0 kubenswrapper[28766]: I0318 09:13:28.771507 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-volumes-dir\" (UniqueName: \"kubernetes.io/host-path/91e21746-efd7-40be-98d2-e4ef28aa2713-pod-volumes-dir\") pod \"vg-manager-h82tw\" (UID: \"91e21746-efd7-40be-98d2-e4ef28aa2713\") " pod="openshift-storage/vg-manager-h82tw" Mar 18 09:13:28.771712 master-0 kubenswrapper[28766]: I0318 09:13:28.771686 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/91e21746-efd7-40be-98d2-e4ef28aa2713-registration-dir\") pod \"vg-manager-h82tw\" (UID: \"91e21746-efd7-40be-98d2-e4ef28aa2713\") " pod="openshift-storage/vg-manager-h82tw" Mar 18 09:13:28.771944 master-0 kubenswrapper[28766]: I0318 09:13:28.771916 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-udev\" (UniqueName: \"kubernetes.io/host-path/91e21746-efd7-40be-98d2-e4ef28aa2713-run-udev\") pod \"vg-manager-h82tw\" (UID: 
\"91e21746-efd7-40be-98d2-e4ef28aa2713\") " pod="openshift-storage/vg-manager-h82tw" Mar 18 09:13:28.772210 master-0 kubenswrapper[28766]: I0318 09:13:28.772181 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-cert\" (UniqueName: \"kubernetes.io/secret/91e21746-efd7-40be-98d2-e4ef28aa2713-metrics-cert\") pod \"vg-manager-h82tw\" (UID: \"91e21746-efd7-40be-98d2-e4ef28aa2713\") " pod="openshift-storage/vg-manager-h82tw" Mar 18 09:13:28.772365 master-0 kubenswrapper[28766]: I0318 09:13:28.772340 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-plugin-dir\" (UniqueName: \"kubernetes.io/host-path/91e21746-efd7-40be-98d2-e4ef28aa2713-node-plugin-dir\") pod \"vg-manager-h82tw\" (UID: \"91e21746-efd7-40be-98d2-e4ef28aa2713\") " pod="openshift-storage/vg-manager-h82tw" Mar 18 09:13:28.772605 master-0 kubenswrapper[28766]: I0318 09:13:28.772577 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mp75x\" (UniqueName: \"kubernetes.io/projected/91e21746-efd7-40be-98d2-e4ef28aa2713-kube-api-access-mp75x\") pod \"vg-manager-h82tw\" (UID: \"91e21746-efd7-40be-98d2-e4ef28aa2713\") " pod="openshift-storage/vg-manager-h82tw" Mar 18 09:13:28.772824 master-0 kubenswrapper[28766]: I0318 09:13:28.772794 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-plugin-dir\" (UniqueName: \"kubernetes.io/host-path/91e21746-efd7-40be-98d2-e4ef28aa2713-csi-plugin-dir\") pod \"vg-manager-h82tw\" (UID: \"91e21746-efd7-40be-98d2-e4ef28aa2713\") " pod="openshift-storage/vg-manager-h82tw" Mar 18 09:13:28.773089 master-0 kubenswrapper[28766]: I0318 09:13:28.773062 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"device-dir\" (UniqueName: \"kubernetes.io/host-path/91e21746-efd7-40be-98d2-e4ef28aa2713-device-dir\") 
pod \"vg-manager-h82tw\" (UID: \"91e21746-efd7-40be-98d2-e4ef28aa2713\") " pod="openshift-storage/vg-manager-h82tw" Mar 18 09:13:28.773264 master-0 kubenswrapper[28766]: I0318 09:13:28.773236 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lvmd-config\" (UniqueName: \"kubernetes.io/host-path/91e21746-efd7-40be-98d2-e4ef28aa2713-lvmd-config\") pod \"vg-manager-h82tw\" (UID: \"91e21746-efd7-40be-98d2-e4ef28aa2713\") " pod="openshift-storage/vg-manager-h82tw" Mar 18 09:13:28.773457 master-0 kubenswrapper[28766]: I0318 09:13:28.773432 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/91e21746-efd7-40be-98d2-e4ef28aa2713-sys\") pod \"vg-manager-h82tw\" (UID: \"91e21746-efd7-40be-98d2-e4ef28aa2713\") " pod="openshift-storage/vg-manager-h82tw" Mar 18 09:13:28.875922 master-0 kubenswrapper[28766]: I0318 09:13:28.875799 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-plugin-dir\" (UniqueName: \"kubernetes.io/host-path/91e21746-efd7-40be-98d2-e4ef28aa2713-csi-plugin-dir\") pod \"vg-manager-h82tw\" (UID: \"91e21746-efd7-40be-98d2-e4ef28aa2713\") " pod="openshift-storage/vg-manager-h82tw" Mar 18 09:13:28.875922 master-0 kubenswrapper[28766]: I0318 09:13:28.875912 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"device-dir\" (UniqueName: \"kubernetes.io/host-path/91e21746-efd7-40be-98d2-e4ef28aa2713-device-dir\") pod \"vg-manager-h82tw\" (UID: \"91e21746-efd7-40be-98d2-e4ef28aa2713\") " pod="openshift-storage/vg-manager-h82tw" Mar 18 09:13:28.876310 master-0 kubenswrapper[28766]: I0318 09:13:28.875942 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lvmd-config\" (UniqueName: \"kubernetes.io/host-path/91e21746-efd7-40be-98d2-e4ef28aa2713-lvmd-config\") pod \"vg-manager-h82tw\" (UID: 
\"91e21746-efd7-40be-98d2-e4ef28aa2713\") " pod="openshift-storage/vg-manager-h82tw"
Mar 18 09:13:28.876310 master-0 kubenswrapper[28766]: I0318 09:13:28.875975 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/91e21746-efd7-40be-98d2-e4ef28aa2713-sys\") pod \"vg-manager-h82tw\" (UID: \"91e21746-efd7-40be-98d2-e4ef28aa2713\") " pod="openshift-storage/vg-manager-h82tw"
Mar 18 09:13:28.876310 master-0 kubenswrapper[28766]: I0318 09:13:28.876030 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"file-lock-dir\" (UniqueName: \"kubernetes.io/host-path/91e21746-efd7-40be-98d2-e4ef28aa2713-file-lock-dir\") pod \"vg-manager-h82tw\" (UID: \"91e21746-efd7-40be-98d2-e4ef28aa2713\") " pod="openshift-storage/vg-manager-h82tw"
Mar 18 09:13:28.876310 master-0 kubenswrapper[28766]: I0318 09:13:28.876052 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-volumes-dir\" (UniqueName: \"kubernetes.io/host-path/91e21746-efd7-40be-98d2-e4ef28aa2713-pod-volumes-dir\") pod \"vg-manager-h82tw\" (UID: \"91e21746-efd7-40be-98d2-e4ef28aa2713\") " pod="openshift-storage/vg-manager-h82tw"
Mar 18 09:13:28.876310 master-0 kubenswrapper[28766]: I0318 09:13:28.876083 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/91e21746-efd7-40be-98d2-e4ef28aa2713-registration-dir\") pod \"vg-manager-h82tw\" (UID: \"91e21746-efd7-40be-98d2-e4ef28aa2713\") " pod="openshift-storage/vg-manager-h82tw"
Mar 18 09:13:28.876310 master-0 kubenswrapper[28766]: I0318 09:13:28.876108 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-udev\" (UniqueName: \"kubernetes.io/host-path/91e21746-efd7-40be-98d2-e4ef28aa2713-run-udev\") pod \"vg-manager-h82tw\" (UID: \"91e21746-efd7-40be-98d2-e4ef28aa2713\") " pod="openshift-storage/vg-manager-h82tw"
Mar 18 09:13:28.876310 master-0 kubenswrapper[28766]: I0318 09:13:28.876137 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-cert\" (UniqueName: \"kubernetes.io/secret/91e21746-efd7-40be-98d2-e4ef28aa2713-metrics-cert\") pod \"vg-manager-h82tw\" (UID: \"91e21746-efd7-40be-98d2-e4ef28aa2713\") " pod="openshift-storage/vg-manager-h82tw"
Mar 18 09:13:28.876310 master-0 kubenswrapper[28766]: I0318 09:13:28.876160 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-plugin-dir\" (UniqueName: \"kubernetes.io/host-path/91e21746-efd7-40be-98d2-e4ef28aa2713-node-plugin-dir\") pod \"vg-manager-h82tw\" (UID: \"91e21746-efd7-40be-98d2-e4ef28aa2713\") " pod="openshift-storage/vg-manager-h82tw"
Mar 18 09:13:28.876310 master-0 kubenswrapper[28766]: I0318 09:13:28.876219 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mp75x\" (UniqueName: \"kubernetes.io/projected/91e21746-efd7-40be-98d2-e4ef28aa2713-kube-api-access-mp75x\") pod \"vg-manager-h82tw\" (UID: \"91e21746-efd7-40be-98d2-e4ef28aa2713\") " pod="openshift-storage/vg-manager-h82tw"
Mar 18 09:13:28.877518 master-0 kubenswrapper[28766]: I0318 09:13:28.876836 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-plugin-dir\" (UniqueName: \"kubernetes.io/host-path/91e21746-efd7-40be-98d2-e4ef28aa2713-csi-plugin-dir\") pod \"vg-manager-h82tw\" (UID: \"91e21746-efd7-40be-98d2-e4ef28aa2713\") " pod="openshift-storage/vg-manager-h82tw"
Mar 18 09:13:28.877518 master-0 kubenswrapper[28766]: I0318 09:13:28.876918 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"device-dir\" (UniqueName: \"kubernetes.io/host-path/91e21746-efd7-40be-98d2-e4ef28aa2713-device-dir\") pod \"vg-manager-h82tw\" (UID: \"91e21746-efd7-40be-98d2-e4ef28aa2713\") " pod="openshift-storage/vg-manager-h82tw"
Mar 18 09:13:28.877518 master-0 kubenswrapper[28766]: I0318 09:13:28.877074 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lvmd-config\" (UniqueName: \"kubernetes.io/host-path/91e21746-efd7-40be-98d2-e4ef28aa2713-lvmd-config\") pod \"vg-manager-h82tw\" (UID: \"91e21746-efd7-40be-98d2-e4ef28aa2713\") " pod="openshift-storage/vg-manager-h82tw"
Mar 18 09:13:28.877518 master-0 kubenswrapper[28766]: I0318 09:13:28.877152 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/91e21746-efd7-40be-98d2-e4ef28aa2713-registration-dir\") pod \"vg-manager-h82tw\" (UID: \"91e21746-efd7-40be-98d2-e4ef28aa2713\") " pod="openshift-storage/vg-manager-h82tw"
Mar 18 09:13:28.877518 master-0 kubenswrapper[28766]: I0318 09:13:28.877212 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-udev\" (UniqueName: \"kubernetes.io/host-path/91e21746-efd7-40be-98d2-e4ef28aa2713-run-udev\") pod \"vg-manager-h82tw\" (UID: \"91e21746-efd7-40be-98d2-e4ef28aa2713\") " pod="openshift-storage/vg-manager-h82tw"
Mar 18 09:13:28.877518 master-0 kubenswrapper[28766]: I0318 09:13:28.877307 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/91e21746-efd7-40be-98d2-e4ef28aa2713-sys\") pod \"vg-manager-h82tw\" (UID: \"91e21746-efd7-40be-98d2-e4ef28aa2713\") " pod="openshift-storage/vg-manager-h82tw"
Mar 18 09:13:28.877518 master-0 kubenswrapper[28766]: I0318 09:13:28.877379 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-volumes-dir\" (UniqueName: \"kubernetes.io/host-path/91e21746-efd7-40be-98d2-e4ef28aa2713-pod-volumes-dir\") pod \"vg-manager-h82tw\" (UID: \"91e21746-efd7-40be-98d2-e4ef28aa2713\") " pod="openshift-storage/vg-manager-h82tw"
Mar 18 09:13:28.877518 master-0 kubenswrapper[28766]: I0318 09:13:28.877522 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"file-lock-dir\" (UniqueName: \"kubernetes.io/host-path/91e21746-efd7-40be-98d2-e4ef28aa2713-file-lock-dir\") pod \"vg-manager-h82tw\" (UID: \"91e21746-efd7-40be-98d2-e4ef28aa2713\") " pod="openshift-storage/vg-manager-h82tw"
Mar 18 09:13:28.878485 master-0 kubenswrapper[28766]: I0318 09:13:28.877754 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-plugin-dir\" (UniqueName: \"kubernetes.io/host-path/91e21746-efd7-40be-98d2-e4ef28aa2713-node-plugin-dir\") pod \"vg-manager-h82tw\" (UID: \"91e21746-efd7-40be-98d2-e4ef28aa2713\") " pod="openshift-storage/vg-manager-h82tw"
Mar 18 09:13:28.896266 master-0 kubenswrapper[28766]: I0318 09:13:28.892256 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-cert\" (UniqueName: \"kubernetes.io/secret/91e21746-efd7-40be-98d2-e4ef28aa2713-metrics-cert\") pod \"vg-manager-h82tw\" (UID: \"91e21746-efd7-40be-98d2-e4ef28aa2713\") " pod="openshift-storage/vg-manager-h82tw"
Mar 18 09:13:28.899563 master-0 kubenswrapper[28766]: I0318 09:13:28.899496 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mp75x\" (UniqueName: \"kubernetes.io/projected/91e21746-efd7-40be-98d2-e4ef28aa2713-kube-api-access-mp75x\") pod \"vg-manager-h82tw\" (UID: \"91e21746-efd7-40be-98d2-e4ef28aa2713\") " pod="openshift-storage/vg-manager-h82tw"
Mar 18 09:13:28.985426 master-0 kubenswrapper[28766]: I0318 09:13:28.985375 28766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-storage/vg-manager-h82tw"
Mar 18 09:13:29.408351 master-0 kubenswrapper[28766]: I0318 09:13:29.408302 28766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-storage/vg-manager-h82tw"]
Mar 18 09:13:29.411259 master-0 kubenswrapper[28766]: W0318 09:13:29.411205 28766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod91e21746_efd7_40be_98d2_e4ef28aa2713.slice/crio-dfec4d9be7b3c594276d8b443b2d0c680262b18b2e6e48a846406ca813498d58 WatchSource:0}: Error finding container dfec4d9be7b3c594276d8b443b2d0c680262b18b2e6e48a846406ca813498d58: Status 404 returned error can't find the container with id dfec4d9be7b3c594276d8b443b2d0c680262b18b2e6e48a846406ca813498d58
Mar 18 09:13:30.113625 master-0 kubenswrapper[28766]: I0318 09:13:30.113556 28766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-storage/vg-manager-h82tw" event={"ID":"91e21746-efd7-40be-98d2-e4ef28aa2713","Type":"ContainerStarted","Data":"8474734fcfb35878881745ab5b30a432043e1c59ef624c4252a070ff9c624d8e"}
Mar 18 09:13:30.113625 master-0 kubenswrapper[28766]: I0318 09:13:30.113610 28766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-storage/vg-manager-h82tw" event={"ID":"91e21746-efd7-40be-98d2-e4ef28aa2713","Type":"ContainerStarted","Data":"dfec4d9be7b3c594276d8b443b2d0c680262b18b2e6e48a846406ca813498d58"}
Mar 18 09:13:30.141713 master-0 kubenswrapper[28766]: I0318 09:13:30.141603 28766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-storage/vg-manager-h82tw" podStartSLOduration=2.141579718 podStartE2EDuration="2.141579718s" podCreationTimestamp="2026-03-18 09:13:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 09:13:30.14012014 +0000 UTC m=+563.154378806" watchObservedRunningTime="2026-03-18 09:13:30.141579718 +0000 UTC m=+563.155838414"
Mar 18 09:13:30.813095 master-0 kubenswrapper[28766]: I0318 09:13:30.812981 28766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-czkll"
Mar 18 09:13:32.140734 master-0 kubenswrapper[28766]: I0318 09:13:32.140676 28766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-storage_vg-manager-h82tw_91e21746-efd7-40be-98d2-e4ef28aa2713/vg-manager/0.log"
Mar 18 09:13:32.141299 master-0 kubenswrapper[28766]: I0318 09:13:32.140749 28766 generic.go:334] "Generic (PLEG): container finished" podID="91e21746-efd7-40be-98d2-e4ef28aa2713" containerID="8474734fcfb35878881745ab5b30a432043e1c59ef624c4252a070ff9c624d8e" exitCode=1
Mar 18 09:13:32.141299 master-0 kubenswrapper[28766]: I0318 09:13:32.140787 28766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-storage/vg-manager-h82tw" event={"ID":"91e21746-efd7-40be-98d2-e4ef28aa2713","Type":"ContainerDied","Data":"8474734fcfb35878881745ab5b30a432043e1c59ef624c4252a070ff9c624d8e"}
Mar 18 09:13:32.142482 master-0 kubenswrapper[28766]: I0318 09:13:32.141510 28766 scope.go:117] "RemoveContainer" containerID="8474734fcfb35878881745ab5b30a432043e1c59ef624c4252a070ff9c624d8e"
Mar 18 09:13:32.489158 master-0 kubenswrapper[28766]: I0318 09:13:32.489054 28766 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/topolvm.io-reg.sock"
Mar 18 09:13:33.149728 master-0 kubenswrapper[28766]: I0318 09:13:33.149666 28766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-storage_vg-manager-h82tw_91e21746-efd7-40be-98d2-e4ef28aa2713/vg-manager/0.log"
Mar 18 09:13:33.149728 master-0 kubenswrapper[28766]: I0318 09:13:33.149724 28766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-storage/vg-manager-h82tw" event={"ID":"91e21746-efd7-40be-98d2-e4ef28aa2713","Type":"ContainerStarted","Data":"46f56915f74b20ff195508ae902383bf6be5af9387f013a6ad776df0a6e6cc0e"}
Mar 18 09:13:33.231470 master-0 kubenswrapper[28766]: I0318 09:13:33.231345 28766 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/topolvm.io-reg.sock","Timestamp":"2026-03-18T09:13:32.489116998Z","Handler":null,"Name":""}
Mar 18 09:13:33.236527 master-0 kubenswrapper[28766]: I0318 09:13:33.236473 28766 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: topolvm.io endpoint: /var/lib/kubelet/plugins/topolvm.io/node/csi-topolvm.sock versions: 1.0.0
Mar 18 09:13:33.236527 master-0 kubenswrapper[28766]: I0318 09:13:33.236518 28766 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: topolvm.io at endpoint: /var/lib/kubelet/plugins/topolvm.io/node/csi-topolvm.sock
Mar 18 09:13:38.985642 master-0 kubenswrapper[28766]: I0318 09:13:38.985587 28766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-storage/vg-manager-h82tw"
Mar 18 09:13:38.987832 master-0 kubenswrapper[28766]: I0318 09:13:38.987742 28766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-storage/vg-manager-h82tw"
Mar 18 09:13:39.215004 master-0 kubenswrapper[28766]: I0318 09:13:39.214921 28766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-storage/vg-manager-h82tw"
Mar 18 09:13:39.216235 master-0 kubenswrapper[28766]: I0318 09:13:39.216191 28766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-storage/vg-manager-h82tw"
Mar 18 09:13:39.668224 master-0 kubenswrapper[28766]: I0318 09:13:39.668120 28766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-657dc898cd-mhjh7" podUID="8b6dbc8f-2a16-4c68-a049-1f5b271623ff" containerName="console" containerID="cri-o://f0735329b2e80c40c6d094fd9c879ce76f9fa645657bcbd9d8ba68a1fc0e82e3" gracePeriod=15
Mar 18 09:13:40.179955 master-0 kubenswrapper[28766]: I0318 09:13:40.179905 28766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-657dc898cd-mhjh7_8b6dbc8f-2a16-4c68-a049-1f5b271623ff/console/0.log"
Mar 18 09:13:40.180553 master-0 kubenswrapper[28766]: I0318 09:13:40.180003 28766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-657dc898cd-mhjh7"
Mar 18 09:13:40.239821 master-0 kubenswrapper[28766]: I0318 09:13:40.235287 28766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-657dc898cd-mhjh7_8b6dbc8f-2a16-4c68-a049-1f5b271623ff/console/0.log"
Mar 18 09:13:40.239821 master-0 kubenswrapper[28766]: I0318 09:13:40.235352 28766 generic.go:334] "Generic (PLEG): container finished" podID="8b6dbc8f-2a16-4c68-a049-1f5b271623ff" containerID="f0735329b2e80c40c6d094fd9c879ce76f9fa645657bcbd9d8ba68a1fc0e82e3" exitCode=2
Mar 18 09:13:40.239821 master-0 kubenswrapper[28766]: I0318 09:13:40.235398 28766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-657dc898cd-mhjh7"
Mar 18 09:13:40.239821 master-0 kubenswrapper[28766]: I0318 09:13:40.235872 28766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-657dc898cd-mhjh7" event={"ID":"8b6dbc8f-2a16-4c68-a049-1f5b271623ff","Type":"ContainerDied","Data":"f0735329b2e80c40c6d094fd9c879ce76f9fa645657bcbd9d8ba68a1fc0e82e3"}
Mar 18 09:13:40.239821 master-0 kubenswrapper[28766]: I0318 09:13:40.235904 28766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-657dc898cd-mhjh7" event={"ID":"8b6dbc8f-2a16-4c68-a049-1f5b271623ff","Type":"ContainerDied","Data":"a37b58a26a583b730242a5866957d99a663ec889b8f7223d9b8898968f3b61bb"}
Mar 18 09:13:40.239821 master-0 kubenswrapper[28766]: I0318 09:13:40.235926 28766 scope.go:117] "RemoveContainer" containerID="f0735329b2e80c40c6d094fd9c879ce76f9fa645657bcbd9d8ba68a1fc0e82e3"
Mar 18 09:13:40.273001 master-0 kubenswrapper[28766]: I0318 09:13:40.272838 28766 scope.go:117] "RemoveContainer" containerID="f0735329b2e80c40c6d094fd9c879ce76f9fa645657bcbd9d8ba68a1fc0e82e3"
Mar 18 09:13:40.273338 master-0 kubenswrapper[28766]: E0318 09:13:40.273310 28766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f0735329b2e80c40c6d094fd9c879ce76f9fa645657bcbd9d8ba68a1fc0e82e3\": container with ID starting with f0735329b2e80c40c6d094fd9c879ce76f9fa645657bcbd9d8ba68a1fc0e82e3 not found: ID does not exist" containerID="f0735329b2e80c40c6d094fd9c879ce76f9fa645657bcbd9d8ba68a1fc0e82e3"
Mar 18 09:13:40.273430 master-0 kubenswrapper[28766]: I0318 09:13:40.273341 28766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f0735329b2e80c40c6d094fd9c879ce76f9fa645657bcbd9d8ba68a1fc0e82e3"} err="failed to get container status \"f0735329b2e80c40c6d094fd9c879ce76f9fa645657bcbd9d8ba68a1fc0e82e3\": rpc error: code = NotFound desc = could not find container \"f0735329b2e80c40c6d094fd9c879ce76f9fa645657bcbd9d8ba68a1fc0e82e3\": container with ID starting with f0735329b2e80c40c6d094fd9c879ce76f9fa645657bcbd9d8ba68a1fc0e82e3 not found: ID does not exist"
Mar 18 09:13:40.337812 master-0 kubenswrapper[28766]: I0318 09:13:40.337696 28766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/8b6dbc8f-2a16-4c68-a049-1f5b271623ff-oauth-serving-cert\") pod \"8b6dbc8f-2a16-4c68-a049-1f5b271623ff\" (UID: \"8b6dbc8f-2a16-4c68-a049-1f5b271623ff\") "
Mar 18 09:13:40.338244 master-0 kubenswrapper[28766]: I0318 09:13:40.337973 28766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mcwp6\" (UniqueName: \"kubernetes.io/projected/8b6dbc8f-2a16-4c68-a049-1f5b271623ff-kube-api-access-mcwp6\") pod \"8b6dbc8f-2a16-4c68-a049-1f5b271623ff\" (UID: \"8b6dbc8f-2a16-4c68-a049-1f5b271623ff\") "
Mar 18 09:13:40.338244 master-0 kubenswrapper[28766]: I0318 09:13:40.338020 28766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/8b6dbc8f-2a16-4c68-a049-1f5b271623ff-console-oauth-config\") pod \"8b6dbc8f-2a16-4c68-a049-1f5b271623ff\" (UID: \"8b6dbc8f-2a16-4c68-a049-1f5b271623ff\") "
Mar 18 09:13:40.338244 master-0 kubenswrapper[28766]: I0318 09:13:40.338146 28766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8b6dbc8f-2a16-4c68-a049-1f5b271623ff-trusted-ca-bundle\") pod \"8b6dbc8f-2a16-4c68-a049-1f5b271623ff\" (UID: \"8b6dbc8f-2a16-4c68-a049-1f5b271623ff\") "
Mar 18 09:13:40.338244 master-0 kubenswrapper[28766]: I0318 09:13:40.338188 28766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8b6dbc8f-2a16-4c68-a049-1f5b271623ff-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "8b6dbc8f-2a16-4c68-a049-1f5b271623ff" (UID: "8b6dbc8f-2a16-4c68-a049-1f5b271623ff"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 18 09:13:40.338244 master-0 kubenswrapper[28766]: I0318 09:13:40.338220 28766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/8b6dbc8f-2a16-4c68-a049-1f5b271623ff-console-config\") pod \"8b6dbc8f-2a16-4c68-a049-1f5b271623ff\" (UID: \"8b6dbc8f-2a16-4c68-a049-1f5b271623ff\") "
Mar 18 09:13:40.338716 master-0 kubenswrapper[28766]: I0318 09:13:40.338282 28766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/8b6dbc8f-2a16-4c68-a049-1f5b271623ff-console-serving-cert\") pod \"8b6dbc8f-2a16-4c68-a049-1f5b271623ff\" (UID: \"8b6dbc8f-2a16-4c68-a049-1f5b271623ff\") "
Mar 18 09:13:40.338716 master-0 kubenswrapper[28766]: I0318 09:13:40.338314 28766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/8b6dbc8f-2a16-4c68-a049-1f5b271623ff-service-ca\") pod \"8b6dbc8f-2a16-4c68-a049-1f5b271623ff\" (UID: \"8b6dbc8f-2a16-4c68-a049-1f5b271623ff\") "
Mar 18 09:13:40.338963 master-0 kubenswrapper[28766]: I0318 09:13:40.338809 28766 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/8b6dbc8f-2a16-4c68-a049-1f5b271623ff-oauth-serving-cert\") on node \"master-0\" DevicePath \"\""
Mar 18 09:13:40.340652 master-0 kubenswrapper[28766]: I0318 09:13:40.340426 28766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8b6dbc8f-2a16-4c68-a049-1f5b271623ff-service-ca" (OuterVolumeSpecName: "service-ca") pod "8b6dbc8f-2a16-4c68-a049-1f5b271623ff" (UID: "8b6dbc8f-2a16-4c68-a049-1f5b271623ff"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 18 09:13:40.341295 master-0 kubenswrapper[28766]: I0318 09:13:40.340775 28766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8b6dbc8f-2a16-4c68-a049-1f5b271623ff-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "8b6dbc8f-2a16-4c68-a049-1f5b271623ff" (UID: "8b6dbc8f-2a16-4c68-a049-1f5b271623ff"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 18 09:13:40.341295 master-0 kubenswrapper[28766]: I0318 09:13:40.341169 28766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b6dbc8f-2a16-4c68-a049-1f5b271623ff-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "8b6dbc8f-2a16-4c68-a049-1f5b271623ff" (UID: "8b6dbc8f-2a16-4c68-a049-1f5b271623ff"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 18 09:13:40.341295 master-0 kubenswrapper[28766]: I0318 09:13:40.341221 28766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8b6dbc8f-2a16-4c68-a049-1f5b271623ff-console-config" (OuterVolumeSpecName: "console-config") pod "8b6dbc8f-2a16-4c68-a049-1f5b271623ff" (UID: "8b6dbc8f-2a16-4c68-a049-1f5b271623ff"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 18 09:13:40.341818 master-0 kubenswrapper[28766]: I0318 09:13:40.341770 28766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8b6dbc8f-2a16-4c68-a049-1f5b271623ff-kube-api-access-mcwp6" (OuterVolumeSpecName: "kube-api-access-mcwp6") pod "8b6dbc8f-2a16-4c68-a049-1f5b271623ff" (UID: "8b6dbc8f-2a16-4c68-a049-1f5b271623ff"). InnerVolumeSpecName "kube-api-access-mcwp6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 18 09:13:40.342301 master-0 kubenswrapper[28766]: I0318 09:13:40.342232 28766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b6dbc8f-2a16-4c68-a049-1f5b271623ff-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "8b6dbc8f-2a16-4c68-a049-1f5b271623ff" (UID: "8b6dbc8f-2a16-4c68-a049-1f5b271623ff"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 18 09:13:40.462023 master-0 kubenswrapper[28766]: I0318 09:13:40.458078 28766 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/8b6dbc8f-2a16-4c68-a049-1f5b271623ff-service-ca\") on node \"master-0\" DevicePath \"\""
Mar 18 09:13:40.462023 master-0 kubenswrapper[28766]: I0318 09:13:40.458152 28766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mcwp6\" (UniqueName: \"kubernetes.io/projected/8b6dbc8f-2a16-4c68-a049-1f5b271623ff-kube-api-access-mcwp6\") on node \"master-0\" DevicePath \"\""
Mar 18 09:13:40.462023 master-0 kubenswrapper[28766]: I0318 09:13:40.458170 28766 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/8b6dbc8f-2a16-4c68-a049-1f5b271623ff-console-oauth-config\") on node \"master-0\" DevicePath \"\""
Mar 18 09:13:40.462023 master-0 kubenswrapper[28766]: I0318 09:13:40.458184 28766 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8b6dbc8f-2a16-4c68-a049-1f5b271623ff-trusted-ca-bundle\") on node \"master-0\" DevicePath \"\""
Mar 18 09:13:40.462023 master-0 kubenswrapper[28766]: I0318 09:13:40.458195 28766 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/8b6dbc8f-2a16-4c68-a049-1f5b271623ff-console-config\") on node \"master-0\" DevicePath \"\""
Mar 18 09:13:40.462023 master-0 kubenswrapper[28766]: I0318 09:13:40.458207 28766 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/8b6dbc8f-2a16-4c68-a049-1f5b271623ff-console-serving-cert\") on node \"master-0\" DevicePath \"\""
Mar 18 09:13:40.608875 master-0 kubenswrapper[28766]: I0318 09:13:40.607914 28766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-657dc898cd-mhjh7"]
Mar 18 09:13:40.625875 master-0 kubenswrapper[28766]: I0318 09:13:40.623335 28766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-657dc898cd-mhjh7"]
Mar 18 09:13:41.254711 master-0 kubenswrapper[28766]: I0318 09:13:41.254658 28766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8b6dbc8f-2a16-4c68-a049-1f5b271623ff" path="/var/lib/kubelet/pods/8b6dbc8f-2a16-4c68-a049-1f5b271623ff/volumes"
Mar 18 09:13:41.258692 master-0 kubenswrapper[28766]: I0318 09:13:41.255808 28766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-b2qrx"]
Mar 18 09:13:41.258692 master-0 kubenswrapper[28766]: E0318 09:13:41.256061 28766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b6dbc8f-2a16-4c68-a049-1f5b271623ff" containerName="console"
Mar 18 09:13:41.258692 master-0 kubenswrapper[28766]: I0318 09:13:41.256074 28766 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b6dbc8f-2a16-4c68-a049-1f5b271623ff" containerName="console"
Mar 18 09:13:41.258692 master-0 kubenswrapper[28766]: I0318 09:13:41.256304 28766 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b6dbc8f-2a16-4c68-a049-1f5b271623ff" containerName="console"
Mar 18 09:13:41.259042 master-0 kubenswrapper[28766]: I0318 09:13:41.258748 28766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-b2qrx"
Mar 18 09:13:41.262876 master-0 kubenswrapper[28766]: I0318 09:13:41.260973 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt"
Mar 18 09:13:41.262876 master-0 kubenswrapper[28766]: I0318 09:13:41.261209 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt"
Mar 18 09:13:41.293572 master-0 kubenswrapper[28766]: I0318 09:13:41.289996 28766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-b2qrx"]
Mar 18 09:13:41.373100 master-0 kubenswrapper[28766]: I0318 09:13:41.371099 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jwhwr\" (UniqueName: \"kubernetes.io/projected/e95fe419-05c5-4d6b-ab25-2093ef4a5238-kube-api-access-jwhwr\") pod \"openstack-operator-index-b2qrx\" (UID: \"e95fe419-05c5-4d6b-ab25-2093ef4a5238\") " pod="openstack-operators/openstack-operator-index-b2qrx"
Mar 18 09:13:41.472445 master-0 kubenswrapper[28766]: I0318 09:13:41.472359 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jwhwr\" (UniqueName: \"kubernetes.io/projected/e95fe419-05c5-4d6b-ab25-2093ef4a5238-kube-api-access-jwhwr\") pod \"openstack-operator-index-b2qrx\" (UID: \"e95fe419-05c5-4d6b-ab25-2093ef4a5238\") " pod="openstack-operators/openstack-operator-index-b2qrx"
Mar 18 09:13:41.508027 master-0 kubenswrapper[28766]: I0318 09:13:41.507904 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jwhwr\" (UniqueName: \"kubernetes.io/projected/e95fe419-05c5-4d6b-ab25-2093ef4a5238-kube-api-access-jwhwr\") pod \"openstack-operator-index-b2qrx\" (UID: \"e95fe419-05c5-4d6b-ab25-2093ef4a5238\") " pod="openstack-operators/openstack-operator-index-b2qrx"
Mar 18 09:13:41.655975 master-0 kubenswrapper[28766]: I0318 09:13:41.655880 28766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-b2qrx"
Mar 18 09:13:42.140128 master-0 kubenswrapper[28766]: I0318 09:13:42.139963 28766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-b2qrx"]
Mar 18 09:13:42.169564 master-0 kubenswrapper[28766]: W0318 09:13:42.165700 28766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode95fe419_05c5_4d6b_ab25_2093ef4a5238.slice/crio-cb501224fb9d6b438dcf0ee2fea4a5d6fc938a7ffbae189ad225c26de3683951 WatchSource:0}: Error finding container cb501224fb9d6b438dcf0ee2fea4a5d6fc938a7ffbae189ad225c26de3683951: Status 404 returned error can't find the container with id cb501224fb9d6b438dcf0ee2fea4a5d6fc938a7ffbae189ad225c26de3683951
Mar 18 09:13:42.265365 master-0 kubenswrapper[28766]: I0318 09:13:42.265299 28766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-b2qrx" event={"ID":"e95fe419-05c5-4d6b-ab25-2093ef4a5238","Type":"ContainerStarted","Data":"cb501224fb9d6b438dcf0ee2fea4a5d6fc938a7ffbae189ad225c26de3683951"}
Mar 18 09:13:44.295042 master-0 kubenswrapper[28766]: I0318 09:13:44.294969 28766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-b2qrx" event={"ID":"e95fe419-05c5-4d6b-ab25-2093ef4a5238","Type":"ContainerStarted","Data":"445f6a6d9c8861f131a381333a64be630a4be4c989ad62a1407a001693924502"}
Mar 18 09:13:45.185606 master-0 kubenswrapper[28766]: I0318 09:13:45.185470 28766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-b2qrx" podStartSLOduration=2.415814366 podStartE2EDuration="4.185449079s" podCreationTimestamp="2026-03-18 09:13:41 +0000 UTC" firstStartedPulling="2026-03-18 09:13:42.167911667 +0000 UTC m=+575.182170333" lastFinishedPulling="2026-03-18 09:13:43.93754638 +0000 UTC m=+576.951805046" observedRunningTime="2026-03-18 09:13:44.325902698 +0000 UTC m=+577.340161364" watchObservedRunningTime="2026-03-18 09:13:45.185449079 +0000 UTC m=+578.199707755"
Mar 18 09:13:45.188266 master-0 kubenswrapper[28766]: I0318 09:13:45.188211 28766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-b2qrx"]
Mar 18 09:13:45.799909 master-0 kubenswrapper[28766]: I0318 09:13:45.799803 28766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-5jrbj"]
Mar 18 09:13:45.801658 master-0 kubenswrapper[28766]: I0318 09:13:45.801610 28766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-5jrbj"
Mar 18 09:13:45.814644 master-0 kubenswrapper[28766]: I0318 09:13:45.814565 28766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-5jrbj"]
Mar 18 09:13:45.985789 master-0 kubenswrapper[28766]: I0318 09:13:45.985705 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sv4b7\" (UniqueName: \"kubernetes.io/projected/2c120e60-9c36-4f75-b05e-dec3101889a4-kube-api-access-sv4b7\") pod \"openstack-operator-index-5jrbj\" (UID: \"2c120e60-9c36-4f75-b05e-dec3101889a4\") " pod="openstack-operators/openstack-operator-index-5jrbj"
Mar 18 09:13:46.088260 master-0 kubenswrapper[28766]: I0318 09:13:46.088101 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sv4b7\" (UniqueName: \"kubernetes.io/projected/2c120e60-9c36-4f75-b05e-dec3101889a4-kube-api-access-sv4b7\") pod \"openstack-operator-index-5jrbj\" (UID: \"2c120e60-9c36-4f75-b05e-dec3101889a4\") " pod="openstack-operators/openstack-operator-index-5jrbj"
Mar 18 09:13:46.107923 master-0 kubenswrapper[28766]: I0318 09:13:46.107806 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sv4b7\" (UniqueName: \"kubernetes.io/projected/2c120e60-9c36-4f75-b05e-dec3101889a4-kube-api-access-sv4b7\") pod \"openstack-operator-index-5jrbj\" (UID: \"2c120e60-9c36-4f75-b05e-dec3101889a4\") " pod="openstack-operators/openstack-operator-index-5jrbj"
Mar 18 09:13:46.125497 master-0 kubenswrapper[28766]: I0318 09:13:46.125410 28766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-5jrbj"
Mar 18 09:13:46.329151 master-0 kubenswrapper[28766]: I0318 09:13:46.329053 28766 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-b2qrx" podUID="e95fe419-05c5-4d6b-ab25-2093ef4a5238" containerName="registry-server" containerID="cri-o://445f6a6d9c8861f131a381333a64be630a4be4c989ad62a1407a001693924502" gracePeriod=2
Mar 18 09:13:46.616543 master-0 kubenswrapper[28766]: I0318 09:13:46.616414 28766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-5jrbj"]
Mar 18 09:13:46.650956 master-0 kubenswrapper[28766]: W0318 09:13:46.649069 28766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2c120e60_9c36_4f75_b05e_dec3101889a4.slice/crio-6f4a518ddff239a01f02f93e6a756e1482ab8e2b7798fb24fbfae7d49794db47 WatchSource:0}: Error finding container 6f4a518ddff239a01f02f93e6a756e1482ab8e2b7798fb24fbfae7d49794db47: Status 404 returned error can't find the container with id 6f4a518ddff239a01f02f93e6a756e1482ab8e2b7798fb24fbfae7d49794db47
Mar 18 09:13:46.710224 master-0 kubenswrapper[28766]: I0318 09:13:46.710182 28766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-b2qrx"
Mar 18 09:13:46.904955 master-0 kubenswrapper[28766]: I0318 09:13:46.901834 28766 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jwhwr\" (UniqueName: \"kubernetes.io/projected/e95fe419-05c5-4d6b-ab25-2093ef4a5238-kube-api-access-jwhwr\") pod \"e95fe419-05c5-4d6b-ab25-2093ef4a5238\" (UID: \"e95fe419-05c5-4d6b-ab25-2093ef4a5238\") "
Mar 18 09:13:46.908842 master-0 kubenswrapper[28766]: I0318 09:13:46.908782 28766 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e95fe419-05c5-4d6b-ab25-2093ef4a5238-kube-api-access-jwhwr" (OuterVolumeSpecName: "kube-api-access-jwhwr") pod "e95fe419-05c5-4d6b-ab25-2093ef4a5238" (UID: "e95fe419-05c5-4d6b-ab25-2093ef4a5238"). InnerVolumeSpecName "kube-api-access-jwhwr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 18 09:13:47.004888 master-0 kubenswrapper[28766]: I0318 09:13:47.004775 28766 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jwhwr\" (UniqueName: \"kubernetes.io/projected/e95fe419-05c5-4d6b-ab25-2093ef4a5238-kube-api-access-jwhwr\") on node \"master-0\" DevicePath \"\""
Mar 18 09:13:47.337880 master-0 kubenswrapper[28766]: I0318 09:13:47.337763 28766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-5jrbj" event={"ID":"2c120e60-9c36-4f75-b05e-dec3101889a4","Type":"ContainerStarted","Data":"6f4a518ddff239a01f02f93e6a756e1482ab8e2b7798fb24fbfae7d49794db47"}
Mar 18 09:13:47.340361 master-0 kubenswrapper[28766]: I0318 09:13:47.340300 28766 generic.go:334] "Generic (PLEG): container finished" podID="e95fe419-05c5-4d6b-ab25-2093ef4a5238" containerID="445f6a6d9c8861f131a381333a64be630a4be4c989ad62a1407a001693924502" exitCode=0
Mar 18 09:13:47.340419 master-0 kubenswrapper[28766]: I0318 09:13:47.340366 28766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-b2qrx" event={"ID":"e95fe419-05c5-4d6b-ab25-2093ef4a5238","Type":"ContainerDied","Data":"445f6a6d9c8861f131a381333a64be630a4be4c989ad62a1407a001693924502"}
Mar 18 09:13:47.340462 master-0 kubenswrapper[28766]: I0318 09:13:47.340420 28766 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-b2qrx"
Mar 18 09:13:47.340462 master-0 kubenswrapper[28766]: I0318 09:13:47.340444 28766 scope.go:117] "RemoveContainer" containerID="445f6a6d9c8861f131a381333a64be630a4be4c989ad62a1407a001693924502"
Mar 18 09:13:47.340603 master-0 kubenswrapper[28766]: I0318 09:13:47.340430 28766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-b2qrx" event={"ID":"e95fe419-05c5-4d6b-ab25-2093ef4a5238","Type":"ContainerDied","Data":"cb501224fb9d6b438dcf0ee2fea4a5d6fc938a7ffbae189ad225c26de3683951"}
Mar 18 09:13:47.370546 master-0 kubenswrapper[28766]: I0318 09:13:47.370493 28766 scope.go:117] "RemoveContainer" containerID="445f6a6d9c8861f131a381333a64be630a4be4c989ad62a1407a001693924502"
Mar 18 09:13:47.371523 master-0 kubenswrapper[28766]: E0318 09:13:47.371439 28766 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"445f6a6d9c8861f131a381333a64be630a4be4c989ad62a1407a001693924502\": container with ID starting with 445f6a6d9c8861f131a381333a64be630a4be4c989ad62a1407a001693924502 not found: ID does not exist" containerID="445f6a6d9c8861f131a381333a64be630a4be4c989ad62a1407a001693924502"
Mar 18 09:13:47.371688 master-0 kubenswrapper[28766]: I0318 09:13:47.371523 28766 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"445f6a6d9c8861f131a381333a64be630a4be4c989ad62a1407a001693924502"} err="failed to get container status \"445f6a6d9c8861f131a381333a64be630a4be4c989ad62a1407a001693924502\": rpc error: code = NotFound desc = could not find container \"445f6a6d9c8861f131a381333a64be630a4be4c989ad62a1407a001693924502\": container with ID starting with 445f6a6d9c8861f131a381333a64be630a4be4c989ad62a1407a001693924502 not found: ID does not exist"
Mar 18 09:13:47.372641 master-0 kubenswrapper[28766]: I0318 09:13:47.372591 28766 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-b2qrx"]
Mar 18 09:13:47.390108 master-0 kubenswrapper[28766]: I0318 09:13:47.390034 28766 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-b2qrx"]
Mar 18 09:13:48.352026 master-0 kubenswrapper[28766]: I0318 09:13:48.351925 28766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-5jrbj" event={"ID":"2c120e60-9c36-4f75-b05e-dec3101889a4","Type":"ContainerStarted","Data":"df3ecf41220a9c0993b4253f037472125eddf5a1181829cd97e6d977f4cc257e"}
Mar 18 09:13:48.379866 master-0 kubenswrapper[28766]: I0318 09:13:48.379725 28766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-5jrbj" podStartSLOduration=2.6166756490000003 podStartE2EDuration="3.37969848s" podCreationTimestamp="2026-03-18 09:13:45 +0000 UTC" firstStartedPulling="2026-03-18 09:13:46.652624583 +0000 UTC m=+579.666883249" lastFinishedPulling="2026-03-18 09:13:47.415647404 +0000 UTC m=+580.429906080" observedRunningTime="2026-03-18 09:13:48.370065604 +0000 UTC m=+581.384324280" watchObservedRunningTime="2026-03-18 09:13:48.37969848 +0000 UTC m=+581.393957156"
Mar 18 09:13:49.242022 master-0 kubenswrapper[28766]: I0318 09:13:49.241953 28766 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e95fe419-05c5-4d6b-ab25-2093ef4a5238" path="/var/lib/kubelet/pods/e95fe419-05c5-4d6b-ab25-2093ef4a5238/volumes"
Mar 18 09:13:56.125831 master-0 kubenswrapper[28766]: I0318 09:13:56.125752 28766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-5jrbj"
Mar 18 09:13:56.126523 master-0 kubenswrapper[28766]: I0318 09:13:56.125982 28766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-5jrbj"
Mar 18 09:13:56.168216 master-0 kubenswrapper[28766]: I0318 09:13:56.168141 28766 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-5jrbj"
Mar 18 09:13:56.481522 master-0 kubenswrapper[28766]: I0318 09:13:56.481319 28766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-5jrbj"
Mar 18 09:18:52.957397 master-0 kubenswrapper[28766]: I0318 09:18:52.957239 28766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-jj4w6/must-gather-zz8pq"]
Mar 18 09:18:52.958131 master-0 kubenswrapper[28766]: E0318 09:18:52.957764 28766 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e95fe419-05c5-4d6b-ab25-2093ef4a5238" containerName="registry-server"
Mar 18 09:18:52.958131 master-0 kubenswrapper[28766]: I0318 09:18:52.957786 28766 state_mem.go:107] "Deleted CPUSet assignment" podUID="e95fe419-05c5-4d6b-ab25-2093ef4a5238" containerName="registry-server"
Mar 18 09:18:52.958131 master-0 kubenswrapper[28766]: I0318 09:18:52.958039 28766 memory_manager.go:354] "RemoveStaleState removing state" podUID="e95fe419-05c5-4d6b-ab25-2093ef4a5238" containerName="registry-server"
Mar 18 09:18:52.958914 master-0 kubenswrapper[28766]: I0318 09:18:52.958888 28766 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-must-gather-jj4w6/must-gather-zz8pq" Mar 18 09:18:52.964595 master-0 kubenswrapper[28766]: I0318 09:18:52.964560 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-jj4w6"/"kube-root-ca.crt" Mar 18 09:18:52.964779 master-0 kubenswrapper[28766]: I0318 09:18:52.964755 28766 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-jj4w6"/"openshift-service-ca.crt" Mar 18 09:18:52.966714 master-0 kubenswrapper[28766]: I0318 09:18:52.966654 28766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-jj4w6/must-gather-c77mn"] Mar 18 09:18:52.968507 master-0 kubenswrapper[28766]: I0318 09:18:52.968476 28766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-jj4w6/must-gather-c77mn" Mar 18 09:18:52.987180 master-0 kubenswrapper[28766]: I0318 09:18:52.987114 28766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-jj4w6/must-gather-c77mn"] Mar 18 09:18:52.997937 master-0 kubenswrapper[28766]: I0318 09:18:52.997200 28766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-jj4w6/must-gather-zz8pq"] Mar 18 09:18:53.062327 master-0 kubenswrapper[28766]: I0318 09:18:53.062272 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k2vkq\" (UniqueName: \"kubernetes.io/projected/be938f8f-9219-45ee-b582-fd218cd52c08-kube-api-access-k2vkq\") pod \"must-gather-zz8pq\" (UID: \"be938f8f-9219-45ee-b582-fd218cd52c08\") " pod="openshift-must-gather-jj4w6/must-gather-zz8pq" Mar 18 09:18:53.062550 master-0 kubenswrapper[28766]: I0318 09:18:53.062419 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/be938f8f-9219-45ee-b582-fd218cd52c08-must-gather-output\") pod \"must-gather-zz8pq\" (UID: 
\"be938f8f-9219-45ee-b582-fd218cd52c08\") " pod="openshift-must-gather-jj4w6/must-gather-zz8pq" Mar 18 09:18:53.164132 master-0 kubenswrapper[28766]: I0318 09:18:53.164063 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/5b16a1b5-290c-40eb-a81a-61ec3bc337a1-must-gather-output\") pod \"must-gather-c77mn\" (UID: \"5b16a1b5-290c-40eb-a81a-61ec3bc337a1\") " pod="openshift-must-gather-jj4w6/must-gather-c77mn" Mar 18 09:18:53.164372 master-0 kubenswrapper[28766]: I0318 09:18:53.164201 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/be938f8f-9219-45ee-b582-fd218cd52c08-must-gather-output\") pod \"must-gather-zz8pq\" (UID: \"be938f8f-9219-45ee-b582-fd218cd52c08\") " pod="openshift-must-gather-jj4w6/must-gather-zz8pq" Mar 18 09:18:53.164372 master-0 kubenswrapper[28766]: I0318 09:18:53.164283 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k2vkq\" (UniqueName: \"kubernetes.io/projected/be938f8f-9219-45ee-b582-fd218cd52c08-kube-api-access-k2vkq\") pod \"must-gather-zz8pq\" (UID: \"be938f8f-9219-45ee-b582-fd218cd52c08\") " pod="openshift-must-gather-jj4w6/must-gather-zz8pq" Mar 18 09:18:53.164372 master-0 kubenswrapper[28766]: I0318 09:18:53.164313 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tjjtz\" (UniqueName: \"kubernetes.io/projected/5b16a1b5-290c-40eb-a81a-61ec3bc337a1-kube-api-access-tjjtz\") pod \"must-gather-c77mn\" (UID: \"5b16a1b5-290c-40eb-a81a-61ec3bc337a1\") " pod="openshift-must-gather-jj4w6/must-gather-c77mn" Mar 18 09:18:53.164659 master-0 kubenswrapper[28766]: I0318 09:18:53.164618 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: 
\"kubernetes.io/empty-dir/be938f8f-9219-45ee-b582-fd218cd52c08-must-gather-output\") pod \"must-gather-zz8pq\" (UID: \"be938f8f-9219-45ee-b582-fd218cd52c08\") " pod="openshift-must-gather-jj4w6/must-gather-zz8pq" Mar 18 09:18:53.186146 master-0 kubenswrapper[28766]: I0318 09:18:53.186093 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k2vkq\" (UniqueName: \"kubernetes.io/projected/be938f8f-9219-45ee-b582-fd218cd52c08-kube-api-access-k2vkq\") pod \"must-gather-zz8pq\" (UID: \"be938f8f-9219-45ee-b582-fd218cd52c08\") " pod="openshift-must-gather-jj4w6/must-gather-zz8pq" Mar 18 09:18:53.265986 master-0 kubenswrapper[28766]: I0318 09:18:53.265927 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/5b16a1b5-290c-40eb-a81a-61ec3bc337a1-must-gather-output\") pod \"must-gather-c77mn\" (UID: \"5b16a1b5-290c-40eb-a81a-61ec3bc337a1\") " pod="openshift-must-gather-jj4w6/must-gather-c77mn" Mar 18 09:18:53.266256 master-0 kubenswrapper[28766]: I0318 09:18:53.266077 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tjjtz\" (UniqueName: \"kubernetes.io/projected/5b16a1b5-290c-40eb-a81a-61ec3bc337a1-kube-api-access-tjjtz\") pod \"must-gather-c77mn\" (UID: \"5b16a1b5-290c-40eb-a81a-61ec3bc337a1\") " pod="openshift-must-gather-jj4w6/must-gather-c77mn" Mar 18 09:18:53.266422 master-0 kubenswrapper[28766]: I0318 09:18:53.266389 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/5b16a1b5-290c-40eb-a81a-61ec3bc337a1-must-gather-output\") pod \"must-gather-c77mn\" (UID: \"5b16a1b5-290c-40eb-a81a-61ec3bc337a1\") " pod="openshift-must-gather-jj4w6/must-gather-c77mn" Mar 18 09:18:53.284729 master-0 kubenswrapper[28766]: I0318 09:18:53.282543 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-tjjtz\" (UniqueName: \"kubernetes.io/projected/5b16a1b5-290c-40eb-a81a-61ec3bc337a1-kube-api-access-tjjtz\") pod \"must-gather-c77mn\" (UID: \"5b16a1b5-290c-40eb-a81a-61ec3bc337a1\") " pod="openshift-must-gather-jj4w6/must-gather-c77mn" Mar 18 09:18:53.284729 master-0 kubenswrapper[28766]: I0318 09:18:53.283163 28766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-jj4w6/must-gather-zz8pq" Mar 18 09:18:53.295332 master-0 kubenswrapper[28766]: I0318 09:18:53.295271 28766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-jj4w6/must-gather-c77mn" Mar 18 09:18:53.765524 master-0 kubenswrapper[28766]: I0318 09:18:53.765467 28766 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 18 09:18:53.779298 master-0 kubenswrapper[28766]: I0318 09:18:53.779088 28766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-jj4w6/must-gather-zz8pq"] Mar 18 09:18:53.808580 master-0 kubenswrapper[28766]: I0318 09:18:53.808457 28766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-jj4w6/must-gather-c77mn"] Mar 18 09:18:53.825213 master-0 kubenswrapper[28766]: W0318 09:18:53.824599 28766 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5b16a1b5_290c_40eb_a81a_61ec3bc337a1.slice/crio-912808b3ad421db7bd187b61faf59f89d4bf131ad3be386d51e6f7834ee034c2 WatchSource:0}: Error finding container 912808b3ad421db7bd187b61faf59f89d4bf131ad3be386d51e6f7834ee034c2: Status 404 returned error can't find the container with id 912808b3ad421db7bd187b61faf59f89d4bf131ad3be386d51e6f7834ee034c2 Mar 18 09:18:53.934821 master-0 kubenswrapper[28766]: I0318 09:18:53.934286 28766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-jj4w6/must-gather-c77mn" 
event={"ID":"5b16a1b5-290c-40eb-a81a-61ec3bc337a1","Type":"ContainerStarted","Data":"912808b3ad421db7bd187b61faf59f89d4bf131ad3be386d51e6f7834ee034c2"} Mar 18 09:18:53.937681 master-0 kubenswrapper[28766]: I0318 09:18:53.937627 28766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-jj4w6/must-gather-zz8pq" event={"ID":"be938f8f-9219-45ee-b582-fd218cd52c08","Type":"ContainerStarted","Data":"b37da62d2f93c24590e741671d12f97c5e6a10f07c3ddeae1cabc881d9571815"} Mar 18 09:18:55.960289 master-0 kubenswrapper[28766]: I0318 09:18:55.959296 28766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-jj4w6/must-gather-c77mn" event={"ID":"5b16a1b5-290c-40eb-a81a-61ec3bc337a1","Type":"ContainerStarted","Data":"b8f2303bbaa1dd2136a8f69cfeee9a1935934ddaa30d60f2bbe5d1bb76fd33dd"} Mar 18 09:18:55.960289 master-0 kubenswrapper[28766]: I0318 09:18:55.959370 28766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-jj4w6/must-gather-c77mn" event={"ID":"5b16a1b5-290c-40eb-a81a-61ec3bc337a1","Type":"ContainerStarted","Data":"37ca28227352525aaafd595a700517ca6a8e1a98bbd6ea73215f3efdd76fcd9b"} Mar 18 09:18:55.994029 master-0 kubenswrapper[28766]: I0318 09:18:55.992060 28766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-jj4w6/must-gather-c77mn" podStartSLOduration=2.7618597879999998 podStartE2EDuration="3.992038315s" podCreationTimestamp="2026-03-18 09:18:52 +0000 UTC" firstStartedPulling="2026-03-18 09:18:53.830722949 +0000 UTC m=+886.844981615" lastFinishedPulling="2026-03-18 09:18:55.060901486 +0000 UTC m=+888.075160142" observedRunningTime="2026-03-18 09:18:55.979390205 +0000 UTC m=+888.993648871" watchObservedRunningTime="2026-03-18 09:18:55.992038315 +0000 UTC m=+889.006296981" Mar 18 09:18:59.043204 master-0 kubenswrapper[28766]: I0318 09:18:59.043132 28766 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-cluster-version_cluster-version-operator-7d58488df-8btcx_8d89af2f-47f5-4ee5-a790-e162c2dee3ce/cluster-version-operator/0.log" Mar 18 09:19:00.656884 master-0 kubenswrapper[28766]: I0318 09:19:00.656262 28766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-86f58fcf4-tv685_d8cbb83c-f7ff-44b0-afe0-dca20fab3ebf/nmstate-console-plugin/0.log" Mar 18 09:19:00.672282 master-0 kubenswrapper[28766]: I0318 09:19:00.672012 28766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-sngqk_f17c8a3e-2a67-4ca4-80d6-ae4177b03359/nmstate-handler/0.log" Mar 18 09:19:00.692266 master-0 kubenswrapper[28766]: I0318 09:19:00.692216 28766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-9b8c8685d-882nf_0a7f9328-e0f5-4f65-83a4-0d5d76b9a1ae/nmstate-metrics/0.log" Mar 18 09:19:00.746512 master-0 kubenswrapper[28766]: I0318 09:19:00.746467 28766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-9b8c8685d-882nf_0a7f9328-e0f5-4f65-83a4-0d5d76b9a1ae/kube-rbac-proxy/0.log" Mar 18 09:19:00.763373 master-0 kubenswrapper[28766]: I0318 09:19:00.762017 28766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-796d4cfff4-p6fqw_c41ac234-9a6f-410f-b4f1-1825ada66e14/nmstate-operator/0.log" Mar 18 09:19:00.775926 master-0 kubenswrapper[28766]: I0318 09:19:00.775386 28766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-5f558f5558-6r6gh_2de37539-f3d7-47cd-a12e-4285ac38f0db/nmstate-webhook/0.log" Mar 18 09:19:00.929980 master-0 kubenswrapper[28766]: I0318 09:19:00.924971 28766 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-7bb4cc7c98-vthq7_79b20ae6-1660-40b8-9a44-2a3989042d82/controller/0.log" Mar 18 09:19:00.931929 master-0 kubenswrapper[28766]: I0318 09:19:00.931019 28766 
log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-7bb4cc7c98-vthq7_79b20ae6-1660-40b8-9a44-2a3989042d82/kube-rbac-proxy/0.log" Mar 18 09:19:00.971890 master-0 kubenswrapper[28766]: I0318 09:19:00.971515 28766 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-czkll_559d9b30-44e9-4cdd-8c46-7cab6e8f2285/controller/0.log" Mar 18 09:19:01.033893 master-0 kubenswrapper[28766]: I0318 09:19:01.032914 28766 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-czkll_559d9b30-44e9-4cdd-8c46-7cab6e8f2285/frr/0.log" Mar 18 09:19:01.055877 master-0 kubenswrapper[28766]: I0318 09:19:01.049941 28766 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-czkll_559d9b30-44e9-4cdd-8c46-7cab6e8f2285/reloader/0.log" Mar 18 09:19:01.063497 master-0 kubenswrapper[28766]: I0318 09:19:01.059186 28766 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-czkll_559d9b30-44e9-4cdd-8c46-7cab6e8f2285/frr-metrics/0.log" Mar 18 09:19:01.068590 master-0 kubenswrapper[28766]: I0318 09:19:01.068537 28766 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-czkll_559d9b30-44e9-4cdd-8c46-7cab6e8f2285/kube-rbac-proxy/0.log" Mar 18 09:19:01.085897 master-0 kubenswrapper[28766]: I0318 09:19:01.085041 28766 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-czkll_559d9b30-44e9-4cdd-8c46-7cab6e8f2285/kube-rbac-proxy-frr/0.log" Mar 18 09:19:01.098901 master-0 kubenswrapper[28766]: I0318 09:19:01.098475 28766 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-czkll_559d9b30-44e9-4cdd-8c46-7cab6e8f2285/cp-frr-files/0.log" Mar 18 09:19:01.120757 master-0 kubenswrapper[28766]: I0318 09:19:01.119114 28766 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-czkll_559d9b30-44e9-4cdd-8c46-7cab6e8f2285/cp-reloader/0.log" Mar 18 09:19:01.130328 
master-0 kubenswrapper[28766]: I0318 09:19:01.130273 28766 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-czkll_559d9b30-44e9-4cdd-8c46-7cab6e8f2285/cp-metrics/0.log" Mar 18 09:19:01.173875 master-0 kubenswrapper[28766]: I0318 09:19:01.164242 28766 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-bcc4b6f68-nqlq6_54a7f143-e51f-475d-9c2d-21f1c3979705/frr-k8s-webhook-server/0.log" Mar 18 09:19:01.220356 master-0 kubenswrapper[28766]: I0318 09:19:01.218329 28766 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-65f5d58555-j282b_27eeeb04-faa9-4d56-81fa-a890a202cdd4/manager/0.log" Mar 18 09:19:01.235462 master-0 kubenswrapper[28766]: I0318 09:19:01.235416 28766 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-88b68f8d8-w9g9k_f8b3af47-0f7b-422a-905a-0e3e139e2f7e/webhook-server/0.log" Mar 18 09:19:01.326058 master-0 kubenswrapper[28766]: I0318 09:19:01.326007 28766 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-g7bjn_8f4f0018-7ed3-48a7-bb99-0e6de8fc38fc/speaker/0.log" Mar 18 09:19:01.335517 master-0 kubenswrapper[28766]: I0318 09:19:01.334871 28766 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-g7bjn_8f4f0018-7ed3-48a7-bb99-0e6de8fc38fc/kube-rbac-proxy/0.log" Mar 18 09:19:04.022304 master-0 kubenswrapper[28766]: I0318 09:19:04.022248 28766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_094204df314fe45bd5af12ca1b4622bb/etcdctl/0.log" Mar 18 09:19:04.100981 master-0 kubenswrapper[28766]: I0318 09:19:04.100937 28766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication_oauth-openshift-79657f7847-bxc9l_a3e7d74a-e02d-419b-b85a-ee0304f06ad4/oauth-openshift/0.log" Mar 18 09:19:04.127735 master-0 kubenswrapper[28766]: I0318 09:19:04.126295 28766 
log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_094204df314fe45bd5af12ca1b4622bb/etcd/0.log" Mar 18 09:19:04.148806 master-0 kubenswrapper[28766]: I0318 09:19:04.148762 28766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_094204df314fe45bd5af12ca1b4622bb/etcd-metrics/0.log" Mar 18 09:19:04.182342 master-0 kubenswrapper[28766]: I0318 09:19:04.182294 28766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_094204df314fe45bd5af12ca1b4622bb/etcd-readyz/0.log" Mar 18 09:19:04.212174 master-0 kubenswrapper[28766]: I0318 09:19:04.210709 28766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_094204df314fe45bd5af12ca1b4622bb/etcd-rev/0.log" Mar 18 09:19:04.231247 master-0 kubenswrapper[28766]: I0318 09:19:04.231195 28766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_094204df314fe45bd5af12ca1b4622bb/setup/0.log" Mar 18 09:19:04.250702 master-0 kubenswrapper[28766]: I0318 09:19:04.250662 28766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_094204df314fe45bd5af12ca1b4622bb/etcd-ensure-env-vars/0.log" Mar 18 09:19:04.270980 master-0 kubenswrapper[28766]: I0318 09:19:04.270928 28766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_094204df314fe45bd5af12ca1b4622bb/etcd-resources-copy/0.log" Mar 18 09:19:04.320603 master-0 kubenswrapper[28766]: I0318 09:19:04.320479 28766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_installer-1-master-0_1ecff6b2-dbd4-4366-873b-2170d0b76c0f/installer/0.log" Mar 18 09:19:04.363505 master-0 kubenswrapper[28766]: I0318 09:19:04.363467 28766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_installer-2-master-0_005a0b4c-8e2d-4483-98e9-55badf7099c5/installer/0.log" Mar 18 09:19:05.410492 master-0 kubenswrapper[28766]: I0318 
09:19:05.409370 28766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication-operator_authentication-operator-5885bfd7f4-5g8tz_c110b293-2c6b-496b-b015-23aada98cb4b/authentication-operator/0.log" Mar 18 09:19:05.434162 master-0 kubenswrapper[28766]: I0318 09:19:05.432768 28766 log.go:25] "Finished parsing log file" path="/var/log/pods/assisted-installer_assisted-installer-controller-zq2ds_97215428-2d5d-460f-947c-f2a490bc428d/assisted-installer-controller/0.log" Mar 18 09:19:05.436790 master-0 kubenswrapper[28766]: I0318 09:19:05.436754 28766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication-operator_authentication-operator-5885bfd7f4-5g8tz_c110b293-2c6b-496b-b015-23aada98cb4b/authentication-operator/1.log" Mar 18 09:19:06.298810 master-0 kubenswrapper[28766]: I0318 09:19:06.298757 28766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress_router-default-7dcf5569b5-8sbgd_ad4cf9b2-4e66-4921-a30c-7b659bff06ab/router/4.log" Mar 18 09:19:06.307432 master-0 kubenswrapper[28766]: I0318 09:19:06.307392 28766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress_router-default-7dcf5569b5-8sbgd_ad4cf9b2-4e66-4921-a30c-7b659bff06ab/router/3.log" Mar 18 09:19:06.371743 master-0 kubenswrapper[28766]: E0318 09:19:06.371654 28766 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 192.168.32.10:48898->192.168.32.10:37755: write tcp 192.168.32.10:48898->192.168.32.10:37755: write: broken pipe Mar 18 09:19:06.909020 master-0 kubenswrapper[28766]: I0318 09:19:06.908970 28766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-oauth-apiserver_apiserver-556c8fbcff-5shs8_2700f537-8f31-4380-a527-3e697a8122cc/oauth-apiserver/0.log" Mar 18 09:19:06.919921 master-0 kubenswrapper[28766]: I0318 09:19:06.919884 28766 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-oauth-apiserver_apiserver-556c8fbcff-5shs8_2700f537-8f31-4380-a527-3e697a8122cc/fix-audit-permissions/0.log" Mar 18 09:19:07.133921 master-0 kubenswrapper[28766]: I0318 09:19:07.133813 28766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-jj4w6/must-gather-zz8pq" event={"ID":"be938f8f-9219-45ee-b582-fd218cd52c08","Type":"ContainerStarted","Data":"2e246d82088cbb001c6db6e3d85362cf9217290e8d537df37f8b8c59d9de603f"} Mar 18 09:19:07.448240 master-0 kubenswrapper[28766]: I0318 09:19:07.448148 28766 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-jj4w6/perf-node-gather-daemonset-8ckb7"] Mar 18 09:19:07.449157 master-0 kubenswrapper[28766]: I0318 09:19:07.449132 28766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-jj4w6/perf-node-gather-daemonset-8ckb7" Mar 18 09:19:07.467002 master-0 kubenswrapper[28766]: I0318 09:19:07.466920 28766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-jj4w6/perf-node-gather-daemonset-8ckb7"] Mar 18 09:19:07.470544 master-0 kubenswrapper[28766]: I0318 09:19:07.470496 28766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-autoscaler-operator-866dc4744-lxj7x_ffc5379c-651f-490c-90f4-1285b9093596/kube-rbac-proxy/0.log" Mar 18 09:19:07.533692 master-0 kubenswrapper[28766]: I0318 09:19:07.533599 28766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-autoscaler-operator-866dc4744-lxj7x_ffc5379c-651f-490c-90f4-1285b9093596/cluster-autoscaler-operator/0.log" Mar 18 09:19:07.548847 master-0 kubenswrapper[28766]: I0318 09:19:07.548802 28766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-6f69995874-cf6qn_97730ec2-e6f1-4f8c-b85c-3c10623d06ce/cluster-baremetal-operator/2.log" Mar 18 09:19:07.549780 master-0 kubenswrapper[28766]: I0318 09:19:07.549727 28766 
log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-6f69995874-cf6qn_97730ec2-e6f1-4f8c-b85c-3c10623d06ce/cluster-baremetal-operator/1.log" Mar 18 09:19:07.561340 master-0 kubenswrapper[28766]: I0318 09:19:07.561293 28766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-6f69995874-cf6qn_97730ec2-e6f1-4f8c-b85c-3c10623d06ce/baremetal-kube-rbac-proxy/0.log" Mar 18 09:19:07.569278 master-0 kubenswrapper[28766]: I0318 09:19:07.569225 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proc\" (UniqueName: \"kubernetes.io/host-path/ce70df41-e4f0-428f-a447-c6f7ae433acf-proc\") pod \"perf-node-gather-daemonset-8ckb7\" (UID: \"ce70df41-e4f0-428f-a447-c6f7ae433acf\") " pod="openshift-must-gather-jj4w6/perf-node-gather-daemonset-8ckb7" Mar 18 09:19:07.569419 master-0 kubenswrapper[28766]: I0318 09:19:07.569323 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/ce70df41-e4f0-428f-a447-c6f7ae433acf-sys\") pod \"perf-node-gather-daemonset-8ckb7\" (UID: \"ce70df41-e4f0-428f-a447-c6f7ae433acf\") " pod="openshift-must-gather-jj4w6/perf-node-gather-daemonset-8ckb7" Mar 18 09:19:07.569462 master-0 kubenswrapper[28766]: I0318 09:19:07.569422 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"podres\" (UniqueName: \"kubernetes.io/host-path/ce70df41-e4f0-428f-a447-c6f7ae433acf-podres\") pod \"perf-node-gather-daemonset-8ckb7\" (UID: \"ce70df41-e4f0-428f-a447-c6f7ae433acf\") " pod="openshift-must-gather-jj4w6/perf-node-gather-daemonset-8ckb7" Mar 18 09:19:07.569462 master-0 kubenswrapper[28766]: I0318 09:19:07.569444 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vh9wn\" (UniqueName: 
\"kubernetes.io/projected/ce70df41-e4f0-428f-a447-c6f7ae433acf-kube-api-access-vh9wn\") pod \"perf-node-gather-daemonset-8ckb7\" (UID: \"ce70df41-e4f0-428f-a447-c6f7ae433acf\") " pod="openshift-must-gather-jj4w6/perf-node-gather-daemonset-8ckb7" Mar 18 09:19:07.569538 master-0 kubenswrapper[28766]: I0318 09:19:07.569469 28766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ce70df41-e4f0-428f-a447-c6f7ae433acf-lib-modules\") pod \"perf-node-gather-daemonset-8ckb7\" (UID: \"ce70df41-e4f0-428f-a447-c6f7ae433acf\") " pod="openshift-must-gather-jj4w6/perf-node-gather-daemonset-8ckb7" Mar 18 09:19:07.585136 master-0 kubenswrapper[28766]: I0318 09:19:07.585086 28766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-6f97756bc8-z9n9c_d6fe8ee6-737e-438a-8d9d-1ec712f6bacf/control-plane-machine-set-operator/0.log" Mar 18 09:19:07.587762 master-0 kubenswrapper[28766]: I0318 09:19:07.587736 28766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-6f97756bc8-z9n9c_d6fe8ee6-737e-438a-8d9d-1ec712f6bacf/control-plane-machine-set-operator/1.log" Mar 18 09:19:07.607538 master-0 kubenswrapper[28766]: I0318 09:19:07.607510 28766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-6fbb6cf6f9-z6nw9_b9768e50-c883-47b0-b319-851fa53ac19a/kube-rbac-proxy/0.log" Mar 18 09:19:07.624997 master-0 kubenswrapper[28766]: I0318 09:19:07.624963 28766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-6fbb6cf6f9-z6nw9_b9768e50-c883-47b0-b319-851fa53ac19a/machine-api-operator/0.log" Mar 18 09:19:07.670965 master-0 kubenswrapper[28766]: I0318 09:19:07.670924 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"podres\" (UniqueName: 
\"kubernetes.io/host-path/ce70df41-e4f0-428f-a447-c6f7ae433acf-podres\") pod \"perf-node-gather-daemonset-8ckb7\" (UID: \"ce70df41-e4f0-428f-a447-c6f7ae433acf\") " pod="openshift-must-gather-jj4w6/perf-node-gather-daemonset-8ckb7"
Mar 18 09:19:07.671207 master-0 kubenswrapper[28766]: I0318 09:19:07.671190 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vh9wn\" (UniqueName: \"kubernetes.io/projected/ce70df41-e4f0-428f-a447-c6f7ae433acf-kube-api-access-vh9wn\") pod \"perf-node-gather-daemonset-8ckb7\" (UID: \"ce70df41-e4f0-428f-a447-c6f7ae433acf\") " pod="openshift-must-gather-jj4w6/perf-node-gather-daemonset-8ckb7"
Mar 18 09:19:07.671291 master-0 kubenswrapper[28766]: I0318 09:19:07.671276 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ce70df41-e4f0-428f-a447-c6f7ae433acf-lib-modules\") pod \"perf-node-gather-daemonset-8ckb7\" (UID: \"ce70df41-e4f0-428f-a447-c6f7ae433acf\") " pod="openshift-must-gather-jj4w6/perf-node-gather-daemonset-8ckb7"
Mar 18 09:19:07.671398 master-0 kubenswrapper[28766]: I0318 09:19:07.671383 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proc\" (UniqueName: \"kubernetes.io/host-path/ce70df41-e4f0-428f-a447-c6f7ae433acf-proc\") pod \"perf-node-gather-daemonset-8ckb7\" (UID: \"ce70df41-e4f0-428f-a447-c6f7ae433acf\") " pod="openshift-must-gather-jj4w6/perf-node-gather-daemonset-8ckb7"
Mar 18 09:19:07.671523 master-0 kubenswrapper[28766]: I0318 09:19:07.671124 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"podres\" (UniqueName: \"kubernetes.io/host-path/ce70df41-e4f0-428f-a447-c6f7ae433acf-podres\") pod \"perf-node-gather-daemonset-8ckb7\" (UID: \"ce70df41-e4f0-428f-a447-c6f7ae433acf\") " pod="openshift-must-gather-jj4w6/perf-node-gather-daemonset-8ckb7"
Mar 18 09:19:07.671643 master-0 kubenswrapper[28766]: I0318 09:19:07.671585 28766 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/ce70df41-e4f0-428f-a447-c6f7ae433acf-sys\") pod \"perf-node-gather-daemonset-8ckb7\" (UID: \"ce70df41-e4f0-428f-a447-c6f7ae433acf\") " pod="openshift-must-gather-jj4w6/perf-node-gather-daemonset-8ckb7"
Mar 18 09:19:07.671700 master-0 kubenswrapper[28766]: I0318 09:19:07.671654 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proc\" (UniqueName: \"kubernetes.io/host-path/ce70df41-e4f0-428f-a447-c6f7ae433acf-proc\") pod \"perf-node-gather-daemonset-8ckb7\" (UID: \"ce70df41-e4f0-428f-a447-c6f7ae433acf\") " pod="openshift-must-gather-jj4w6/perf-node-gather-daemonset-8ckb7"
Mar 18 09:19:07.671813 master-0 kubenswrapper[28766]: I0318 09:19:07.671798 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/ce70df41-e4f0-428f-a447-c6f7ae433acf-sys\") pod \"perf-node-gather-daemonset-8ckb7\" (UID: \"ce70df41-e4f0-428f-a447-c6f7ae433acf\") " pod="openshift-must-gather-jj4w6/perf-node-gather-daemonset-8ckb7"
Mar 18 09:19:07.671992 master-0 kubenswrapper[28766]: I0318 09:19:07.671968 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ce70df41-e4f0-428f-a447-c6f7ae433acf-lib-modules\") pod \"perf-node-gather-daemonset-8ckb7\" (UID: \"ce70df41-e4f0-428f-a447-c6f7ae433acf\") " pod="openshift-must-gather-jj4w6/perf-node-gather-daemonset-8ckb7"
Mar 18 09:19:07.687317 master-0 kubenswrapper[28766]: I0318 09:19:07.687248 28766 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vh9wn\" (UniqueName: \"kubernetes.io/projected/ce70df41-e4f0-428f-a447-c6f7ae433acf-kube-api-access-vh9wn\") pod \"perf-node-gather-daemonset-8ckb7\" (UID: \"ce70df41-e4f0-428f-a447-c6f7ae433acf\") " pod="openshift-must-gather-jj4w6/perf-node-gather-daemonset-8ckb7"
Mar 18 09:19:07.764531 master-0 kubenswrapper[28766]: I0318 09:19:07.764471 28766 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-jj4w6/perf-node-gather-daemonset-8ckb7"
Mar 18 09:19:08.165110 master-0 kubenswrapper[28766]: I0318 09:19:08.164983 28766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-jj4w6/must-gather-zz8pq" event={"ID":"be938f8f-9219-45ee-b582-fd218cd52c08","Type":"ContainerStarted","Data":"bac102501e26b069560ab0f91b6e90db600434b60e53da7ba2fe1419579a1791"}
Mar 18 09:19:08.238664 master-0 kubenswrapper[28766]: I0318 09:19:08.238580 28766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-jj4w6/must-gather-zz8pq" podStartSLOduration=3.304265599 podStartE2EDuration="16.238561527s" podCreationTimestamp="2026-03-18 09:18:52 +0000 UTC" firstStartedPulling="2026-03-18 09:18:53.765165359 +0000 UTC m=+886.779424025" lastFinishedPulling="2026-03-18 09:19:06.699461277 +0000 UTC m=+899.713719953" observedRunningTime="2026-03-18 09:19:08.18912293 +0000 UTC m=+901.203381596" watchObservedRunningTime="2026-03-18 09:19:08.238561527 +0000 UTC m=+901.252820183"
Mar 18 09:19:08.239259 master-0 kubenswrapper[28766]: I0318 09:19:08.239241 28766 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-jj4w6/perf-node-gather-daemonset-8ckb7"]
Mar 18 09:19:08.536973 master-0 kubenswrapper[28766]: I0318 09:19:08.536923 28766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-7dff898856-9xtls_ccf74af5-d4fd-4ed3-9784-42397ea798c5/cluster-cloud-controller-manager/0.log"
Mar 18 09:19:08.537298 master-0 kubenswrapper[28766]: I0318 09:19:08.537257 28766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-7dff898856-9xtls_ccf74af5-d4fd-4ed3-9784-42397ea798c5/cluster-cloud-controller-manager/1.log"
Mar 18 09:19:08.553494 master-0 kubenswrapper[28766]: I0318 09:19:08.553440 28766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-7dff898856-9xtls_ccf74af5-d4fd-4ed3-9784-42397ea798c5/config-sync-controllers/0.log"
Mar 18 09:19:08.556005 master-0 kubenswrapper[28766]: I0318 09:19:08.555952 28766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-7dff898856-9xtls_ccf74af5-d4fd-4ed3-9784-42397ea798c5/config-sync-controllers/1.log"
Mar 18 09:19:08.585840 master-0 kubenswrapper[28766]: I0318 09:19:08.585753 28766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-7dff898856-9xtls_ccf74af5-d4fd-4ed3-9784-42397ea798c5/kube-rbac-proxy/0.log"
Mar 18 09:19:09.188919 master-0 kubenswrapper[28766]: I0318 09:19:09.188861 28766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-jj4w6/perf-node-gather-daemonset-8ckb7" event={"ID":"ce70df41-e4f0-428f-a447-c6f7ae433acf","Type":"ContainerStarted","Data":"8485170264a004cf17dd966bbe85ae2d3850190186e7d3a73811dd0b23cbc601"}
Mar 18 09:19:09.188919 master-0 kubenswrapper[28766]: I0318 09:19:09.188922 28766 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-jj4w6/perf-node-gather-daemonset-8ckb7" event={"ID":"ce70df41-e4f0-428f-a447-c6f7ae433acf","Type":"ContainerStarted","Data":"30bde49bc900349e0712a7e8ef673e4b4fe206859f388de021fd655c9c33396e"}
Mar 18 09:19:09.190192 master-0 kubenswrapper[28766]: I0318 09:19:09.190144 28766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-must-gather-jj4w6/perf-node-gather-daemonset-8ckb7"
Mar 18 09:19:09.213991 master-0 kubenswrapper[28766]: I0318 09:19:09.213921 28766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-jj4w6/perf-node-gather-daemonset-8ckb7" podStartSLOduration=2.213905072 podStartE2EDuration="2.213905072s" podCreationTimestamp="2026-03-18 09:19:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 09:19:09.212829643 +0000 UTC m=+902.227088309" watchObservedRunningTime="2026-03-18 09:19:09.213905072 +0000 UTC m=+902.228163738"
Mar 18 09:19:10.070589 master-0 kubenswrapper[28766]: I0318 09:19:10.070535 28766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-credential-operator_cloud-credential-operator-744f9dbf77-v8ft8_e64ea71a-1e89-409a-9607-4d3cea093643/kube-rbac-proxy/0.log"
Mar 18 09:19:10.105075 master-0 kubenswrapper[28766]: I0318 09:19:10.104984 28766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-credential-operator_cloud-credential-operator-744f9dbf77-v8ft8_e64ea71a-1e89-409a-9607-4d3cea093643/cloud-credential-operator/0.log"
Mar 18 09:19:11.436158 master-0 kubenswrapper[28766]: I0318 09:19:11.436081 28766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-config-operator_openshift-config-operator-95bf4f4d-7kfrh_573d3a02-e395-4816-963a-cd614ef53f75/openshift-config-operator/3.log"
Mar 18 09:19:11.445885 master-0 kubenswrapper[28766]: I0318 09:19:11.445801 28766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-config-operator_openshift-config-operator-95bf4f4d-7kfrh_573d3a02-e395-4816-963a-cd614ef53f75/openshift-config-operator/4.log"
Mar 18 09:19:11.457289 master-0 kubenswrapper[28766]: I0318 09:19:11.457219 28766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-config-operator_openshift-config-operator-95bf4f4d-7kfrh_573d3a02-e395-4816-963a-cd614ef53f75/openshift-api/0.log"
Mar 18 09:19:12.161820 master-0 kubenswrapper[28766]: I0318 09:19:12.161761 28766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console-operator_console-operator-76b6568d85-hmnwh_c7d313bd-ea1e-4ebf-a6a9-4e17ae4e4075/console-operator/1.log"
Mar 18 09:19:12.193459 master-0 kubenswrapper[28766]: I0318 09:19:12.193389 28766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console-operator_console-operator-76b6568d85-hmnwh_c7d313bd-ea1e-4ebf-a6a9-4e17ae4e4075/console-operator/2.log"
Mar 18 09:19:12.770694 master-0 kubenswrapper[28766]: I0318 09:19:12.770636 28766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-bd9fbc6c9-5fb2s_9c8fd6d0-1769-42fe-9d88-26640a4a3c2f/console/0.log"
Mar 18 09:19:12.789406 master-0 kubenswrapper[28766]: I0318 09:19:12.789358 28766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_downloads-66b8ffb895-mjnxk_0aeda1f0-6438-4d96-becd-e0cd833e99d5/download-server/0.log"
Mar 18 09:19:13.025401 master-0 kubenswrapper[28766]: I0318 09:19:13.025256 28766 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-7bb4cc7c98-vthq7_79b20ae6-1660-40b8-9a44-2a3989042d82/controller/0.log"
Mar 18 09:19:13.032923 master-0 kubenswrapper[28766]: I0318 09:19:13.032801 28766 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-7bb4cc7c98-vthq7_79b20ae6-1660-40b8-9a44-2a3989042d82/kube-rbac-proxy/0.log"
Mar 18 09:19:13.051157 master-0 kubenswrapper[28766]: I0318 09:19:13.051103 28766 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-czkll_559d9b30-44e9-4cdd-8c46-7cab6e8f2285/controller/0.log"
Mar 18 09:19:13.133433 master-0 kubenswrapper[28766]: I0318 09:19:13.133396 28766 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-czkll_559d9b30-44e9-4cdd-8c46-7cab6e8f2285/frr/0.log"
Mar 18 09:19:13.141796 master-0 kubenswrapper[28766]: I0318 09:19:13.141759 28766 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-czkll_559d9b30-44e9-4cdd-8c46-7cab6e8f2285/reloader/0.log"
Mar 18 09:19:13.152738 master-0 kubenswrapper[28766]: I0318 09:19:13.152686 28766 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-czkll_559d9b30-44e9-4cdd-8c46-7cab6e8f2285/frr-metrics/0.log"
Mar 18 09:19:13.162159 master-0 kubenswrapper[28766]: I0318 09:19:13.162113 28766 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-czkll_559d9b30-44e9-4cdd-8c46-7cab6e8f2285/kube-rbac-proxy/0.log"
Mar 18 09:19:13.170463 master-0 kubenswrapper[28766]: I0318 09:19:13.170407 28766 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-czkll_559d9b30-44e9-4cdd-8c46-7cab6e8f2285/kube-rbac-proxy-frr/0.log"
Mar 18 09:19:13.177313 master-0 kubenswrapper[28766]: I0318 09:19:13.177277 28766 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-czkll_559d9b30-44e9-4cdd-8c46-7cab6e8f2285/cp-frr-files/0.log"
Mar 18 09:19:13.188005 master-0 kubenswrapper[28766]: I0318 09:19:13.187968 28766 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-czkll_559d9b30-44e9-4cdd-8c46-7cab6e8f2285/cp-reloader/0.log"
Mar 18 09:19:13.193340 master-0 kubenswrapper[28766]: I0318 09:19:13.193302 28766 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-czkll_559d9b30-44e9-4cdd-8c46-7cab6e8f2285/cp-metrics/0.log"
Mar 18 09:19:13.203369 master-0 kubenswrapper[28766]: I0318 09:19:13.203325 28766 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-bcc4b6f68-nqlq6_54a7f143-e51f-475d-9c2d-21f1c3979705/frr-k8s-webhook-server/0.log"
Mar 18 09:19:13.236560 master-0 kubenswrapper[28766]: I0318 09:19:13.236516 28766 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-65f5d58555-j282b_27eeeb04-faa9-4d56-81fa-a890a202cdd4/manager/0.log"
Mar 18 09:19:13.250067 master-0 kubenswrapper[28766]: I0318 09:19:13.248465 28766 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-88b68f8d8-w9g9k_f8b3af47-0f7b-422a-905a-0e3e139e2f7e/webhook-server/0.log"
Mar 18 09:19:13.322015 master-0 kubenswrapper[28766]: I0318 09:19:13.321881 28766 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-g7bjn_8f4f0018-7ed3-48a7-bb99-0e6de8fc38fc/speaker/0.log"
Mar 18 09:19:13.328944 master-0 kubenswrapper[28766]: I0318 09:19:13.328901 28766 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-g7bjn_8f4f0018-7ed3-48a7-bb99-0e6de8fc38fc/kube-rbac-proxy/0.log"
Mar 18 09:19:13.495308 master-0 kubenswrapper[28766]: I0318 09:19:13.494685 28766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_cluster-storage-operator-7d87854d6-srhr6_fc5a9875-d97e-4371-a15d-a1f43b85abce/cluster-storage-operator/0.log"
Mar 18 09:19:13.516708 master-0 kubenswrapper[28766]: I0318 09:19:13.516668 28766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-64854d9cff-khm5n_29ba6765-61c9-4f78-8f44-570418000c5c/snapshot-controller/3.log"
Mar 18 09:19:13.517035 master-0 kubenswrapper[28766]: I0318 09:19:13.516794 28766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-64854d9cff-khm5n_29ba6765-61c9-4f78-8f44-570418000c5c/snapshot-controller/4.log"
Mar 18 09:19:13.545599 master-0 kubenswrapper[28766]: I0318 09:19:13.545539 28766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-operator-5f5d689c6b-j8kgj_6fb1f871-9c24-48a1-a15a-a636b5bb687d/csi-snapshot-controller-operator/0.log"
Mar 18 09:19:14.103034 master-0 kubenswrapper[28766]: I0318 09:19:14.102979 28766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-dns-operator_dns-operator-9c5679d8f-b9pn7_e025d334-20e7-491f-8027-194251398747/dns-operator/0.log"
Mar 18 09:19:14.116042 master-0 kubenswrapper[28766]: I0318 09:19:14.115988 28766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-dns-operator_dns-operator-9c5679d8f-b9pn7_e025d334-20e7-491f-8027-194251398747/kube-rbac-proxy/0.log"
Mar 18 09:19:14.713767 master-0 kubenswrapper[28766]: I0318 09:19:14.713722 28766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-dns_dns-default-ck7b5_b35ab145-16a7-4ef1-86e8-0afb6ff469fd/dns/0.log"
Mar 18 09:19:14.728742 master-0 kubenswrapper[28766]: I0318 09:19:14.728694 28766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-dns_dns-default-ck7b5_b35ab145-16a7-4ef1-86e8-0afb6ff469fd/kube-rbac-proxy/0.log"
Mar 18 09:19:14.742301 master-0 kubenswrapper[28766]: I0318 09:19:14.742263 28766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-dns_node-resolver-zwl77_68465463-5f2a-4e74-9c34-2706a185f7ea/dns-node-resolver/0.log"
Mar 18 09:19:15.338163 master-0 kubenswrapper[28766]: I0318 09:19:15.337181 28766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd-operator_etcd-operator-8544cbcf9c-f4jvq_939efa41-8f40-4f91-bee4-0425aead9760/etcd-operator/1.log"
Mar 18 09:19:15.354666 master-0 kubenswrapper[28766]: I0318 09:19:15.354612 28766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd-operator_etcd-operator-8544cbcf9c-f4jvq_939efa41-8f40-4f91-bee4-0425aead9760/etcd-operator/0.log"
Mar 18 09:19:15.527259 master-0 kubenswrapper[28766]: I0318 09:19:15.527203 28766 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-5jrbj_2c120e60-9c36-4f75-b05e-dec3101889a4/registry-server/0.log"
Mar 18 09:19:15.945386 master-0 kubenswrapper[28766]: I0318 09:19:15.945311 28766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_094204df314fe45bd5af12ca1b4622bb/etcdctl/0.log"
Mar 18 09:19:16.051756 master-0 kubenswrapper[28766]: I0318 09:19:16.051679 28766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_094204df314fe45bd5af12ca1b4622bb/etcd/0.log"
Mar 18 09:19:16.067917 master-0 kubenswrapper[28766]: I0318 09:19:16.067867 28766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_094204df314fe45bd5af12ca1b4622bb/etcd-metrics/0.log"
Mar 18 09:19:16.080309 master-0 kubenswrapper[28766]: I0318 09:19:16.080258 28766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_094204df314fe45bd5af12ca1b4622bb/etcd-readyz/0.log"
Mar 18 09:19:16.098229 master-0 kubenswrapper[28766]: I0318 09:19:16.098165 28766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_094204df314fe45bd5af12ca1b4622bb/etcd-rev/0.log"
Mar 18 09:19:16.114800 master-0 kubenswrapper[28766]: I0318 09:19:16.114743 28766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_094204df314fe45bd5af12ca1b4622bb/setup/0.log"
Mar 18 09:19:16.161668 master-0 kubenswrapper[28766]: I0318 09:19:16.159293 28766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_094204df314fe45bd5af12ca1b4622bb/etcd-ensure-env-vars/0.log"
Mar 18 09:19:16.180053 master-0 kubenswrapper[28766]: I0318 09:19:16.179997 28766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_094204df314fe45bd5af12ca1b4622bb/etcd-resources-copy/0.log"
Mar 18 09:19:16.228585 master-0 kubenswrapper[28766]: I0318 09:19:16.228441 28766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_installer-1-master-0_1ecff6b2-dbd4-4366-873b-2170d0b76c0f/installer/0.log"
Mar 18 09:19:16.273134 master-0 kubenswrapper[28766]: I0318 09:19:16.273080 28766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_installer-2-master-0_005a0b4c-8e2d-4483-98e9-55badf7099c5/installer/0.log"
Mar 18 09:19:17.009670 master-0 kubenswrapper[28766]: I0318 09:19:17.009614 28766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-image-registry_cluster-image-registry-operator-5549dc66cb-vxsth_7962fb40-1170-4c00-b1bf-92966aeae807/cluster-image-registry-operator/0.log"
Mar 18 09:19:17.023830 master-0 kubenswrapper[28766]: I0318 09:19:17.023792 28766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-image-registry_node-ca-gfsvj_5c921938-2ae3-4b48-838b-14822da65961/node-ca/0.log"
Mar 18 09:19:17.517938 master-0 kubenswrapper[28766]: I0318 09:19:17.517891 28766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-66b84d69b-7h94d_94e9e4fc-bfaa-4ba5-ad6d-fe76b91932e9/ingress-operator/4.log"
Mar 18 09:19:17.528070 master-0 kubenswrapper[28766]: I0318 09:19:17.528008 28766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-66b84d69b-7h94d_94e9e4fc-bfaa-4ba5-ad6d-fe76b91932e9/ingress-operator/5.log"
Mar 18 09:19:17.537999 master-0 kubenswrapper[28766]: I0318 09:19:17.537953 28766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-66b84d69b-7h94d_94e9e4fc-bfaa-4ba5-ad6d-fe76b91932e9/kube-rbac-proxy/0.log"
Mar 18 09:19:17.786715 master-0 kubenswrapper[28766]: I0318 09:19:17.785913 28766 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-must-gather-jj4w6/perf-node-gather-daemonset-8ckb7"
Mar 18 09:19:18.056924 master-0 kubenswrapper[28766]: I0318 09:19:18.056784 28766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-canary_ingress-canary-mpw9b_d0272f7c-bedc-44cf-9790-88e10e6dda03/serve-healthcheck-canary/0.log"
Mar 18 09:19:18.717047 master-0 kubenswrapper[28766]: I0318 09:19:18.716993 28766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-insights_insights-operator-68bf6ff9d6-kv7n5_31a92270-efed-44fe-871e-90333235e85f/insights-operator/0.log"
Mar 18 09:19:20.234642 master-0 kubenswrapper[28766]: I0318 09:19:20.234580 28766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_alertmanager-main-0_aaac568a-d210-428c-aef8-a9615d21e86e/alertmanager/0.log"
Mar 18 09:19:20.250563 master-0 kubenswrapper[28766]: I0318 09:19:20.250500 28766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_alertmanager-main-0_aaac568a-d210-428c-aef8-a9615d21e86e/config-reloader/0.log"
Mar 18 09:19:20.263558 master-0 kubenswrapper[28766]: I0318 09:19:20.263499 28766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_alertmanager-main-0_aaac568a-d210-428c-aef8-a9615d21e86e/kube-rbac-proxy-web/0.log"
Mar 18 09:19:20.276871 master-0 kubenswrapper[28766]: I0318 09:19:20.276813 28766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_alertmanager-main-0_aaac568a-d210-428c-aef8-a9615d21e86e/kube-rbac-proxy/0.log"
Mar 18 09:19:20.288847 master-0 kubenswrapper[28766]: I0318 09:19:20.288806 28766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_alertmanager-main-0_aaac568a-d210-428c-aef8-a9615d21e86e/kube-rbac-proxy-metric/0.log"
Mar 18 09:19:20.303074 master-0 kubenswrapper[28766]: I0318 09:19:20.303004 28766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_alertmanager-main-0_aaac568a-d210-428c-aef8-a9615d21e86e/prom-label-proxy/0.log"
Mar 18 09:19:20.315094 master-0 kubenswrapper[28766]: I0318 09:19:20.314996 28766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_alertmanager-main-0_aaac568a-d210-428c-aef8-a9615d21e86e/init-config-reloader/0.log"
Mar 18 09:19:20.366357 master-0 kubenswrapper[28766]: I0318 09:19:20.366297 28766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_cluster-monitoring-operator-58845fbb57-nc7hf_e7b72267-fc08-41ed-a92b-9fca7372aba6/cluster-monitoring-operator/0.log"
Mar 18 09:19:20.384921 master-0 kubenswrapper[28766]: I0318 09:19:20.384505 28766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_kube-state-metrics-7bbc969446-dblgh_91a6fa86-8c58-43bc-a2d4-2b20901269f7/kube-state-metrics/0.log"
Mar 18 09:19:20.401714 master-0 kubenswrapper[28766]: I0318 09:19:20.400796 28766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_kube-state-metrics-7bbc969446-dblgh_91a6fa86-8c58-43bc-a2d4-2b20901269f7/kube-rbac-proxy-main/0.log"
Mar 18 09:19:20.416431 master-0 kubenswrapper[28766]: I0318 09:19:20.416366 28766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_kube-state-metrics-7bbc969446-dblgh_91a6fa86-8c58-43bc-a2d4-2b20901269f7/kube-rbac-proxy-self/0.log"
Mar 18 09:19:20.430552 master-0 kubenswrapper[28766]: I0318 09:19:20.430506 28766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_metrics-server-547c985987-bff72_a67829d2-585d-4140-aaa7-c7551bb714d3/metrics-server/0.log"
Mar 18 09:19:20.455449 master-0 kubenswrapper[28766]: I0318 09:19:20.455405 28766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_monitoring-plugin-9f7b5f8d5-t5nk8_b0fd6d5a-c72f-4c6d-ad2a-5425fb010fcb/monitoring-plugin/0.log"
Mar 18 09:19:20.485666 master-0 kubenswrapper[28766]: I0318 09:19:20.485592 28766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_node-exporter-75szk_4146a62d-e37b-4295-90ca-b23f5e3d1112/node-exporter/0.log"
Mar 18 09:19:20.508950 master-0 kubenswrapper[28766]: I0318 09:19:20.508829 28766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_node-exporter-75szk_4146a62d-e37b-4295-90ca-b23f5e3d1112/kube-rbac-proxy/0.log"
Mar 18 09:19:20.528880 master-0 kubenswrapper[28766]: I0318 09:19:20.528786 28766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_node-exporter-75szk_4146a62d-e37b-4295-90ca-b23f5e3d1112/init-textfile/0.log"
Mar 18 09:19:20.549383 master-0 kubenswrapper[28766]: I0318 09:19:20.549312 28766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_openshift-state-metrics-5dc6c74576-dsq5f_06cbd48a-1f1d-4734-8d57-e1b6824879b6/kube-rbac-proxy-main/0.log"
Mar 18 09:19:20.563341 master-0 kubenswrapper[28766]: I0318 09:19:20.563289 28766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_openshift-state-metrics-5dc6c74576-dsq5f_06cbd48a-1f1d-4734-8d57-e1b6824879b6/kube-rbac-proxy-self/0.log"
Mar 18 09:19:20.586888 master-0 kubenswrapper[28766]: I0318 09:19:20.586657 28766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_openshift-state-metrics-5dc6c74576-dsq5f_06cbd48a-1f1d-4734-8d57-e1b6824879b6/openshift-state-metrics/0.log"
Mar 18 09:19:20.623403 master-0 kubenswrapper[28766]: I0318 09:19:20.623351 28766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_prometheus-k8s-0_b778f3f5-3686-49f7-aa43-93a9d9d2d963/prometheus/0.log"
Mar 18 09:19:20.639005 master-0 kubenswrapper[28766]: I0318 09:19:20.638948 28766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_prometheus-k8s-0_b778f3f5-3686-49f7-aa43-93a9d9d2d963/config-reloader/0.log"
Mar 18 09:19:20.655448 master-0 kubenswrapper[28766]: I0318 09:19:20.655395 28766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_prometheus-k8s-0_b778f3f5-3686-49f7-aa43-93a9d9d2d963/thanos-sidecar/0.log"
Mar 18 09:19:20.670233 master-0 kubenswrapper[28766]: I0318 09:19:20.670183 28766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_prometheus-k8s-0_b778f3f5-3686-49f7-aa43-93a9d9d2d963/kube-rbac-proxy-web/0.log"
Mar 18 09:19:20.690275 master-0 kubenswrapper[28766]: I0318 09:19:20.690241 28766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_prometheus-k8s-0_b778f3f5-3686-49f7-aa43-93a9d9d2d963/kube-rbac-proxy/0.log"
Mar 18 09:19:20.712358 master-0 kubenswrapper[28766]: I0318 09:19:20.712314 28766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_prometheus-k8s-0_b778f3f5-3686-49f7-aa43-93a9d9d2d963/kube-rbac-proxy-thanos/0.log"
Mar 18 09:19:20.728522 master-0 kubenswrapper[28766]: I0318 09:19:20.728438 28766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_prometheus-k8s-0_b778f3f5-3686-49f7-aa43-93a9d9d2d963/init-config-reloader/0.log"
Mar 18 09:19:20.753025 master-0 kubenswrapper[28766]: I0318 09:19:20.752966 28766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_prometheus-operator-6c8df6d4b-8kgdq_d71aa1b9-6eb5-4331-b959-8930e10817b4/prometheus-operator/0.log"
Mar 18 09:19:20.764747 master-0 kubenswrapper[28766]: I0318 09:19:20.764667 28766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_prometheus-operator-6c8df6d4b-8kgdq_d71aa1b9-6eb5-4331-b959-8930e10817b4/kube-rbac-proxy/0.log"
Mar 18 09:19:20.783789 master-0 kubenswrapper[28766]: I0318 09:19:20.783712 28766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_prometheus-operator-admission-webhook-69c6b55594-wkgdb_998cabe9-d479-439f-b1c0-1d8c49aefeb9/prometheus-operator-admission-webhook/0.log"
Mar 18 09:19:20.811800 master-0 kubenswrapper[28766]: I0318 09:19:20.811739 28766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_telemeter-client-5d4d5995f-s5dw8_e5ae1886-f90c-49f4-bf08-055b55dd785a/telemeter-client/0.log"
Mar 18 09:19:20.823954 master-0 kubenswrapper[28766]: I0318 09:19:20.823909 28766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_telemeter-client-5d4d5995f-s5dw8_e5ae1886-f90c-49f4-bf08-055b55dd785a/reload/0.log"
Mar 18 09:19:20.838611 master-0 kubenswrapper[28766]: I0318 09:19:20.838485 28766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_telemeter-client-5d4d5995f-s5dw8_e5ae1886-f90c-49f4-bf08-055b55dd785a/kube-rbac-proxy/0.log"
Mar 18 09:19:20.858470 master-0 kubenswrapper[28766]: I0318 09:19:20.858420 28766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_thanos-querier-59d6555497-hckn8_bcb0b60f-5cc3-4b4b-b209-3d89f2f349ef/thanos-query/0.log"
Mar 18 09:19:20.871792 master-0 kubenswrapper[28766]: I0318 09:19:20.871733 28766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_thanos-querier-59d6555497-hckn8_bcb0b60f-5cc3-4b4b-b209-3d89f2f349ef/kube-rbac-proxy-web/0.log"
Mar 18 09:19:20.883354 master-0 kubenswrapper[28766]: I0318 09:19:20.883307 28766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_thanos-querier-59d6555497-hckn8_bcb0b60f-5cc3-4b4b-b209-3d89f2f349ef/kube-rbac-proxy/0.log"
Mar 18 09:19:20.895482 master-0 kubenswrapper[28766]: I0318 09:19:20.895436 28766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_thanos-querier-59d6555497-hckn8_bcb0b60f-5cc3-4b4b-b209-3d89f2f349ef/prom-label-proxy/0.log"
Mar 18 09:19:20.909288 master-0 kubenswrapper[28766]: I0318 09:19:20.909237 28766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_thanos-querier-59d6555497-hckn8_bcb0b60f-5cc3-4b4b-b209-3d89f2f349ef/kube-rbac-proxy-rules/0.log"
Mar 18 09:19:20.924553 master-0 kubenswrapper[28766]: I0318 09:19:20.924492 28766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_thanos-querier-59d6555497-hckn8_bcb0b60f-5cc3-4b4b-b209-3d89f2f349ef/kube-rbac-proxy-metrics/0.log"
Mar 18 09:19:21.763161 master-0 kubenswrapper[28766]: I0318 09:19:21.763070 28766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-autoscaler-operator-866dc4744-lxj7x_ffc5379c-651f-490c-90f4-1285b9093596/kube-rbac-proxy/0.log"
Mar 18 09:19:21.794275 master-0 kubenswrapper[28766]: I0318 09:19:21.794220 28766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-autoscaler-operator-866dc4744-lxj7x_ffc5379c-651f-490c-90f4-1285b9093596/cluster-autoscaler-operator/0.log"
Mar 18 09:19:21.806141 master-0 kubenswrapper[28766]: I0318 09:19:21.806085 28766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-6f69995874-cf6qn_97730ec2-e6f1-4f8c-b85c-3c10623d06ce/cluster-baremetal-operator/1.log"
Mar 18 09:19:21.807290 master-0 kubenswrapper[28766]: I0318 09:19:21.807256 28766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-6f69995874-cf6qn_97730ec2-e6f1-4f8c-b85c-3c10623d06ce/cluster-baremetal-operator/2.log"
Mar 18 09:19:21.816716 master-0 kubenswrapper[28766]: I0318 09:19:21.816675 28766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-6f69995874-cf6qn_97730ec2-e6f1-4f8c-b85c-3c10623d06ce/baremetal-kube-rbac-proxy/0.log"
Mar 18 09:19:21.831432 master-0 kubenswrapper[28766]: I0318 09:19:21.831376 28766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-6f97756bc8-z9n9c_d6fe8ee6-737e-438a-8d9d-1ec712f6bacf/control-plane-machine-set-operator/0.log"
Mar 18 09:19:21.831897 master-0 kubenswrapper[28766]: I0318 09:19:21.831828 28766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-6f97756bc8-z9n9c_d6fe8ee6-737e-438a-8d9d-1ec712f6bacf/control-plane-machine-set-operator/1.log"
Mar 18 09:19:21.844453 master-0 kubenswrapper[28766]: I0318 09:19:21.844406 28766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-6fbb6cf6f9-z6nw9_b9768e50-c883-47b0-b319-851fa53ac19a/kube-rbac-proxy/0.log"
Mar 18 09:19:21.856714 master-0 kubenswrapper[28766]: I0318 09:19:21.856635 28766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-6fbb6cf6f9-z6nw9_b9768e50-c883-47b0-b319-851fa53ac19a/machine-api-operator/0.log"
Mar 18 09:19:22.396996 master-0 kubenswrapper[28766]: I0318 09:19:22.396800 28766 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-7bb4cc7c98-vthq7_79b20ae6-1660-40b8-9a44-2a3989042d82/controller/0.log"
Mar 18 09:19:22.416278 master-0 kubenswrapper[28766]: I0318 09:19:22.413241 28766 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-7bb4cc7c98-vthq7_79b20ae6-1660-40b8-9a44-2a3989042d82/kube-rbac-proxy/0.log"
Mar 18 09:19:22.441051 master-0 kubenswrapper[28766]: I0318 09:19:22.441005 28766 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-czkll_559d9b30-44e9-4cdd-8c46-7cab6e8f2285/controller/0.log"
Mar 18 09:19:22.503075 master-0 kubenswrapper[28766]: I0318 09:19:22.503023 28766 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-czkll_559d9b30-44e9-4cdd-8c46-7cab6e8f2285/frr/0.log"
Mar 18 09:19:22.514615 master-0 kubenswrapper[28766]: I0318 09:19:22.514570 28766 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-czkll_559d9b30-44e9-4cdd-8c46-7cab6e8f2285/reloader/0.log"
Mar 18 09:19:22.530482 master-0 kubenswrapper[28766]: I0318 09:19:22.529607 28766 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-czkll_559d9b30-44e9-4cdd-8c46-7cab6e8f2285/frr-metrics/0.log"
Mar 18 09:19:22.574963 master-0 kubenswrapper[28766]: I0318 09:19:22.574653 28766 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-czkll_559d9b30-44e9-4cdd-8c46-7cab6e8f2285/kube-rbac-proxy/0.log"
Mar 18 09:19:22.598384 master-0 kubenswrapper[28766]: I0318 09:19:22.598335 28766 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-czkll_559d9b30-44e9-4cdd-8c46-7cab6e8f2285/kube-rbac-proxy-frr/0.log"
Mar 18 09:19:22.613625 master-0 kubenswrapper[28766]: I0318 09:19:22.613567 28766 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-czkll_559d9b30-44e9-4cdd-8c46-7cab6e8f2285/cp-frr-files/0.log"
Mar 18 09:19:22.630332 master-0 kubenswrapper[28766]: I0318 09:19:22.630281 28766 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-czkll_559d9b30-44e9-4cdd-8c46-7cab6e8f2285/cp-reloader/0.log"
Mar 18 09:19:22.645682 master-0 kubenswrapper[28766]: I0318 09:19:22.645635 28766 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-czkll_559d9b30-44e9-4cdd-8c46-7cab6e8f2285/cp-metrics/0.log"
Mar 18 09:19:22.658573 master-0 kubenswrapper[28766]: I0318 09:19:22.658471 28766 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-bcc4b6f68-nqlq6_54a7f143-e51f-475d-9c2d-21f1c3979705/frr-k8s-webhook-server/0.log"
Mar 18 09:19:22.690458 master-0 kubenswrapper[28766]: I0318 09:19:22.690408 28766 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-65f5d58555-j282b_27eeeb04-faa9-4d56-81fa-a890a202cdd4/manager/0.log"
Mar 18 09:19:22.704514 master-0 kubenswrapper[28766]: I0318 09:19:22.704458 28766 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-88b68f8d8-w9g9k_f8b3af47-0f7b-422a-905a-0e3e139e2f7e/webhook-server/0.log"
Mar 18 09:19:22.804882 master-0 kubenswrapper[28766]: I0318 09:19:22.804330 28766 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-g7bjn_8f4f0018-7ed3-48a7-bb99-0e6de8fc38fc/speaker/0.log"
Mar 18 09:19:22.817542 master-0 kubenswrapper[28766]: I0318 09:19:22.817495 28766 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-g7bjn_8f4f0018-7ed3-48a7-bb99-0e6de8fc38fc/kube-rbac-proxy/0.log"
Mar 18 09:19:24.108808 master-0 kubenswrapper[28766]: I0318 09:19:24.108750 28766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-node-tuning-operator_cluster-node-tuning-operator-598fbc5f8f-tj9b9_bd8ee9ae-4765-46bc-84d7-b5857fc3fb4a/cluster-node-tuning-operator/0.log"
Mar 18 09:19:24.128737 master-0 kubenswrapper[28766]: I0318 09:19:24.128669 28766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-node-tuning-operator_tuned-zzqc6_f826efe0-60a1-4465-b8d0-d4069ed507a1/tuned/0.log"
Mar 18 09:19:24.816027 master-0 kubenswrapper[28766]: I0318 09:19:24.815937 28766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-8ff7d675-s8bf2_530e8baf-e772-4beb-9e9c-62026f58fe64/prometheus-operator/0.log"
Mar 18 09:19:24.831270 master-0 kubenswrapper[28766]: I0318 09:19:24.831185 28766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-599f646c7d-2xxkb_6cc17895-7455-4175-b335-898329eb83af/prometheus-operator-admission-webhook/0.log"
Mar 18 09:19:24.849403 master-0 kubenswrapper[28766]: I0318 09:19:24.849341 28766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-599f646c7d-s6ngz_51a5655a-e87e-4e56-963d-83bdee4a2124/prometheus-operator-admission-webhook/0.log"
Mar 18 09:19:24.880305 master-0 kubenswrapper[28766]: I0318 09:19:24.880248 28766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-6dd7dd855f-gwzhl_76c81539-3333-4c7d-8dc0-5168188d910f/operator/0.log"
Mar 18 09:19:24.897738 master-0 kubenswrapper[28766]: I0318 09:19:24.897693 28766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-69f4f7555f-6tjsm_e7184374-f735-4910-b013-4248e1c24f8a/perses-operator/0.log"
Mar 18 09:19:26.301322 master-0 kubenswrapper[28766]: I0318 09:19:26.301251 28766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver-operator_kube-apiserver-operator-8b68b9d9b-jshg7_5982111d-f4c6-4335-9b40-3142758fc2bc/kube-apiserver-operator/1.log"
Mar 18 09:19:26.309344 master-0 kubenswrapper[28766]: I0318 09:19:26.309280 28766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver-operator_kube-apiserver-operator-8b68b9d9b-jshg7_5982111d-f4c6-4335-9b40-3142758fc2bc/kube-apiserver-operator/0.log"
Mar 18 09:19:26.940018 master-0 kubenswrapper[28766]: I0318 09:19:26.939969 28766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-1-master-0_1edfa49b-d0e7-4324-aace-b115b41ddae0/installer/0.log"
Mar 18 09:19:26.962863 master-0 kubenswrapper[28766]: I0318 09:19:26.962723 28766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-3-master-0_e0d127be-2d13-449b-915b-2d49052baf02/installer/0.log"
Mar 18 09:19:26.995966 master-0 kubenswrapper[28766]: I0318 09:19:26.995893 28766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-5-master-0_b5d596ea-c73d-4619-b3a5-fd52d3bebedd/installer/0.log" Mar 18 
09:19:27.145618 master-0 kubenswrapper[28766]: I0318 09:19:27.145566 28766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_d5f502b117c7c8479f7f20848a50fec0/kube-apiserver/0.log" Mar 18 09:19:27.154454 master-0 kubenswrapper[28766]: I0318 09:19:27.154412 28766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_d5f502b117c7c8479f7f20848a50fec0/kube-apiserver-cert-syncer/0.log" Mar 18 09:19:27.166910 master-0 kubenswrapper[28766]: I0318 09:19:27.166829 28766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_d5f502b117c7c8479f7f20848a50fec0/kube-apiserver-cert-regeneration-controller/0.log" Mar 18 09:19:27.181800 master-0 kubenswrapper[28766]: I0318 09:19:27.181746 28766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_d5f502b117c7c8479f7f20848a50fec0/kube-apiserver-insecure-readyz/0.log" Mar 18 09:19:27.197131 master-0 kubenswrapper[28766]: I0318 09:19:27.197089 28766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_d5f502b117c7c8479f7f20848a50fec0/kube-apiserver-check-endpoints/0.log" Mar 18 09:19:27.206365 master-0 kubenswrapper[28766]: I0318 09:19:27.206324 28766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_d5f502b117c7c8479f7f20848a50fec0/setup/0.log" Mar 18 09:19:27.899467 master-0 kubenswrapper[28766]: I0318 09:19:27.899409 28766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-catalogd_catalogd-controller-manager-6864dc98f7-phjp8_43fbd379-dd1e-4287-bd76-fd3ec51cde43/kube-rbac-proxy/0.log" Mar 18 09:19:27.912268 master-0 kubenswrapper[28766]: I0318 09:19:27.912217 28766 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-catalogd_catalogd-controller-manager-6864dc98f7-phjp8_43fbd379-dd1e-4287-bd76-fd3ec51cde43/manager/1.log" Mar 18 09:19:27.913193 master-0 kubenswrapper[28766]: I0318 09:19:27.913152 28766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-catalogd_catalogd-controller-manager-6864dc98f7-phjp8_43fbd379-dd1e-4287-bd76-fd3ec51cde43/manager/2.log" Mar 18 09:19:27.940767 master-0 kubenswrapper[28766]: I0318 09:19:27.940714 28766 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-545d4d4674-gn68n_427e8f18-69c0-461d-8322-cb64dd0ad33f/cert-manager-controller/0.log" Mar 18 09:19:27.953068 master-0 kubenswrapper[28766]: I0318 09:19:27.953017 28766 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-5545bd876-69pmp_ed34c608-6097-46a6-9539-3308a0526860/cert-manager-cainjector/0.log" Mar 18 09:19:27.964962 master-0 kubenswrapper[28766]: I0318 09:19:27.964868 28766 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-6888856db4-8ds5h_a4101d92-1e4e-48a5-af55-6388661e3800/cert-manager-webhook/0.log" Mar 18 09:19:28.498355 master-0 kubenswrapper[28766]: I0318 09:19:28.498307 28766 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-545d4d4674-gn68n_427e8f18-69c0-461d-8322-cb64dd0ad33f/cert-manager-controller/0.log" Mar 18 09:19:28.514601 master-0 kubenswrapper[28766]: I0318 09:19:28.514546 28766 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-5545bd876-69pmp_ed34c608-6097-46a6-9539-3308a0526860/cert-manager-cainjector/0.log" Mar 18 09:19:28.534279 master-0 kubenswrapper[28766]: I0318 09:19:28.534231 28766 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-6888856db4-8ds5h_a4101d92-1e4e-48a5-af55-6388661e3800/cert-manager-webhook/0.log" Mar 18 09:19:29.034948 master-0 kubenswrapper[28766]: I0318 
09:19:29.034894 28766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-86f58fcf4-tv685_d8cbb83c-f7ff-44b0-afe0-dca20fab3ebf/nmstate-console-plugin/0.log" Mar 18 09:19:29.050605 master-0 kubenswrapper[28766]: I0318 09:19:29.050559 28766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-sngqk_f17c8a3e-2a67-4ca4-80d6-ae4177b03359/nmstate-handler/0.log" Mar 18 09:19:29.065330 master-0 kubenswrapper[28766]: I0318 09:19:29.065288 28766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-9b8c8685d-882nf_0a7f9328-e0f5-4f65-83a4-0d5d76b9a1ae/nmstate-metrics/0.log" Mar 18 09:19:29.084328 master-0 kubenswrapper[28766]: I0318 09:19:29.084266 28766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-9b8c8685d-882nf_0a7f9328-e0f5-4f65-83a4-0d5d76b9a1ae/kube-rbac-proxy/0.log" Mar 18 09:19:29.108327 master-0 kubenswrapper[28766]: I0318 09:19:29.107533 28766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-796d4cfff4-p6fqw_c41ac234-9a6f-410f-b4f1-1825ada66e14/nmstate-operator/0.log" Mar 18 09:19:29.130651 master-0 kubenswrapper[28766]: I0318 09:19:29.130597 28766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-5f558f5558-6r6gh_2de37539-f3d7-47cd-a12e-4285ac38f0db/nmstate-webhook/0.log" Mar 18 09:19:29.714489 master-0 kubenswrapper[28766]: I0318 09:19:29.714441 28766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-xpzrz_f9fa104a-4979-4023-8d7e-a965f11bc7db/kube-multus-additional-cni-plugins/0.log" Mar 18 09:19:29.730216 master-0 kubenswrapper[28766]: I0318 09:19:29.730158 28766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-xpzrz_f9fa104a-4979-4023-8d7e-a965f11bc7db/egress-router-binary-copy/0.log" Mar 18 
09:19:29.741337 master-0 kubenswrapper[28766]: I0318 09:19:29.741281 28766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-xpzrz_f9fa104a-4979-4023-8d7e-a965f11bc7db/cni-plugins/0.log" Mar 18 09:19:29.754776 master-0 kubenswrapper[28766]: I0318 09:19:29.754742 28766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-xpzrz_f9fa104a-4979-4023-8d7e-a965f11bc7db/bond-cni-plugin/0.log" Mar 18 09:19:29.773041 master-0 kubenswrapper[28766]: I0318 09:19:29.772998 28766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-xpzrz_f9fa104a-4979-4023-8d7e-a965f11bc7db/routeoverride-cni/0.log" Mar 18 09:19:29.787057 master-0 kubenswrapper[28766]: I0318 09:19:29.787011 28766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-xpzrz_f9fa104a-4979-4023-8d7e-a965f11bc7db/whereabouts-cni-bincopy/0.log" Mar 18 09:19:29.801699 master-0 kubenswrapper[28766]: I0318 09:19:29.801651 28766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-xpzrz_f9fa104a-4979-4023-8d7e-a965f11bc7db/whereabouts-cni/0.log" Mar 18 09:19:29.817059 master-0 kubenswrapper[28766]: I0318 09:19:29.817011 28766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-admission-controller-58c9f8fc64-zgrts_e0bb044f-5a4e-4981-8084-91348ce1a56a/multus-admission-controller/0.log" Mar 18 09:19:29.833416 master-0 kubenswrapper[28766]: I0318 09:19:29.833374 28766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-admission-controller-58c9f8fc64-zgrts_e0bb044f-5a4e-4981-8084-91348ce1a56a/kube-rbac-proxy/0.log" Mar 18 09:19:29.927648 master-0 kubenswrapper[28766]: I0318 09:19:29.927575 28766 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-multus_multus-bpf5c_fd692e8e-9a41-4a6d-abbe-ef8e28b8b2a4/kube-multus/0.log" Mar 18 09:19:29.957456 master-0 kubenswrapper[28766]: I0318 09:19:29.957400 28766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_network-metrics-daemon-6x85n_d858bfd6-e69b-4c93-a6d7-95cc0fc3ca29/network-metrics-daemon/0.log" Mar 18 09:19:29.972745 master-0 kubenswrapper[28766]: I0318 09:19:29.972471 28766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_network-metrics-daemon-6x85n_d858bfd6-e69b-4c93-a6d7-95cc0fc3ca29/kube-rbac-proxy/0.log" Mar 18 09:19:30.502778 master-0 kubenswrapper[28766]: I0318 09:19:30.502726 28766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-storage_lvms-operator-5bdfbd4c57-vn2r8_22e6caaa-74bd-42d6-b2b6-21900a13bbb8/manager/0.log" Mar 18 09:19:30.531794 master-0 kubenswrapper[28766]: I0318 09:19:30.531744 28766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-storage_vg-manager-h82tw_91e21746-efd7-40be-98d2-e4ef28aa2713/vg-manager/1.log" Mar 18 09:19:30.535460 master-0 kubenswrapper[28766]: I0318 09:19:30.535416 28766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-storage_vg-manager-h82tw_91e21746-efd7-40be-98d2-e4ef28aa2713/vg-manager/0.log" Mar 18 09:19:31.059766 master-0 kubenswrapper[28766]: I0318 09:19:31.059702 28766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_installer-2-master-0_28d2bb97-ff93-4772-96fd-318fa62e3a87/installer/0.log" Mar 18 09:19:31.088108 master-0 kubenswrapper[28766]: I0318 09:19:31.088041 28766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_installer-3-master-0_62a1fcda-ce2f-4d14-ab37-10a21e30fc30/installer/0.log" Mar 18 09:19:31.111203 master-0 kubenswrapper[28766]: I0318 09:19:31.111165 28766 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-controller-manager_installer-4-master-0_3068e569-5a4e-4fc3-88f4-5684d093c8e6/installer/0.log" Mar 18 09:19:31.132458 master-0 kubenswrapper[28766]: I0318 09:19:31.132399 28766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_installer-5-master-0_0ac062ca-3c0f-4695-88f9-429c01f79169/installer/0.log" Mar 18 09:19:31.327004 master-0 kubenswrapper[28766]: I0318 09:19:31.326796 28766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_1b0af84e08c0ebb6ef970331bd9379be/kube-controller-manager/0.log" Mar 18 09:19:31.372045 master-0 kubenswrapper[28766]: I0318 09:19:31.371927 28766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_1b0af84e08c0ebb6ef970331bd9379be/cluster-policy-controller/0.log" Mar 18 09:19:31.382653 master-0 kubenswrapper[28766]: I0318 09:19:31.382611 28766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_1b0af84e08c0ebb6ef970331bd9379be/kube-controller-manager-cert-syncer/0.log" Mar 18 09:19:31.395179 master-0 kubenswrapper[28766]: I0318 09:19:31.395118 28766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_1b0af84e08c0ebb6ef970331bd9379be/kube-controller-manager-recovery-controller/0.log" Mar 18 09:19:32.016960 master-0 kubenswrapper[28766]: I0318 09:19:32.016893 28766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager-operator_kube-controller-manager-operator-ff989d6cc-fxn82_260c8aa5-a288-4ee8-b671-f97e90a2f39c/kube-controller-manager-operator/1.log" Mar 18 09:19:32.020827 master-0 kubenswrapper[28766]: I0318 09:19:32.020788 28766 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-controller-manager-operator_kube-controller-manager-operator-ff989d6cc-fxn82_260c8aa5-a288-4ee8-b671-f97e90a2f39c/kube-controller-manager-operator/0.log" Mar 18 09:19:33.124148 master-0 kubenswrapper[28766]: I0318 09:19:33.124092 28766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-3-master-0_c6fb9336-3f19-4220-93ee-a5a61e26340b/installer/0.log" Mar 18 09:19:33.143604 master-0 kubenswrapper[28766]: I0318 09:19:33.143556 28766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-6-master-0_bfb95119-ed96-428c-8a9b-7e29f48b5d4b/installer/0.log" Mar 18 09:19:33.180553 master-0 kubenswrapper[28766]: I0318 09:19:33.180498 28766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-master-0_11a2f93448b9d54da9854663936e2b73/kube-scheduler/0.log" Mar 18 09:19:33.193168 master-0 kubenswrapper[28766]: I0318 09:19:33.193121 28766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-master-0_11a2f93448b9d54da9854663936e2b73/kube-scheduler-cert-syncer/0.log" Mar 18 09:19:33.209310 master-0 kubenswrapper[28766]: I0318 09:19:33.209270 28766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-master-0_11a2f93448b9d54da9854663936e2b73/kube-scheduler-recovery-controller/0.log" Mar 18 09:19:33.227230 master-0 kubenswrapper[28766]: I0318 09:19:33.227183 28766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-master-0_11a2f93448b9d54da9854663936e2b73/wait-for-host-port/0.log" Mar 18 09:19:33.244441 master-0 kubenswrapper[28766]: I0318 09:19:33.244379 28766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_revision-pruner-6-master-0_0fcf6eb0-f4dd-41dd-86ee-9bcb9996546d/pruner/0.log" Mar 18 09:19:33.544574 
master-0 kubenswrapper[28766]: I0318 09:19:33.544453 28766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-86f58fcf4-tv685_d8cbb83c-f7ff-44b0-afe0-dca20fab3ebf/nmstate-console-plugin/0.log" Mar 18 09:19:33.565841 master-0 kubenswrapper[28766]: I0318 09:19:33.565801 28766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-sngqk_f17c8a3e-2a67-4ca4-80d6-ae4177b03359/nmstate-handler/0.log" Mar 18 09:19:33.576190 master-0 kubenswrapper[28766]: I0318 09:19:33.576142 28766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-9b8c8685d-882nf_0a7f9328-e0f5-4f65-83a4-0d5d76b9a1ae/nmstate-metrics/0.log" Mar 18 09:19:33.592448 master-0 kubenswrapper[28766]: I0318 09:19:33.592410 28766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-9b8c8685d-882nf_0a7f9328-e0f5-4f65-83a4-0d5d76b9a1ae/kube-rbac-proxy/0.log" Mar 18 09:19:33.609246 master-0 kubenswrapper[28766]: I0318 09:19:33.609197 28766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-796d4cfff4-p6fqw_c41ac234-9a6f-410f-b4f1-1825ada66e14/nmstate-operator/0.log" Mar 18 09:19:33.651423 master-0 kubenswrapper[28766]: I0318 09:19:33.651377 28766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-5f558f5558-6r6gh_2de37539-f3d7-47cd-a12e-4285ac38f0db/nmstate-webhook/0.log" Mar 18 09:19:33.963554 master-0 kubenswrapper[28766]: I0318 09:19:33.963508 28766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler-operator_openshift-kube-scheduler-operator-dddff6458-9p4bb_8a6ab2be-d018-4fd5-bfbb-6b88aec28663/kube-scheduler-operator-container/1.log" Mar 18 09:19:33.967298 master-0 kubenswrapper[28766]: I0318 09:19:33.967250 28766 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-scheduler-operator_openshift-kube-scheduler-operator-dddff6458-9p4bb_8a6ab2be-d018-4fd5-bfbb-6b88aec28663/kube-scheduler-operator-container/0.log" Mar 18 09:19:34.529299 master-0 kubenswrapper[28766]: I0318 09:19:34.529249 28766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-storage-version-migrator_migrator-8487694857-ld5l8_8ebaeb8d-8fbd-4638-9516-fc4e90ba2fa8/migrator/0.log" Mar 18 09:19:34.539689 master-0 kubenswrapper[28766]: I0318 09:19:34.539626 28766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-storage-version-migrator_migrator-8487694857-ld5l8_8ebaeb8d-8fbd-4638-9516-fc4e90ba2fa8/graceful-termination/0.log" Mar 18 09:19:35.029604 master-0 kubenswrapper[28766]: I0318 09:19:35.029545 28766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-storage-version-migrator-operator_kube-storage-version-migrator-operator-6bb5bfb6fd-s7szg_ec11012b-536a-422f-afc4-d2d0fd4b67fb/kube-storage-version-migrator-operator/1.log" Mar 18 09:19:35.030902 master-0 kubenswrapper[28766]: I0318 09:19:35.030836 28766 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-storage-version-migrator-operator_kube-storage-version-migrator-operator-6bb5bfb6fd-s7szg_ec11012b-536a-422f-afc4-d2d0fd4b67fb/kube-storage-version-migrator-operator/0.log"